HACKER Q&A
📣 yumraj

What is currently the best programming LLM (copilot) subscription?


Primarily interested in Go, Python, Swift, Android (Java) and Web stuff. So, relatively mainstream languages, nothing too esoteric.

What have people found to be the best copilot paid subscription, at the moment?


  👤 cheema33 Accepted Answer ✓
I have been a big user of ChatGPT-4 ever since it came out. I have used it in various forms: through the chatbot, through GitHub Copilot, etc. For the past year, I have looked high and low for an alternative that performed better. Nothing came close, until a few days ago, when Anthropic released Claude 3. For software development work, it is very, very impressive. I've been running GPT-4 and Claude 3 Opus side by side for a few days, and I have now decided to cancel all AI subscriptions except Claude 3.

For software development work, no other model comes close.


👤 SOLAR_FIELDS
OpenAI API key with GPT4 plus aider[1]: https://github.com/paul-gauthier/aider

Compared against the actual product called “Copilot”, I would say this is actually useful, whereas Copilot I would describe with words like “glorified autocomplete” and “slightly better IntelliSense”. It's only really good for saving you some rote typing periodically.

The primary limit with “aider” is the GPT-4 context window, so the main learning curve is really about learning to work within that constraint.
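Working within the context window mostly comes down to budgeting which files go into the prompt. A minimal sketch of that idea (the 8k window and the ~4 chars/token heuristic are my own illustrative assumptions, not anything aider itself does):

```python
# Sketch of context-window budgeting (my own illustration; the window size
# and ~4 chars/token heuristic are assumptions, not aider internals).

CONTEXT_TOKENS = 8192        # assumed GPT-4 window for this sketch
RESERVED_FOR_REPLY = 1024    # leave room for the model's answer

def estimate_tokens(text: str) -> int:
    """Very rough estimate: ~4 characters per token for English/code."""
    return max(1, len(text) // 4)

def files_that_fit(files: dict[str, str]) -> list[str]:
    """Greedily pick files, smallest first, until the budget is spent."""
    budget = CONTEXT_TOKENS - RESERVED_FOR_REPLY
    chosen = []
    for name, content in sorted(files.items(), key=lambda kv: len(kv[1])):
        cost = estimate_tokens(content)
        if cost <= budget:
            chosen.append(name)
            budget -= cost
    return chosen
```

With an 8k budget, a 40,000-character file (~10,000 tokens) simply doesn't fit alongside anything else, which is the kind of trade-off you end up managing by hand.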

I have been curious about Sourcegraph's “Cody” product, if anyone here has tried it.


👤 mypalmike
In addition to paid subscriptions, you might want to consider running an LLM locally. One of a number of projects enabling this approach is "continue" - https://github.com/continuedev/continue
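For the local route, tools in this space typically talk to the model over an OpenAI-compatible HTTP API served by something like Ollama. A hedged sketch of what such a request body looks like (the endpoint and model name are assumptions for illustration, not taken from the "continue" project itself):

```python
import json

# Sketch: building the JSON body for an OpenAI-style chat completion call.
# Assumption: a local server (e.g. Ollama) exposing an OpenAI-compatible
# /v1/chat/completions endpoint; "llama3" is a placeholder model name.

def chat_request(model: str, prompt: str) -> bytes:
    """Build the request body an OpenAI-compatible local server expects."""
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }
    return json.dumps(body).encode("utf-8")

# You would then POST this to e.g.
# http://localhost:11434/v1/chat/completions
```

Because the wire format matches OpenAI's, the same editor extension can usually point at either a paid API or a local model just by changing the base URL.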

👤 vector_rotcev
The AI Assistant in JetBrains IDEs is the best I've encountered, in PyCharm specifically.

I've tried using my OpenAI subscription directly with GPT-4, but copy-pasting between the browser and the IDE, plus the lack of context GPT has of my code, is limiting.

I've tried Copilot, but it just produces large amounts of junk code, and it gets irritating.

JetBrains' AI Assistant is built into the IDE I use; it melds well with all the pre-existing coding assistance the IDE offers, the conversation interface is useful and can answer questions about specific parts of the code (I habitually highlight whatever I'm interested in and ask about it), and it's good when I ask it to write tests.


👤 drewnick
Phind released a new model about a week or two back, and I added their VS Code plug-in.

I really like it because it makes using GPT-4 or their proprietary model much easier than copying and pasting between a browser and VS Code.

It is a paid subscription, but it includes 500 messages per day, which is way better than the time window on ChatGPT for me.


👤 shakabrah
Cursor is really great. It will do codebase search as well as actually apply the code it suggests from the chat interface. It has some other neat features too, like checking your staged changes for bugs.

👤 drdaeman
ChatGPT-4 with manual explanations and code snippets seems to work best so far. IDE-integrated tools have potential, but are still really immature.

In particular, I haven't found any tools that can do any of the following:

- Comprehend a large codebase. All assistants seem to have a pretty narrow scope for context, like the current file or a few of its neighbors. E.g. they will write code that already exists somewhere as a ready-to-use utility function, because that function isn't mentioned anywhere near the location I'm working on.

- Automate anything. IDE integration is extremely minimal in everything I've seen so far. A copilot can spew out a new version of a piece of code that can be placed with a click, but that's the extent of it. It otherwise can't replace a human in the loop, even when the work is simple, trivial, and highly repetitive. E.g. a regex-powered "replace all" still cannot be driven by a natural language query.

- Do any repetitive work without making mistakes or hallucinating something. E.g. a copilot can try to generate a unit test for me, and the overall structure can be quite decent, but I can never be sure it won't miss or invent something if the test involves creating a complex object and checking all its fields after some transformation.
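For contrast, this is the kind of deterministic, regex-powered "replace all" the list above has in mind, sketched in Python; the accessor-renaming rule is a hypothetical example, not taken from any of these tools:

```python
import re

# Illustration of a mechanical "regex-powered replace all": rename every
# getFoo()-style accessor to snake_case. Once the pattern is right, this
# needs no human (or LLM) in the loop and never hallucinates.

def camel_getters_to_snake(source: str) -> str:
    def repl(m: re.Match) -> str:
        name = m.group(1)
        # Insert "_" before every interior capital, then lowercase.
        snake = re.sub(r"(?<!^)(?=[A-Z])", "_", name).lower()
        return f"get_{snake}("
    return re.sub(r"\bget([A-Z]\w*)\(", repl, source)

print(camel_getters_to_snake("user.getFirstName() + user.getLastName()"))
# → user.get_first_name() + user.get_last_name()
```

The complaint above is precisely that you cannot yet say "rename all camelCase getters to snake_case" in natural language and have the assistant perform this edit reliably across a codebase.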


👤 meeech
I've been pretty happy with Codeium's free offering. It does good context completions for me and, based on context, gives me the next bits I'm about to type.

I used Cody at work, and the best use I found for it was chore work: I could highlight a struct (for example) and give it instructions on how to transform it, and it did a good job there.


👤 thiodrio
Consider the Phind Model: > "Phind Model beats GPT-4 at coding, with GPT-3.5 speed and 16k context" (https://phind.com)

https://news.ycombinator.com/item?id=38088538


👤 codelikeawolf
If you use JetBrains products, I've been pretty happy with their AI assistant. I feel like it's less disruptive than GitHub Copilot. If I ask it to write some code, it does a pretty decent job, otherwise it stays out of my way.

👤 burkaman
I don't think it's the best assistant by ability, but if you're like me and you bought a lifetime Tabnine subscription for $29 when it debuted on HN in 2018 (https://news.ycombinator.com/item?id=18393364), maybe reinstall it and try it out. I've been using it and I would say it is better than nothing.

👤 bernaferrari
For me, personally, Copilot is the best because you are typing and it is just suggesting. Very often I type something, wait 2 seconds for its answer (knowing it is going to get it right 80% of the time) and press tab. Sure, it misses a ton, but I see it as a great autocorrect that is always there trying to help. Now, copilot chat and everything else are very bad. I don't like them.

If you are "intentional" I usually just use ChatGPT-4 directly. I've had a super pleasant experience with Cursor, too. You can just open Cursor and say "write readme for me", it will scan your project and right a decent readme. It is also eeeeextremely goooood at "I need to know where in the codebase is doing this" and it finds. But for day to day I still prefer chatgpt 4.

So I would say:

- If you want a nice autocorrect, copilot

- If you want something that knows your project and tries to help, cursor

- If you just want something smart, chatgpt with gpt-4.

I would say copilot lets me be 15% faster daily, and gpt-4 about 30% faster (when I need it), but I need it less often - maybe twice per day.


👤 __rito__
Even the free ChatGPT-3.5 beats paid GitHub Copilot in code quality and problem solving.

I used Copilot for a while, and it's no more than a very smart autofill that's good at generating boilerplate.

ChatGPT actually solves (some, minor) problems. I kept using Copilot because of comfort, and it having the context. But I stopped using it 2-3 months ago, and haven't looked back.


👤 simpaticoder
I was an early adopter of GitHub Copilot, but I quickly turned it off because of the noise. I really like AI for "gap filling" (for example, I used AI extensively to help me write my bash deployment scripts for simpatico.io) and for porting tasks, both of which lend themselves to copy/paste into a browser.

👤 geepytee
I was asking myself the same question not so long ago, because GitHub Copilot's bugs kept bothering me, and that's what got us to go into YC and build https://double.bot

If what you are looking for is a high-quality UX plus access to the most capable models (today, that'd be the GPT-4 killer, Claude 3 Opus), then I think you'll like what we've built.

Double has similar features to all the other copilots (code autocomplete, chat, etc.), but we've put particular care into getting the small details right (even the smallest things, like making sure our autocomplete closes brackets appropriately). If you're coming from GitHub Copilot, you should check out this side-by-side comparison: https://docs.double.bot/copilot


👤 ado__dev
I'm biased, but Sourcegraph Cody has changed the way I write code.

https://sourcegraph.com/cody

And we just shipped support for Claude 3 (Opus and Sonnet) as well as using any local LLM via Ollama for completions and chat.


👤 thekevan
This isn't completely answering the question you asked, but https://www.blackbox.ai/ was my favorite after testing out some free ones.

I know I liked it better than Tabnine, CodeGPT, and Codeium in VS Code. I'm pretty sure I tested a few others from the top of the list when you search the extension marketplace, but I forget which ones.

Blackbox just seems more intuitive in use and had better answers. They don't have many followers on Twitter; I'm surprised more people aren't talking about it.


👤 adt
Have a look at StarCoder 2. And here's my db of 300+ LLMs:

https://lifearchitect.ai/models-table/


👤 westoncb
I'm using supermaven these days. Big enough context to understand my full codebase well (I am often working on relatively small projects though), and it's super fast. They trained a custom model, I believe on edit sequences.

These copilots are way more useful when their knowledge context is cross-file; anything restricted to the current active file, or that makes me do the work of selecting some file subset, I'm going to pass on.


👤 IanCal
www.phind.com is what I've found to be the best in general, using searches. Their recent 70B model seems really good.

👤 srameshc
Duet AI from Google. I initially tried Copilot, and it was sometimes slow in responding. Duet, I feel, is hyper fast, and it sometimes suggests elegant solutions. I don't know if that has to do with it being trained on Google's own codebase, but I felt this particularly with Go code.

👤 dimal
I’m pretty happy with Continue, which is a free VS Code extension. You can use an OpenAI API key or local models. I don’t think it does autocomplete, just pair programming, but I’ve found AI autocomplete to be more of a distraction than anything, so it’s good for me.

👤 brunooliv
Is Claude 3 pay-per-use or a fixed monthly fee? I've used it a bit with the free 5 USD they give you, and it's looking so, so much better than GPT-4… the problem is I have the feeling it's also far more expensive.

👤 cyberbiosecure
For the answer of greatest validity, you should e.g. check out the "HumanEval" LLM benchmark on the HuggingFace website. That is one of the best objective sources of info on this issue. Currently Claude 3 is the best, far superior to other models (85% of code tasks done correctly, vs. approx. 65% for ChatGPT-4). That is a giant difference (35% errors vs. 15%, so Claude 3 produces roughly 2.3x fewer errors in terms of code quality).
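Working through the error-rate arithmetic from the pass rates quoted above (85% vs. roughly 65%):

```python
# Convert the quoted HumanEval pass rates into error rates and compare.
claude_err = 1 - 0.85   # 15% of tasks failed
gpt4_err = 1 - 0.65     # 35% of tasks failed

ratio = gpt4_err / claude_err
print(round(ratio, 2))  # → 2.33, i.e. roughly 2.3x fewer failed tasks
```

So the gap is closer to 2.3x than 2.5x, though either way it is a large difference.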

👤 fragmede
In my limited testing, Greptile, posted here recently, was really good for understanding an existing codebase, which is a large part of real-world programming.

👤 oksurewhynot
Supermaven slaps

👤 dankwizard
ChatGPT-4, but I'm biased (I pay to use it).

👤 AlexanderNull
Neither GitHub Copilot nor GPT-4 is worth your time. At best they partially guess the name of a function you're thinking about typing; at worst they give you an almost-correct answer. I've been shocked by how close those models will get to almost understanding what you're attempting to do while still fundamentally getting it wrong. Last month, after realizing I was spending more time correcting suggestions than I was saving, I stopped using them, and I'll need to see some major improvements before I can feel comfortable using them again.