These days I will occasionally use ChatGPT to ask a coding question (alongside Stack), but automatic AI autocomplete just gets in the way for me. I generally turn off all autocomplete. The time it saves when it's right is overshadowed by the time it takes to manually fix its errors, especially when it takes me out of the flow.
I don't doubt that it can write higher quality code than I can when properly prompted, but doing so via autocomplete, with a high error rate, was more disruptive to this human than helpful. YMMV.
It does an excellent job on Kotlin, especially tests. We find it's most fluent with Python, pretty good with JS and TS, but we found several security mistakes in its PHP, etc.
For boilerplate, accuracy seems a little higher than a human's. It wrote 1000 lines of mocks & test code in a day for me. But for creative work it's worse; there's some hallucination.
There's a marked increase in quality and accuracy with GPT-4. It's high quality enough that it catches tons of mistakes in code reviews and can debug from a screenshot of a stack trace. Copilot seems like it's a generation or two out of date.
My company pays for it. I don't think I'd pay for it personally, since I use ChatGPT when I need more advanced help and Copilot is just autocomplete.
I think it's only given me full functions a handful of times (and they're usually quite wrong); most of the time it's just a method name or a super short (1-2 line) code snippet.
Mostly using it as an autocomplete, but it is quite good for writing repetitive code and logic. Also, the new slash commands are pretty useful.
Even if they had only 1% of all developers worldwide (for example), that's still a lot of users.