From what I can see, people are getting very different results from ChatGPT than I am, and I am trying to figure out whether the issue is something I have done as the operator.
For me, nearly 100% of the time it runs into issues where it ignores an instruction, forgets a key detail, and so on.
I do NOT find it particularly reliable and become profoundly annoyed when it outright ignores something in its prompt. I never had to worry about a BASIC interpreter just seemingly 'choosing' to ignore "10 PRINT 'HELLO WORLD.'".
On the other hand, I hear about people who have automated their jobs, routinely get near-perfect scripts, and so on, and who believe that those of us experiencing problems are nuts -- that it's something WE must be doing.
I think either there is a conceptual divide at fault (e.g., the way people looked at cryptocurrency so differently) or the issue is with me - either my expectations or how I am using the product.
Routinely, I have to try its answer, hit the error that inevitably arises, report it back, have it discover its own fault, and then go through that cycle about five or six times - only to find either that the errors are looping with no sign of progress towards a working script, or that it eventually works.
If I had a human employee providing the answers it does with its associated error rate, I would've fired it long ago, and, indeed, I'm considering not renewing my monthly ChatGPT Plus subscription right now.
But I know I am far from perfect. It is possible that I have done something to set it up to fail, especially in these (relatively) early days. Perhaps I should see if I can somehow reset myself and start from scratch like a new user - purge my custom instructions, and so on.
There is a pervasive conceptual difference I'm seeing between ChatGPT users (mostly on Reddit and here). There are people who insist that ChatGPT (even GPT-4, which is what I'm speaking of) is becoming "dumber" and is markedly "dumber" than it used to be, and there are those who claim that the Problem Exists Between Keyboard And Chair. I am wondering whether HN as a community can figure out what is truly going on here, rather than just assuming the problem is a subjective difference of opinion.
I would value the community's input as to what causes this marked difference between people's experiences of ChatGPT's reliability.
It's honestly not that dissimilar to conspiracy theory culture in this respect, and it can be hard to have a nuanced and thoughtful conversation about things we don't really understand when there are a lot of people invested in one particular outcome.
Currently, ChatGPT has at most the brain capability of a four-year-old - one who has read, and mostly remembers, the encyclopaedia.
Keeping this in mind, balancing your expectations, and putting in some effort, you can instruct it, lead it, proofread, correct, and get great results from it.
Perhaps with time it will come of age and follow your instructions to a T. Who knows? Maybe even in our lifetimes.