HACKER Q&A
📣 osigurdson

Is anyone considering cancelling their ChatGPT subscription?


I like the functionality and use it a lot, but it has become so annoyingly unstable lately. Literally every time I try to use it, it fails in one way or another. Perhaps my $20/month isn't that interesting to them in the grand pursuit of AGI, but I want to use the service, not merely fund research.


  👤 throwitaway222 Accepted Answer ✓
I don't know why anyone pays for 4 when 3.5 is already a billion times better than what we had 2 years ago. 3.5-turbo, at least from an API perspective, is an extremely cost-effective way to add more intelligent decision-making to your applications and backend processes. We're going to use GPT-3.5-turbo to help us decide if a specific thing is probably "this" or "that" or "one of the following"... Far easier to use it that way than rolling our own crappy bag-of-words neural network that uses word2vec. A tiny bit slower, but worth it.
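For reference, that kind of "this or that" call is only a few lines. A minimal sketch, assuming the official `openai` Python SDK (v1+), an `OPENAI_API_KEY` in the environment, and made-up placeholder labels:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

LABELS = ["billing", "bug_report", "other"]  # hypothetical categories

def classify(text: str) -> str:
    """Ask gpt-3.5-turbo to pick exactly one label for the given text."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=0,  # keep classification output as deterministic as possible
        messages=[
            {
                "role": "system",
                "content": "Classify the user's message as exactly one of: "
                           + ", ".join(LABELS)
                           + ". Reply with the label only.",
            },
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content.strip()

print(classify("The invoice total doesn't match what I was quoted."))
```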

👤 cloudking
No, the hours it saves per month are worth a lot more than $20. It really depends on your use case; personally, I find it superior to any other model I've tried for coding, debugging, and troubleshooting server issues.

👤 kreijstal
Funny that you posted this; I canceled it an hour ago. If I need it, I'll use the GPT-4 API. Why did I cancel? It can't think anymore, it is arrogantly overconfident even when wrong, it can't detect its own mistakes anymore, and it has goldfish memory. For the record, it was brilliant and the best LLM ever, but now it feels like GPT-2 levels of quality. It all started two weeks ago, when ChatGPT got `silently` updated (https://chat.openai.com/share/512002b1-ceb3-48b5-9a29-d44b63...). In the beginning of that chat it's decent, but towards the end you can see it turn to gibberish (when the update happened). When creating a new chat, the quality seriously went down. I am looking for replacements; not sure where everyone went. What is everyone using nowadays?

👤 atleastoptimal
The fact that their system prompt, now around 1.5k tokens, is forced into every response makes it not worth it, even though it's "unlimited". The API makes a lot more sense for most purposes.

👤 JojoFatsani
Get an API key and an interface like MacGPT. I spend like $4 a month on tokens now.

👤 eveb
I paid for a subscription today, cancelled it the same day, and was refunded. It was freezing non-stop, it kept adding color to my black-and-white line art and then told me it had no control over that, then it told me I had run a single prompt too many times. I said it kept getting it wrong, that's why, but it kept slowing me down every time it refused my commands. The final straw was when it shut down and said I'd used too many requests and to come back near midnight the next day... so I'd waste a whole day of not being able to use it, because I'd be in bed by midnight. If I'm paying, I want to be able to use the thing for more than 20 images, and I don't want to have to argue with the chatbot about what I'm doing. It should have no say or thoughts about how many times I've run a prompt.

👤 K0IN
We switched to LibreChat [0]. It's a great app if you don't use GPT-4 often enough to hit $20 worth of tokens, and it also supports plugins. If you use more than $20, then just stay with GPT Plus.

https://docs.librechat.ai/


👤 willnz
Cancelled this month.

I started getting too many responses that were clearly incorrect. I'd point out the correction, to which the response was, "You are indeed correct [rephrases the answer correctly]."

Okay, great, but what about all the responses where I don't know the topic well enough to know they're incorrect?


👤 fddrdplktrew
Isn't GPT-4 available (for free) on Bing Chat anyway?

👤 DoingFedTime
Yes. I asked it to write a 50-word description of some text I gave it; it wrote a single 10-word sentence. I told it that was wrong and to do it again, this time writing a 50-word description. It failed again. On the 5th try I did it myself. This is a basic example. I've been using ChatGPT for over a year and have loved it up until now, but it feels like it was completely lobotomized. Half of the response is it repeating your question or prompt back to you.

👤 gtirloni
I'm considering switching to Gemini when the mobile app is available in my country and the web app gets better.

Lately ChatGPT has its confidence really high and hallucinates in the middle of really basic technical stuff. It will say what a command should be for something really mundane like the `ip` command in Linux, and then it sprinkles in stuff that doesn't exist... you start to lose confidence.

I think LLMs are better than search engines but if I have to fact check everything, I'll switch to another LLM or go back to a search engine.

I don't know what has changed in ChatGPT but it's worse lately.


👤 remixer-dec
I use a fork of https://github.com/Krivich/GPT-Over-API; I edited it to support recent models and added cost estimation that keeps track of all the money spent across requests. For most tasks ChatGPT 3.5 is fine, but for more complex or recent-data-related tasks, GPT-4 performs better. Why this over some fancy web UI? Well, this can be hosted locally and will not steal your OpenAI key, and it lets you set max tokens and select history items on every request.
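The cost tracking part is simple in principle: each API response reports prompt and completion token counts, and you multiply those by the per-token price. A rough sketch, assuming the official `openai` Python SDK; the prices here are placeholders only, so check OpenAI's pricing page for current rates:

```python
from openai import OpenAI

client = OpenAI()

# USD per 1K tokens as (prompt, completion) -- illustrative numbers only
PRICES = {
    "gpt-3.5-turbo": (0.0005, 0.0015),
    "gpt-4": (0.03, 0.06),
}

total_spent = 0.0

def ask(model: str, messages: list, max_tokens: int = 512) -> str:
    """Send one request and add its estimated cost to the running total."""
    global total_spent
    response = client.chat.completions.create(
        model=model, messages=messages, max_tokens=max_tokens
    )
    usage = response.usage
    prompt_price, completion_price = PRICES[model]
    total_spent += (usage.prompt_tokens / 1000) * prompt_price
    total_spent += (usage.completion_tokens / 1000) * completion_price
    return response.choices[0].message.content

print(ask("gpt-3.5-turbo", [{"role": "user", "content": "Say hi in five words."}]))
print(f"Spent so far: ${total_spent:.4f}")
```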

👤 motoxpro
I canceled mine and I now pay for Gemini. The only things I really use it for are brainstorming and coding, and it's better for me at both of those by a long shot.

👤 lmiller1990
How are people using GPT-4 via the API? I signed up and loaded some money, but I still can't use GPT-4 through the API. Only 3.5-turbo.

I still find ChatGPT 4 well worth the money. The coding is way better than 3.5. I wonder what their system prompt is; I don't get as good results from the API (maybe I am doing it wrong).


👤 hugovie
To avoid both instability and strict limitations, you can use the ChatGPT API. By adding the API key to a client like MindMac[0], you get a pleasant UI with numerous additional features.

[0] https://mindmac.app


👤 longnguyen
Many of my customers canceled their ChatGPT subscription and switched 100% to the API.

If you want to use GPTs or generate images with DALL-E, then ChatGPT Plus is a no-brainer, I guess.

But if your focus is on GPT-4, then I highly recommend using the API instead.

There are multiple pros to using an API key:

- You pay for your usage. I've been using an API key exclusively, and most of the time it costs me around $5-$10 a month

- Your data is not used for training. This is important for a privacy-minded user like me. (You can disable this in ChatGPT, but you will lose chat history.)

- No message limit. There are rate limits to prevent abuse, but generally you do not have the message cap that ChatGPT has

- You can choose previous GPT models or other open-source models

Depending on the application, you can also get these:

- Access to multiple AI services: OpenAI, Azure or OpenRouter

- Local LLMs via Ollama etc.

- Build custom AI workflows

- Voice search & text-to-speech etc.

- Deeper integrations with other apps & services

There are also a few cons:

- No GPTs support yet

- If you use DALL-E a lot, then ChatGPT Plus is more affordable. Generating images with the DALL-E API can be quite expensive

Edit: Some tips when using an API key:

- You pay for the tokens used (basically, how long your questions and the AI's answers are). The price per chat message is not expensive, but you usually need to send the whole conversation to OpenAI, which is what makes it expensive. Make sure to pick an app that allows you to limit the chat context (see the sketch after these tips).

- Don't use GPT-4 for tasks that don't require deeper reasoning capabilities. I find that GPT-3.5-turbo is still very good at simple tasks like grammar fixes, improving your writing, translations...

- You can even use local LLMs if your machine can run Ollama

- Use different system prompts for different tasks: for example, I have a special prompt for coding tasks and a different system prompt for writing tasks. It usually gives much better results.
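Here is a minimal sketch of two of the tips above (a capped chat context plus per-task system prompts), assuming the official `openai` Python SDK; the prompts and the 8-message window are arbitrary examples, not anything a particular app ships with:

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical per-task system prompts
SYSTEM_PROMPTS = {
    "coding": "You are a concise senior engineer. Prefer code over prose.",
    "writing": "You are an editor. Improve clarity and grammar but keep my voice.",
}

MAX_CONTEXT_MESSAGES = 8  # only send the most recent turns to limit token cost

def chat(task: str, history: list, user_message: str, model: str = "gpt-4") -> str:
    """Append the user message, send a trimmed context, and record the reply."""
    history.append({"role": "user", "content": user_message})
    messages = [{"role": "system", "content": SYSTEM_PROMPTS[task]}]
    messages += history[-MAX_CONTEXT_MESSAGES:]  # drop older turns
    response = client.chat.completions.create(model=model, messages=messages)
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

history = []
print(chat("coding", history, "Write a Python one-liner that reverses a string."))
```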

Shameless plug: I've been building a native ChatGPT app for Mac called BoltAI[0]; give it a try.

[0]: https://boltai.com


👤 StellaReed
I am using the free version, 3.5.

👤 jasonjmcghee
Unstable due to what? Network error? Or the model itself providing bad results?

If it's about model output, I highly recommend custom GPTs. Taking 15 minutes to an hour to play around with a custom prompt to get it to work how you want is incredibly worth it.


👤 cableshaft
I cancelled it about four or five months ago. I've brought up ChatGPT 3.5 like three times since then. Definitely don't use it enough to be worth paying for it right now.

👤 youniverse
For the past few weeks, ChatGPT's responses have hung about 80% of the time for any query I make, and I need to refresh the page. Has anyone else had this experience?

👤 bilsbie
Yes but I can’t figure out why. It’s insanely useful and can do so many things but I’m just not using it lately.

It kind of reminds me of my Quest VR.


👤 tmaly
I use the app and the API.

For me, the only thing missing from the web UI is the ability to search past chats.

Only the mobile app offers this feature.


👤 aristofun
Currently my employer pays for it and it saves me some time, so it’s fair and good enough.

But I'd never pay for it myself; it's not worth it at the current quality.


👤 lannisterstark
I just use the API. It's much more cost-effective and doesn't have stupid limitations like 30 GPT-4 queries per 4 hours or whatnot.

👤 aussieguy1234
I have a console based chat app I built for myself that calls GPT-4 via the API. It works pretty well and is not very expensive.

👤 jfoster
I just cancelled. The free version of Claude seems like it's a lot more capable.

👤 replwoacause
Yes, it’s worse in all ways from when I first signed up. Poorer quality responses and it’s dog slow.

👤 kesavvaranasi
I subscribed only because I wanted to create a custom GPT.

👤 drakonka
I am; it is extremely flaky. I use it mostly to make custom GPTs that read and analyze my own manuscripts. In the last few weeks I've had it refuse to read documents entirely, read seemingly only the first pages, refuse to produce outlines or summaries, claim inability to parse what should be supported documents, and spew generic crap that is not based on the uploaded manuscript at all. It's starting to waste more of my time and energy than it saves.

👤 boredemployee
I used to love it, but it's getting on my nerves with all the politically correct BS.

I was trying to translate a little text and it refused because it contained a part that violated their terms. Wth.


👤 ramyar
I never bought it. Oh damn.

👤 ulfw
I cancelled long ago. Tried it for 2 months, but it was a hassle having to use it over a VPN, as OpenAI is too afraid of Hong Kong and banned us. So, yeah, no. Not going to pay money for that.

👤 0n0n0m0uz
Works great for me.

👤 f0e4c2f7
Nope. I use it more than Google now. Most amazing piece of software I've ever used by a country mile.