HACKER Q&A
📣 andrewstuart

Has anyone noticed how hard it is for ChatGPT to obey all instructions?


I can tell it over and over, in multiple ways, to output all the code and skip nothing, and time and time again it skips things and puts in comments saying "fill this in".

Will a day come when AI knows how to do what it is told?


  👤 throwaway888abc Accepted Answer ✓
This is a new thing, from the last few weeks. My best guess is cost optimization to save prediction time (GPU).

Not happy about it either.


👤 minimaxir
Use the OpenAI Playground and system prompts.
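A minimal sketch of what that looks like via the API (assuming the 2023-era `openai` Python SDK; the model name and prompt wording here are just examples, not a guaranteed fix):

```python
# Hypothetical sketch: put the "no placeholders" rule in the system prompt,
# where it tends to carry more weight than repeating it in user messages.
system_prompt = (
    "You are a code generator. Always output complete, runnable code. "
    "Never abbreviate with placeholders, TODOs, or comments like 'fill this in'."
)

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "Write the full script. Skip nothing."},
]

# With the openai SDK this would be sent roughly as:
# openai.ChatCompletion.create(model="gpt-4", messages=messages)
print(messages[0]["content"])
```

The Playground lets you set the same system prompt interactively and tweak temperature without writing any code.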

👤 mattdm
ChatGPT, for all its amazingness, _never_ follows instructions. It just appears to, as it generates likely text. This is incredibly important to understand — it isn't a generalized AI.

Larger and more sophisticated models will do the party trick more convincingly... but actually following instructions will require a different approach.


👤 ilaksh
You didn't specify whether you used GPT-4 or 3.5-turbo.

It's true that it often inserts placeholders.

You can try using the functions feature (via the API) and/or fine-tuning.
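For example, a rough sketch of the functions idea (assuming the 2023-era `openai` SDK; `save_code` and its schema are made up for illustration): forcing the output into a structured argument can discourage free-form prose with placeholders.

```python
# Hypothetical function schema: the model must return the whole source
# as one structured string argument instead of chatty, abbreviated prose.
functions = [
    {
        "name": "save_code",
        "description": "Save the complete, runnable source code to a file.",
        "parameters": {
            "type": "object",
            "properties": {
                "code": {
                    "type": "string",
                    "description": "The full source, with no placeholders or TODOs.",
                },
            },
            "required": ["code"],
        },
    }
]

# Sent alongside the chat messages, forcing that function to be called:
# openai.ChatCompletion.create(model="gpt-4-0613", messages=messages,
#                              functions=functions,
#                              function_call={"name": "save_code"})
print(functions[0]["name"])
```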

If it's relatively simple code, then you could also look into Codellama or WizardCoder, which you may also fine-tune.

To be fair, the instruction following capabilities of GPT-4 are absolutely incredible relative to what was available a few years ago.

Don't worry, it will only be a few more years before AI is able to completely replace you. Then you will be wishing you could go back to the time when you had to write an extra prompt to fill in the TODOs.