“Thanks for this conversation! I've reached my limit, will you hit “New topic,” please?”
The same dialog on ChatGPT just keeps going, no limit. Does this have to do with token counts?
Each prompt-response exchange is independent: the model itself has no memory of earlier turns and no real understanding of the conversation's history. This means that when you talk to the chatbot, the application has to include the entire conversation history in the prompt to give the model context.
For instance, if your first prompt is "Who is the current sitting president of the United States?" and the response is "Joe Biden," and you then ask a follow-up like "How tall is he?", GPT-3 won't know who "he" refers to unless the prior exchange is included in the prompt, like so:
```
User: Who is the current sitting president of the United States?
AI: Joe Biden
User: How tall is he?
AI:
```
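For concreteness, here is a minimal sketch of what the application layer does, assuming the pre-chat openai Python library (v0.x) with its completions endpoint and the `text-davinci-003` model; the `ask` helper, the stop sequence, and the token limit are illustrative choices, not how ChatGPT actually works:

```python
import openai

openai.api_key = "sk-..."  # your API key

# Running transcript; every turn gets appended here.
history = [
    "User: Who is the current sitting president of the United States?",
    "AI: Joe Biden",
]

def ask(question: str) -> str:
    """Append the question, send the full transcript, append the answer."""
    history.append(f"User: {question}")
    prompt = "\n".join(history) + "\nAI:"
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=256,
        stop=["User:"],  # stop before the model invents the next user turn
    )
    answer = response.choices[0].text.strip()
    history.append(f"AI: {answer}")
    return answer

print(ask("How tall is he?"))  # the model can now resolve "he"
```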
With the history included this way, GPT-3 can use the earlier exchange to generate an appropriate response. However, as the history grows it consumes more of the model's fixed context window, and once context degrades the model may begin to give irrelevant or nonsensical responses.
That context window is a hard limit, around 2,000 tokens for the original GPT-3 models, which translates to approximately 1,500 words (a token is roughly three-quarters of an English word). The model itself won't summarize anything; when the conversation history gets too long, the application has to compress it in a way that preserves the important information. For example, "Who is the current sitting president of the United States?" might become "Who's the president of USA" to save space.
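A sketch of the trimming side, using the tiktoken tokenizer to count tokens; the 2,000-token budget and the drop-oldest-turns strategy are assumptions for illustration, not what any particular chatbot actually does:

```python
import tiktoken

enc = tiktoken.get_encoding("gpt2")  # GPT-3-era encoding; an assumption

TOKEN_BUDGET = 2000  # in a real app, leave headroom for the model's reply

def count_tokens(text: str) -> int:
    return len(enc.encode(text))

def trim_history(history: list[str], budget: int = TOKEN_BUDGET) -> list[str]:
    """Drop the oldest turns until the transcript fits the budget."""
    trimmed = list(history)
    while trimmed and count_tokens("\n".join(trimmed)) > budget:
        trimmed.pop(0)  # oldest turn goes first
    return trimmed
```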
For most users, including the entire conversation history in the prompt works for perhaps half a dozen prompts before the window fills up and context starts to degrade. Past that point you may need to restate earlier information or use other methods to maintain context, such as having the model summarize the older turns (sketched below).
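One such method, reusing the `openai` client, `count_tokens` helper, and `TOKEN_BUDGET` from the sketches above: ask the model itself to condense the old turns into a short summary that replaces them in the prompt. The helper name and the "keep the last four turns" cutoff are illustrative assumptions:

```python
def summarize_history(history: list[str]) -> str:
    """Ask the model to compress old turns into a short summary."""
    prompt = (
        "Summarize the following conversation in a few sentences, "
        "keeping names, facts, and open questions:\n\n"
        + "\n".join(history)
        + "\n\nSummary:"
    )
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=150,
    )
    return response.choices[0].text.strip()

# Replace all but the last few turns with their summary.
if count_tokens("\n".join(history)) > TOKEN_BUDGET:
    summary = summarize_history(history[:-4])
    history = [f"(Summary of earlier conversation: {summary})"] + history[-4:]
```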
ChatGPT and Bing almost certainly have more sophisticated methods for retaining conversation history, but judging by the limits Bing recently put in place, the general principle, that context is finite and has to be managed, still seems to hold.