HACKER Q&A
📣 hbarka

Why does the Bing chatbot have a low limit of dialog iterations?


After only four back-and-forths in my dialog, Bing gets “tired”. It was a simple chat about making kombucha.

“Thanks for this conversation! I've reached my limit, will you hit “New topic,” please?”

Same dialog on ChatGPT and it can keep the conversation going. Does this have to do with token counts?


👤 PaulHoule Accepted Answer ✓
It was reported last week that the Bing chatbot would get belligerent if somebody challenged it too hard for too long. I think they put the limit in to prevent that from happening.

👤 ashraful
From what I understand (and this might be an oversimplification):

Each prompt-response exchange is independent; the model has no built-in memory of the conversation's history. This means that when you talk to the chatbot, the entire conversation history needs to be included in the prompt to give it context.

For instance, if your first prompt is "Who is the current sitting president of the United States?" and the response is "Joe Biden," and you then ask a follow-up prompt like "How tall is he?", GPT-3 won't understand what you're referring to unless the prior conversation is included in the prompt, like so:

```
User: Who is the current sitting president of the United States?
AI: Joe Biden
User: How tall is he?
AI:
```

By doing this, GPT-3 can use the context of the previous conversation to generate an appropriate response. However, as the conversation history gets longer, GPT-3's ability to maintain context weakens, and it may begin to provide irrelevant or nonsensical responses.
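
As a rough sketch of what that re-sending looks like in code, here's how a client might flatten the turns above into a single prompt before each request. Everything here (the `build_prompt` helper, the `User:`/`AI:` turn format) is illustrative, not Bing's or OpenAI's actual implementation:

```python
# Sketch: each request re-sends the whole conversation as one prompt string.
# The helper name and the turn format are hypothetical, for illustration only.

def build_prompt(history: list[tuple[str, str]]) -> str:
    """Flatten (speaker, text) turns into a single prompt string."""
    lines = [f"{speaker}: {text}" for speaker, text in history]
    lines.append("AI:")  # leave the last turn open for the model to complete
    return "\n".join(lines)

history = [
    ("User", "Who is the current sitting president of the United States?"),
    ("AI", "Joe Biden"),
    ("User", "How tall is he?"),
]

print(build_prompt(history))
# Prints exactly the transcript shown above, ending with an open "AI:" line.
```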

There is a soft limit of around 2000 tokens, which translates to approximately 1500 words. When the conversation history becomes too long, it has to be compressed, for example by summarizing turns in a way that preserves the important information: "Who is the current sitting president of the United States?" might become "Who's the president of the USA?" to save space.
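
A hedged sketch of how a client might enforce that budget: estimate the token count (crudely, about 4/3 tokens per word, matching the 2000-tokens-to-1500-words figure above) and drop or summarize the oldest turns once the total runs over. The heuristic and the trimming strategy here are assumptions, not documented Bing behavior; a real client would use the model's actual tokenizer:

```python
TOKEN_BUDGET = 2000  # the soft limit mentioned above

def estimate_tokens(text: str) -> int:
    # Crude heuristic: ~4/3 tokens per English word (2000 tokens ~ 1500 words).
    return (len(text.split()) * 4) // 3

def trim_history(history: list[tuple[str, str]], budget: int = TOKEN_BUDGET):
    """Drop the oldest turns until the history fits within the token budget."""
    trimmed = list(history)
    while trimmed and sum(estimate_tokens(t) for _, t in trimmed) > budget:
        trimmed.pop(0)  # oldest turn first; a smarter client might summarize it
    return trimmed
```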

For most users, including the entire conversation history in the prompt is feasible for up to six prompts before context begins to degrade. After that point, it may be necessary to restate previous information or use other methods to maintain context.

ChatGPT and Bing likely have more advanced methods for retaining conversation history, but judging by the limits just introduced, the same basic constraint on context still seems to apply.


👤 bjourne
Probably. Longer conversations mean more tokens to analyze and/or make it easier for the bot to get off track: the convo began with kombucha, but is now a philosophical discussion about God's existence or something.

👤 rootusrootus
They limited it to make it less likely to go off the rails and declare its love for you, or tell you to divorce your spouse, etc.

👤 funshed
The longer it talked, the more it went off the rails. Microsoft got scared and nuked it. Plus, it saves them money if you're concise.