HACKER Q&A
📣 wsgeorge

Can LLMs “remember” previous prompts?


This might be a stupid question; I'm completely new to this. Is it possible for LLMs to incorporate human feedback by updating their models in real time during a conversation session?

If not, why not?

EDIT:

Put another way,

1. If initial training runs some training loop 10 trillion times to modify a model by a lot

2. And if fine-tuning runs some training loop 10 thousand times to modify that pretrained model by a bit more

3. Can LLMs be architected to take human feedback as input and run some training loop 10 times to nudge the actual model some more? (Roughly the sketch below.)
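
Concretely, I'm imagining something like this toy sketch (GPT-2 via Hugging Face as a stand-in model; I'm not claiming this is practical at real LLM scale):

    # Toy sketch of item 3: a few gradient steps on one piece of human
    # feedback, nudging the pretrained model's actual weights.
    # Assumes: pip install torch transformers; gpt2 as a stand-in LLM.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    opt = torch.optim.AdamW(model.parameters(), lr=1e-5)

    # One piece of "human feedback", phrased as text to learn from.
    feedback = "Q: What is the capital of Australia?\nA: Canberra."
    batch = tok(feedback, return_tensors="pt")

    model.train()
    for _ in range(10):  # the "10 iterations" from item 3
        out = model(**batch, labels=batch["input_ids"])  # causal-LM loss
        out.loss.backward()
        opt.step()
        opt.zero_grad()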


  👤 MagicMoonlight Accepted Answer ✓
They use previous prompts in that conversation as part of their “context”, but the actual model doesn't update.
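
In other words, the client just resends the whole transcript on every turn. A minimal sketch, assuming the OpenAI Python SDK (>= 1.0) and any chat model:

    # "Memory" is just a growing message list resent on every call;
    # the model's weights never change between turns.
    from openai import OpenAI

    client = OpenAI()
    messages = []  # the entire conversational "memory"

    def chat(user_text: str) -> str:
        messages.append({"role": "user", "content": user_text})
        resp = client.chat.completions.create(
            model="gpt-3.5-turbo",  # any chat model works here
            messages=messages,      # the full history goes in every time
        )
        reply = resp.choices[0].message.content
        messages.append({"role": "assistant", "content": reply})
        return reply

    chat("My name is George.")
    print(chat("What's my name?"))  # "remembered" only because it was resent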

👤 geepytee
LLMs can 'search' through previous prompts. When a conversation doesn't fit within GPT-3's token limit, I've seen apps use OpenAI's embeddings model to pull back only the most relevant past messages. It works well enough that it feels like the model 'remembers.' Granted, even embeddings-based retrieval has a cap. Roughly the sketch below.
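
A minimal sketch of that retrieval trick (assuming the OpenAI Python SDK; the history contents and `recall` helper are my own placeholders):

    # When the transcript outgrows the token limit, embed past messages
    # and stuff only the most similar ones back into the prompt.
    import numpy as np
    from openai import OpenAI

    client = OpenAI()

    def embed(text: str) -> np.ndarray:
        resp = client.embeddings.create(model="text-embedding-ada-002", input=text)
        return np.array(resp.data[0].embedding)

    history = ["I live in Nairobi.", "I prefer Python.", "My cat is named Miso."]
    history_vecs = [embed(m) for m in history]

    def recall(query: str, k: int = 2) -> list[str]:
        q = embed(query)
        # cosine similarity (ada-002 vectors are ~unit length; normalize anyway)
        sims = [q @ v / (np.linalg.norm(q) * np.linalg.norm(v)) for v in history_vecs]
        return [history[i] for i in np.argsort(sims)[::-1][:k]]

    # The recalled snippets get prepended to the next prompt:
    print(recall("Where does the user live?"))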

👤 kir-gadjello
No. A typical LLM is a pure function of its input, provided you (and not the LLM hosting company) control the entire input context and your sampler uses a pseudorandom number generator.

But you could build an LLM for which that wouldn't be the case.
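
To illustrate the pure-function point (a sketch with GPT-2 as a stand-in, assuming torch + transformers; fixing the seed pins the sampler's PRNG):

    # With the full context under your control and a fixed sampler seed,
    # the same input always yields the same output: no memory, no state.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    ids = tok("Hello, my name is", return_tensors="pt").input_ids

    def generate(seed: int) -> str:
        torch.manual_seed(seed)  # pin the sampler's PRNG
        out = model.generate(ids, do_sample=True, max_new_tokens=20)
        return tok.decode(out[0])

    assert generate(0) == generate(0)  # identical runs: the model is stateless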