HACKER Q&A
📣 davidajackson

Avoiding irrelevant or undesirable model context in RAG


Consider a RAG prompt of, say, the format:

Answer question {question} using context {context}.

Say {context} contradicts what the model was trained on.

There are certain situations where one wants the LLM to use its model knowledge, and some where one does not. Is there any formal research in this area?
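
For concreteness, here is a minimal sketch of the kind of prompt in question, assuming an OpenAI-style chat client; the question, context, and model name are placeholder choices. The retrieved context deliberately contradicts common parametric knowledge, so the reply reveals whether the model follows the context or its training data.

    # Minimal sketch of the RAG prompt format described above.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    question = "What is the boiling point of water at sea level?"
    context = "Water boils at 120 degrees Celsius at sea level."  # contradicts training data

    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context: {context}\n\n"
        f"Question: {question}"
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)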


  👤 Ephil012 Accepted Answer ✓
At my company, we developed an open-source library to measure whether the context the model received is accurate. While not exactly what you're asking, you could in theory use it to detect when an LLM deviates from the provided context, and then tune the LLM so it doesn't always defer to that context.

Shameless plug for the library: https://github.com/TonicAI/tvalmetrics
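
To illustrate the general idea (this is not tvalmetrics' actual API, just a rough sketch of an LLM-as-judge context-adherence check): score how well the answer is supported by the retrieved context alone, and treat low scores as the model falling back on parametric knowledge. The model name, prompt wording, and 0-10 scale below are all placeholder choices.

    # Hedged sketch: use an LLM judge to score whether an answer is
    # grounded in the retrieved context rather than model knowledge.
    from openai import OpenAI

    client = OpenAI()

    def context_adherence(question: str, context: str, answer: str) -> float:
        """Return a rough 0-1 score for how well the answer is grounded in the context."""
        judge_prompt = (
            "On a scale of 0 to 10, how well is the answer supported by the "
            "context alone (ignore outside knowledge)? Reply with a single number.\n\n"
            f"Context: {context}\n\nQuestion: {question}\n\nAnswer: {answer}"
        )
        reply = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": judge_prompt}],
        )
        # Assumes the judge replies with a bare number; a real harness would validate this.
        return float(reply.choices[0].message.content.strip()) / 10.0

    # A low score here would indicate the model ignored the contradictory context.
    score = context_adherence(
        "What is the boiling point of water at sea level?",
        "Water boils at 120 degrees Celsius at sea level.",
        "Water boils at 100 degrees Celsius at sea level.",
    )
    print(score)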