I chunked all the documents into 400-800 character chunks, vectorized them all, and put them in a vector database.
The results are pretty bad--the surfaced document chunks kind-of-but-not-really match up with the query.
I'm getting much better results from simple keyword searches (using meilisearch).
Am I doing something wrong? Do I need to use a fine-tuned model like BERT? Is this technology vastly overhyped?
My take is you might like keyword searches better for some queries and you might like embedding search for others.
Two problems here are genuinely hard: (1) how to combine keyword search and embedding search (you'd imagine you'd want a ranking function that handles both), and (2) how to handle chunks.
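For (1), one simple and widely used ranking function that handles both is reciprocal rank fusion: each engine contributes a score based only on the rank it assigned, so you never have to make keyword scores and cosine similarities commensurable. A minimal sketch (the `k=60` smoothing constant is the conventional default, not something tuned here):

```python
def rrf_merge(keyword_ranking, embedding_ranking, k=60):
    """Reciprocal rank fusion: score each doc id by summing
    1 / (k + rank) over every ranking it appears in, then sort.
    Docs ranked highly by either engine float to the top."""
    scores = {}
    for ranking in (keyword_ranking, embedding_ranking):
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# "b" is ranked 2nd and 1st, so it wins overall:
merged = rrf_merge(["a", "b", "c"], ["b", "c", "a"])
```

Because RRF only looks at ranks, it works unchanged whether the keyword side is Meilisearch, BM25, or anything else that returns an ordered list.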
As for (2), you probably want to make the chunks as big as you practically can, and you should be chunking on tokens instead of characters if at all possible.
With chunks, of course, you don't get a score for the query-document relationship; you get a query-chunk score instead, which isn't quite the score you really want. Aggregating all the chunk hits and chunking the data properly are open problems, to say the least.
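One common way to turn query-chunk scores back into query-document scores is to average each document's top-m chunk scores. This is a sketch of that aggregation, not a settled recipe; taking the plain max rewards one lucky chunk, while the plain mean punishes long documents, so a top-m mean is a frequently used middle ground:

```python
def doc_scores_from_chunk_hits(chunk_hits, top_m=3):
    """chunk_hits: list of (doc_id, chunk_score) pairs as returned by
    the vector index. Aggregate to one score per document by averaging
    that document's top-m chunk scores."""
    by_doc = {}
    for doc_id, score in chunk_hits:
        by_doc.setdefault(doc_id, []).append(score)
    return {
        doc_id: sum(sorted(scores, reverse=True)[:top_m]) / min(len(scores), top_m)
        for doc_id, scores in by_doc.items()
    }

# d1 has one strong chunk; d2 has many mediocre ones:
scores = doc_scores_from_chunk_hits(
    [("d1", 0.9), ("d1", 0.2), ("d2", 0.4), ("d2", 0.4), ("d2", 0.4), ("d2", 0.4)]
)
```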
* Is that the right chunk size? How much of a chunk might contain the relevant information? Is it better for your use case to chunk by sentence? I've done RAG with document chunks, sentences, and triplets (source -> relation -> target). How you chunk can have a big impact.
* One approach that I've seen work very well is (1) first, use keyword or entity search to limit results, then (2) use semantic similarity to the query to rank those results. This is how, for example, they do it at LitSense for sentences from scientific papers: https://www.ncbi.nlm.nih.gov/research/litsense/. Paper here: https://academic.oup.com/nar/article/47/W1/W594/5479473.
* You still need metadata. For example, if a user asks for something like "show me new information about X," the concept of "new" won't get embedded in the text. You'll need to convert that to some kind of date search. This is where doing RAG with something like OpenAI function calls can be great. It can see "new" and use that to pass a date to a date filter.
* I've found some embeddings can be frustrating because they conflate things that are practically opposites. For example, "increase" and "decrease" might show up as similar because they both get mapped into the space for "direction." This probably isn't an issue with better (I assume higher-dimensional) embeddings, but it can be problematic with some embeddings.
* You might need specialized domain embeddings for a very specific domain. For example, law, finance, biology, and so forth. Certain words or concepts that are very specific to a domain might not be properly captured in a general embedding space. A "knockout" means something very different in sports, when talking about an attractive person, or in biology when it refers to genetic manipulation.
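The LitSense-style two-stage approach from the second bullet can be sketched as follows. The substring match here is a toy stand-in for a real keyword/entity engine (e.g. Meilisearch), and the two-dimensional vectors are placeholders for real embeddings:

```python
import math

def cosine(a, b):
    """Cosine similarity of two non-zero vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def filter_then_rerank(query_terms, query_vec, corpus):
    """Stage 1: keep only docs matching at least one query term
    (stand-in for a real keyword/entity search engine).
    Stage 2: rank the survivors by semantic similarity to the query."""
    survivors = [d for d in corpus
                 if any(t.lower() in d["text"].lower() for t in query_terms)]
    return sorted(survivors, key=lambda d: cosine(query_vec, d["vec"]),
                  reverse=True)

# Toy corpus: the keyword stage drops the non-matching doc, then the
# embedding stage puts the biology sense of "knockout" first.
corpus = [
    {"text": "gene knockout mouse", "vec": [1.0, 0.0]},
    {"text": "knockout boxing match", "vec": [0.0, 1.0]},
    {"text": "mouse genetics review", "vec": [1.0, 0.0]},
]
ranked = filter_then_rerank(["knockout"], [1.0, 0.1], corpus)
```

The nice property is that the keyword stage keeps recall precise and cheap, and the embedding stage only has to rank a small candidate set rather than the whole corpus.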
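On the metadata point above ("new" won't embed): with function calling, the model would emit structured filter arguments; the crude keyword check below is a hypothetical stand-in for that step, and the 90-day window is an arbitrary choice for illustration:

```python
from datetime import date, timedelta

def build_filters(query, today=None):
    """Map temporal words in the query to a date filter the vector
    store can apply. In a real system an LLM function call would
    produce these arguments; the keyword check here is a toy stand-in."""
    today = today or date.today()
    filters = {}
    if any(w in query.lower() for w in ("new", "recent", "latest")):
        # Arbitrary 90-day recency window, purely for illustration.
        filters["published_after"] = today - timedelta(days=90)
    return filters

filters = build_filters("show me new information about X",
                        today=date(2024, 1, 1))
```

The resulting `published_after` value would then be passed to whatever metadata filtering the vector store supports, alongside the embedding query.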