You give it a directory of documents and ask it to build an index and vector embeddings over them.
Then you can use this index with models like ChatGPT.
The tutorial here shows the end-to-end process:
https://gpt-index.readthedocs.io/en/latest/getting_started/s...
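To give a sense of how little code that is, here's a minimal sketch along the lines of the LlamaIndex getting-started tutorial; exact imports and class names vary between versions, and the directory path and question are placeholders:

# Minimal sketch following the LlamaIndex getting-started flow (0.6+ style API).
# Exact imports vary by version; the directory path and question are placeholders.
from llama_index import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("./my_docs").load_data()   # read every file in the directory
index = VectorStoreIndex.from_documents(documents)           # chunk, embed, and index them

query_engine = index.as_query_engine()
print(query_engine.query("What does the contract say about termination?"))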
1. Semantically index your documents.
2. Given a prompt, extract relevant paragraphs from your own documents.
3. Frame a context for the prompt from extracted paragraphs.
4. Ask ChatGPT to answer the prompt, mindful of the context.
That way ChatGPT can be used out-of-the-box.
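A tiny sketch of step 3, just framing the context from the extracted paragraphs (the template wording is my own assumption, not a standard):

def frame_prompt(question: str, paragraphs: list[str]) -> str:
    # Step 3: build one prompt string from the question and the retrieved paragraphs.
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(paragraphs))
    return (
        "Answer the question using only the context below. "
        "If the context is not enough, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

The resulting string is then sent to ChatGPT as-is (step 4).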
This is the high-level algorithm:
1) Sentence segmentation/text splitting. The data is indexed in discrete chunks so the user can look up the specific information they want.
2) The split sentences/text chunks are run through a cheap LLM/specialized model, usually not state of the art but powerful/big enough to separate and associate the individual concepts in the latent space; current choices are OpenAI's text embeddings (e.g. text-embedding-ada-002) and SentenceTransformers. Embedding generation is usually the first step in an NLP deep learning model, so it's relatively cheap/lightweight. Essentially you take the first layer or two of a neural network and multiply the weight matrix against the input. This is the simplest type of embedding (see the original word2vec algorithm). Transformer embeddings are a bit different, but functionally they operate similarly.
3) The generated embedding vectors represent the input data in latent space, i.e. as abstract representations. Possibly the most famous example in modern natural language processing is King - Man + Woman = Queen.
4) The vector embeddings are stored somewhere, usually a database, but you can dump them in an Excel file too if you want. You want a dictionary-like structure mapping each embedding -> its original text chunk.
5) The user writes a query, which is passed to the embedding model to generate another vector embedding (the query can be anything, as long as it's natural language, since that's what the embedding model was trained on). This query can also be created by an upstream LLM; the specifics don't matter as long as the sentence is mostly well-formed.
6) We find the most relevant chunks by performing a similarity search (nearest neighbor in the vector latent space, using cosine similarity/Euclidean distance/whatever metric). There are many ways to do this: basic linear algebra with n comparisons against all stored vectors, a kNN library, or a graph data structure (the currently preferred method in the fastest libraries).
7) You find the closest vector and the text chunk/sentences it represents, then take those sentences and the original query (the raw natural language text, not the generated embedding) and feed everything into a new LLM prompt. The LLM in this step is usually a state-of-the-art chat model like GPT-4 or Llama 2, not the cheap model used for indexing and generating vectors. You pass a prompt like this:
Answer the following query: {original query text} with the given context: {text chunks, sentences}.
And that's it. Retrieval-augmented generation has a fancy name, and langchain's code feels opaque as hell, like it was written by enterprise Java people, but the underlying algorithm is less than 30 lines of code with the standard ML and linear algebra libraries.
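Something like this, as a rough sketch: it uses sentence-transformers for the embeddings, brute-force cosine similarity with numpy for the search, and the pre-1.0 openai client for the final answer; the file name, model names, and query are illustrative assumptions.

# Steps 1-7 end to end: sentence-transformers for embeddings, brute-force
# cosine similarity with numpy, and an OpenAI chat model for the final answer.
# File name, model names, and the query are illustrative assumptions.
import numpy as np
import openai
from sentence_transformers import SentenceTransformer

chunks = open("docs.txt").read().split("\n\n")                 # 1) naive text splitting
embedder = SentenceTransformer("all-MiniLM-L6-v2")             # 2) cheap embedding model
vectors = embedder.encode(chunks, normalize_embeddings=True)   # 3) one vector per chunk
store = dict(enumerate(chunks))                                # 4) index -> original chunk

query = "What does the report say about Q3 revenue?"
q_vec = embedder.encode([query], normalize_embeddings=True)[0] # 5) embed the query

scores = vectors @ q_vec                                       # 6) cosine similarity (unit vectors)
top_ids = np.argsort(scores)[::-1][:3]                         #    take the 3 nearest chunks
context = "\n\n".join(store[int(i)] for i in top_ids)

response = openai.ChatCompletion.create(                       # 7) answer with a strong chat model
    model="gpt-4",
    messages=[{"role": "user", "content":
               f"Answer the following query: {query} with the given context: {context}"}],
)
print(response.choices[0].message["content"])

At scale you'd swap the brute-force search for an approximate nearest-neighbor library or a vector database, but the shape of the loop stays the same.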
List of tools to bookmark: https://github.com/awesome-chatgpt/awesome-chatgpt
It's in its really early stages right now (mainly just looking to learn/help people), so if you have your data in a specific format I'll be happy to code something up to make it work with your data.
(Also Disclaimer: I own this service)
There's the paper about non-uniform attention ("Lost in the Middle: How Language Models Use Long Contexts"), and another paper mentioned that LLMs may lose focus on the relevant retrieved content as soon as the ratio of relevant to irrelevant content becomes small.
So, what's the current best practice to actually embed your content within the model?
That will at least get you some results quickly. I've found ChatGPT is really more about the data you feed it than anything else.
(disclosure: I work at Xata)
A simple way to do this is to upload your files (PDFs, Word docs, virtually any type is supported), then generate reports using prompts based on those uploaded files. You can go from uploading to results in around a minute.
I've had good results uploading a bunch of documents, then running the same prompt on each of the documents with a few clicks using the "Flow Reports" feature.
We're working on lots of stuff on top of this, like scheduled reports (daily summaries / analysis / newsletters) and automated web scraping and data upload.
Here's an example using a BBC News article that I just uploaded to FlowChai (prompt: summarize in 200 words):
https://flowch.ai/shared/3c6d6ead-3ebc-4190-a143-ffeee81945a...
There are indeed quite a few startups that do this.
Note that these are all 'retrieval-augmented generation' tools rather than fine-tuning tools.
I have a related question but I don't want to start a new Ask HN for it. I have been using Machato app on Mac (https://news.ycombinator.com/item?id=35471091) for the last few months. This is a very nice app but with limited functionality. For example, it doesn't allow uploading PDF documents and asking questions on them as described in so many responses in this thread. I tried to search for a Mac app with this capability but my search came up empty. All the results I got are for web apps. Has anyone come across such an app? Paid/free doesn't matter.
https://colab.research.google.com/drive/1QMeGzR9FnhNJJFmcHtm...
from llama import QuestionAnswerModel
model = QuestionAnswerModel()
model.load_question_answer_from_csv("data.csv")
model.train() # returns id to run inference & playground interface
Choose GPT-4 and Code Interpreter (you have to turn it on in your Settings).
Then click the “plus” icon in the chat box.
Don’t upload anything sensitive.