HACKER Q&A
📣 johntiger1

How do I train a custom LLM/ChatGPT on my own documents?


Could've sworn there were 1 or 12 startups in the recent batch doing this... but I can't find any off the top of my Google search


  👤 gavinray Accepted Answer ✓
Do it yourself in ~20 lines of code with LlamaIndex.

You give it a directory of documents and ask it to build an index with vector embeddings over them.

Then you can use that index with models like ChatGPT.

The tutorial here shows the end-to-end process:

https://gpt-index.readthedocs.io/en/latest/getting_started/s...
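
For reference, the heart of that tutorial is only a few lines. A sketch, with the caveat that class names have moved around between gpt-index/LlamaIndex releases, so check the docs for your version:

    from llama_index import SimpleDirectoryReader, VectorStoreIndex

    # Load everything in ./data and build a vector-embedding index over it.
    documents = SimpleDirectoryReader("data").load_data()
    index = VectorStoreIndex.from_documents(documents)

    # Query the index; retrieval plus the LLM call happen under the hood.
    query_engine = index.as_query_engine()
    print(query_engine.query("What do these documents say about X?"))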


👤 usgroup
I think any such system will, at the moment, try to avoid fine-tuning the completion model, which leads to this kind of solution:

1. Semantically index your documents.

2. Given a prompt, extract relevant paragraphs from your own documents.

3. Frame a context for the prompt from extracted paragraphs.

4. Ask ChatGPT to answer the prompt, mindful of the context.

That way ChatGPT can be used out-of-the-box.
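
A minimal sketch of steps 2-4 in Python, assuming a hypothetical search_index() helper that wraps whatever semantic index you built in step 1:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def answer(prompt: str) -> str:
        # search_index() is a stand-in for your own semantic retrieval (step 1).
        paragraphs = search_index(prompt, top_k=3)  # 2. extract relevant paragraphs
        context = "\n\n".join(paragraphs)           # 3. frame a context
        resp = client.chat.completions.create(      # 4. answer, mindful of the context
            model="gpt-3.5-turbo",  # illustrative; any chat model works
            messages=[
                {"role": "system", "content": "Answer using only the provided context."},
                {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {prompt}"},
            ],
        )
        return resp.choices[0].message.content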


👤 KRAKRISMOTT
Most don't train/fine-tune on your data; they stick it into a vector database and perform similarity search. The method is called retrieval-augmented generation (RAG).

This is the high level algorithm:

1) Sentence segmentation/text splitting. The data is indexed in disparate chunks so the user can look up the specific information they want.

2) The split sentences/text chunks are run through a cheap LLM/specialized model, usually not state of the art but powerful/big enough to separate and associate the individual concepts in the latent space; current examples are OpenAI's embedding models (e.g. text-embedding-ada-002) and SentenceTransformers. Embedding generation is usually the first step in an NLP deep-learning model, so it's relatively cheap/lightweight. Essentially you take the first layer or two of a neural network and multiply the weight matrix against the input; this is the simplest type of embedding (see the original word2vec algorithm). Transformer embeddings are a bit different, but functionally they operate similarly.

3) The generated embedding vectors represent the input data in latent space, i.e. as abstract representations. The most famous example in modern natural language processing is possibly King - Man + Woman = Queen.

4) The vector embeddings are stored somewhere, usually a database, though you could dump them in an Excel file if you wanted. You want a dictionary-like structure mapping each embedding to its original sentence/text chunk.

5) The user writes a query, which is passed to the embedding model to generate another vector embedding (the query can be anything, as long as it's natural language, since that's what the embedding model was trained on). The query can also be created by an upstream LLM; the specifics don't matter as long as the sentence is mostly well-formed.

6) We find the most relevant chunks by performing a similarity search (nearest neighbor in the vector latent space, using cosine similarity, Euclidean distance, or whatever metric). There are many ways to do this: you can use kNN, you can use basic linear algebra and do n comparisons against all other vectors, or you can use a graph data structure (currently the preferred method in the fastest libraries).

7) You find the closest vectors and the text chunks/sentences they represent, then take those sentences and the original query (the raw natural-language text, not the generated embedding) and feed everything into a new LLM prompt. The LLM in this step is usually a state-of-the-art chat model like GPT-4 or Llama 2, not the cheap model used for indexing and generating vectors. You pass a prompt like this:

Answer the following query: {original query text} with the given context: {text chunks, sentences}.

And that's it. Retrieval-augmented generation has a fancy name, and langchain's code feels opaque as hell, like it was written by enterprise Java people, but the underlying algorithm is less than 30 lines of code with the standard ML and linear-algebra libraries.
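
To make that concrete, here is a rough sketch of the whole pipeline with sentence-transformers and numpy; the model name, the input file, and the paragraph chunking are all illustrative, and the search is the brute-force n-comparison variant from step 6:

    import numpy as np
    from sentence_transformers import SentenceTransformer

    embedder = SentenceTransformer("all-MiniLM-L6-v2")

    # Steps 1-4: split the text into chunks, embed them, keep the mapping.
    chunks = [p for p in open("docs.txt").read().split("\n\n") if p.strip()]
    chunk_vecs = embedder.encode(chunks, normalize_embeddings=True)

    def retrieve(query: str, k: int = 3) -> list[str]:
        # Steps 5-6: embed the query, brute-force cosine similarity search.
        q = embedder.encode([query], normalize_embeddings=True)[0]
        scores = chunk_vecs @ q  # dot product == cosine on normalized vectors
        return [chunks[i] for i in np.argsort(scores)[::-1][:k]]

    # Step 7: hand the retrieved chunks plus the raw query to a chat model.
    query = "What does the contract say about termination?"
    context = " ".join(retrieve(query))
    prompt = f"Answer the following query: {query} with the given context: {context}"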


👤 Nevin1901
I'm building a service called https://useftn.com which lets people do exactly that. You just upload your dataset and we'll fine-tune Llama 2 7B for you on that data. The model works with Hugging Face and all of the mainstream NLP frameworks.

It's in really early stages right now (mainly just looking to learn/help people), so if you have your data in a specific format, I'll be happy to code something up to make it work with your data.

(Disclaimer: I own this service.)


👤 ndr_
So RAG (retrieval-augmented generation) is all the rage, but there are problems with it that make it look almost conceptually flawed, to the point where results are flat-out poor?

There's the paper about non-uniform attention ("Lost in the Middle: How Language Models Use Long Contexts"), and another paper mentioned that LLMs may lose focus on the relevant retrieved content once the ratio of relevant to irrelevant content becomes small.

So, what's the current best practice to actually embed your content within the model?


👤 victorbjorklund
It isn't very hard. The easy approach (and probably the best alternative, unless you are big enough to justify retraining your own LLM every time your documents change) is to use vector search to find the most relevant parts of your documents (I use the OpenAI embedder and pgvector for Postgres), then feed that text to an LLM (could be GPT-4 or Llama) and ask it to answer the question using the text you provide.
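
For reference, a sketch of that retrieval step with psycopg2, assuming a hypothetical docs table created with pgvector (e.g. columns body text, embedding vector(1536)) and already filled with embeddings; all names here are illustrative:

    import psycopg2
    from openai import OpenAI

    client = OpenAI()

    def embed(text: str) -> list[float]:
        resp = client.embeddings.create(model="text-embedding-ada-002", input=text)
        return resp.data[0].embedding

    conn = psycopg2.connect("dbname=mydb")
    cur = conn.cursor()

    # <-> is pgvector's distance operator; grab the 3 nearest chunks.
    question = "What is our refund policy?"
    cur.execute(
        "SELECT body FROM docs ORDER BY embedding <-> %s::vector LIMIT 3",
        (str(embed(question)),),
    )
    context = "\n".join(row[0] for row in cur.fetchall())
    # ...then feed `context` plus the question to GPT-4 / Llama as described above.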

👤 snide
If you want something easy just to try it out, give https://xata.io/chatgpt a try. Load your docs into the db and start asking questions. Xata has a pretty good free tier, and you could likely get something running in about an hour.

That will at least get you to some results quickly. I've found ChatGPT is really more about the data you feed it than anything else.

(disclosure: I work at Xata)


👤 llmllmllm
https://flowch.ai (our project) does this and is currently free to use. It doesn't train a custom model but it gets good results.

A simple way to do this is to upload your files (PDFs, Word docs, virtually any type is supported), then generate reports using prompts based on those uploaded files. You can go from uploading to results in around a minute.

I've had good results uploading a bunch of documents, then running the same prompt on each of the documents with a few clicks using the "Flow Reports" feature.

We're working on lots of stuff on top of this, like scheduled reports (daily summaries / analysis / newsletters) and automated web scraping and data upload.

Here's an example using a BBC News article that I just uploaded to FlowChai (prompt: summarize in 200 words):

https://flowch.ai/shared/3c6d6ead-3ebc-4190-a143-ffeee81945a...


👤 tikkun
I have a list of tools here: https://llm-utils.org/List+of+tools+for+making+a+%22ChatGPT+...

There are indeed quite a few startups that do this.

Note that these are all 'retrieval-augmented generation' tools rather than fine-tuning tools.


👤 malshe
This thread is fantastic with a lot of great resources!

I have a related question but I don't want to start a new Ask HN for it. I have been using Machato app on Mac (https://news.ycombinator.com/item?id=35471091) for the last few months. This is a very nice app but with limited functionality. For example, it doesn't allow uploading PDF documents and asking questions on them as described in so many responses in this thread. I tried to search for a Mac app with this capability but my search came up empty. All the results I got are for web apps. Has anyone come across such an app? Paid/free doesn't matter.


👤 sharonzhou
You can do training here (for free, on small LLMs), with a simple interface.

https://colab.research.google.com/drive/1QMeGzR9FnhNJJFmcHtm...

https://lamini-ai.github.io/

    from llama import QuestionAnswerModel

    model = QuestionAnswerModel()
    model.load_question_answer_from_csv("data.csv")
    model.train()  # returns id to run inference & playground interface


👤 pud
I think most people don’t know this (I didn’t until recently) but you can upload documents to ChatGPT if you’re a paying member: https://chat.openai.com

Choose GPT-4 and Code Interpreter (you have to turn it on in your Settings).

Then click the “plus” icon in the chat box.

Don’t upload anything sensitive.


👤 ssddanbrown
Danswer [1], which can do this, was recently posted here [2].

[1] https://github.com/danswer-ai/danswer [2] https://news.ycombinator.com/item?id=36667374


👤 theblazehen
Do you specifically need training, or would being able to reference your documents be good enough? Have a look at projects such as langchain, which use embeddings to find the documents relevant to a user's query and hand them to the LLM to read and answer from.
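
A minimal sketch of that pattern with langchain's classic API (class locations move between versions, so treat this as directional; the chunks and question are illustrative):

    from langchain.chains import RetrievalQA
    from langchain.chat_models import ChatOpenAI
    from langchain.embeddings import OpenAIEmbeddings
    from langchain.vectorstores import FAISS

    # Your own document chunks; in practice use a text splitter over your files.
    texts = ["chunk one of your documents...", "chunk two..."]
    store = FAISS.from_texts(texts, OpenAIEmbeddings())  # embed + index

    # Retrieves relevant chunks per query and lets the LLM answer from them.
    qa = RetrievalQA.from_chain_type(llm=ChatOpenAI(), retriever=store.as_retriever())
    print(qa.run("What do my documents say about X?"))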

👤 jrpt
Use https://Docalysis.com to do that. There is an API too.

👤 arihantparsoya
You can try this startup: https://www.chatbase.co They have a lot of traction.

👤 knbrlo
That's a really good question. To go with it, I wonder: how are you thinking about interfacing with the model once it's been trained?

👤 kekeblom
I’ve not tried it, but you might want to look into https://www.stack-ai.com/.

👤 alexandr1us
You can upload PDFs to ChatGPT with Code Interpreter plugin enabled.