HACKER Q&A
📣 LewisDavidson

Bypassing GPT-4 8k tokens limit


Okay, I know it's not possible to bypass the 8k token limit. I don't have access to the 32k model yet.

I have transcripts that are typically around 15,000 tokens in size. I want to split this text into different topics. The problem is GPT-4's current token limit.

The obvious approach would be to split the text into chunks and then send them to the API. However, GPT-4 won't have the context of the other chunks to accurately identify the topics in the text. For example, a chunk boundary could fall in the middle of a topic. I can't think of a way to programmatically chunk the text without breaking up topics by mistake.

Does anyone know of a way around this, or have a better approach? Or is this just the reality of GPT-4 at the moment?


  👤 ndr_ Accepted Answer ✓
Check out llama_index at https://github.com/jerryjliu/llama_index. What it does: it builds an index over your data using OpenAI embedding vectors (the OpenAI Ada model). When you query, it compiles as much context out of this index as fits into GPT, based on similarity to your prompt. Be cautious, however: when I experimented with this, GPT-4 support with its larger context size wasn't there yet. I landed https://github.com/hwchase17/langchain/pull/1778, but I never wound up submitting another, similar patch (to llama_index? Don't remember). Make sure the GPT-4 context is really fully used and that a smaller size isn't assumed. Also, make sure GPT-4 is used as the LLM in the first place: the defaults used to be the older models.
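
Roughly, the flow looks like this. The llama_index API has changed a lot between versions, so treat the specific imports and class names below as assumptions rather than the current API; the point is just to show where GPT-4 gets pinned as the LLM.

```python
# Sketch of the llama_index flow described above (class names vary by version).
from llama_index import SimpleDirectoryReader, GPTSimpleVectorIndex, LLMPredictor, ServiceContext
from langchain.chat_models import ChatOpenAI

# Explicitly pin GPT-4 as the LLM; the library defaults used to be older models.
llm_predictor = LLMPredictor(llm=ChatOpenAI(model_name="gpt-4", temperature=0))
service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor)

# Build the embedding index over the transcript (OpenAI Ada embeddings by default).
documents = SimpleDirectoryReader("transcripts/").load_data()
index = GPTSimpleVectorIndex.from_documents(documents, service_context=service_context)

# At query time, the most similar chunks are packed into the prompt.
response = index.query("List the distinct topics covered in this transcript.")
print(response)
```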

👤 vlerenc
What I have automated is actually to find good chunks (splitting, in this order, on: heading, paragraph, sentence, word, non-alphanumeric, mid-word), feed them to gpt-3.5 (cheap, fast), and give it the previous chunk summaries up front, telling it to use that context but not include it in the next chunk summary. Finally, when I have all the chunk summaries, I feed them to gpt-4 for aggregation ("smarter"), telling it not to shorten the overall amount of text. Works decently well.
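
A rough sketch of that pipeline, assuming the OpenAI Python client (v1) and a deliberately naive paragraph-only splitter; a real implementation would fall back through heading/paragraph/sentence/word boundaries as described above.

```python
# Chunk -> summarize-with-rolling-context (gpt-3.5) -> aggregate (gpt-4) sketch.
from openai import OpenAI

client = OpenAI()

def chunk_text(text, max_chars=6000):
    # Naive paragraph-level splitting for brevity.
    chunks, current = [], ""
    for para in text.split("\n\n"):
        if current and len(current) + len(para) > max_chars:
            chunks.append(current)
            current = ""
        current += para + "\n\n"
    if current:
        chunks.append(current)
    return chunks

def summarize_chunks(text):
    summaries = []
    for chunk in chunk_text(text):
        context = "\n".join(summaries)
        resp = client.chat.completions.create(
            model="gpt-3.5-turbo",  # cheap and fast for the per-chunk pass
            messages=[
                {"role": "system", "content": "Summarize the new chunk. Use the previous summaries only as context; do not repeat them."},
                {"role": "user", "content": f"Previous summaries:\n{context}\n\nNew chunk:\n{chunk}"},
            ],
        )
        summaries.append(resp.choices[0].message.content)
    return summaries

def aggregate(summaries):
    resp = client.chat.completions.create(
        model="gpt-4",  # "smarter" model for the final aggregation
        messages=[
            {"role": "system", "content": "Merge these chunk summaries into a list of topics without shortening the overall amount of text."},
            {"role": "user", "content": "\n\n".join(summaries)},
        ],
    )
    return resp.choices[0].message.content
```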

👤 mansueli
Did you take a look at https://github.com/yasyf/compress-gpt?

👤 sharemywin
You could use NLTK to summarize the text before you send it to GPT-4.

I have a script that uses NLTK to do this. It needs to be cleaned up, but it could be a starting point.

https://github.com/gnuconcepts/Text_summary
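
This is not the linked script, just a generic frequency-based extractive summarizer with NLTK to illustrate the kind of pre-shrinking meant here; the sentence count is an arbitrary knob.

```python
# Simple extractive summarizer: keep the highest-scoring sentences,
# where a sentence's score is the sum of its (non-stopword) word frequencies.
import heapq
import nltk
from nltk.corpus import stopwords
from nltk.tokenize import sent_tokenize, word_tokenize

nltk.download("punkt")
nltk.download("stopwords")

def summarize(text, num_sentences=20):
    stops = set(stopwords.words("english"))

    # Word frequencies, ignoring stopwords and punctuation.
    freqs = {}
    for word in word_tokenize(text.lower()):
        if word.isalnum() and word not in stops:
            freqs[word] = freqs.get(word, 0) + 1

    # Score each sentence by its word frequencies.
    scores = {}
    for sent in sent_tokenize(text):
        for word in word_tokenize(sent.lower()):
            if word in freqs:
                scores[sent] = scores.get(sent, 0) + freqs[word]

    # Keep the top sentences, preserving their original order.
    best = set(heapq.nlargest(num_sentences, scores, key=scores.get))
    return " ".join(s for s in sent_tokenize(text) if s in best)
```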


👤 aClicheName
You could try some sort of compression instead. By that I mean: instead of appending the raw chunks, generate a summary of each rough area along with a variable holding its character position. Store those summaries as a kind of "index" that roughly outlines each area. Stitched together, they give you something that "roughly knows" about the document and knows where to go looking for further, in-depth info. - Henry
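
A sketch of that summary index. `summarize` here is a placeholder (it could be the NLTK summarizer above or a gpt-3.5 call), and the fixed region size is just for illustration.

```python
# "Summary index" sketch: one short summary per region of the document,
# stored with its character offsets so detailed text can be fetched later.
# `summarize` is a hypothetical helper, not a real library function.
def build_summary_index(text, region_size=4000):
    index = []
    for start in range(0, len(text), region_size):
        end = min(start + region_size, len(text))
        index.append({
            "start": start,      # character position of the region
            "end": end,
            "summary": summarize(text[start:end]),
        })
    return index

# The concatenated summaries act as a compressed outline that fits in one
# prompt; when a topic needs more detail, fetch text[start:end] for that entry.
```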

👤 paxo
You can use semantic search, then feed that into the LLM.

There are many solutions already; look into Haystack by deepset, or, if you are up for a challenge, you could build something with LangChain.
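
If you'd rather skip the frameworks, a bare-bones version of the same idea looks like this: embed the chunks and the query with OpenAI embeddings, rank by cosine similarity, and feed only the top chunks to GPT-4. The embedding model name and chunk count here are just example choices.

```python
# Minimal semantic search over transcript chunks with OpenAI embeddings.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-ada-002", input=texts)
    return np.array([d.embedding for d in resp.data])

def top_chunks(chunks, query, k=5):
    chunk_vecs = embed(chunks)
    query_vec = embed([query])[0]
    # Cosine similarity between the query and every chunk.
    sims = chunk_vecs @ query_vec / (
        np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(query_vec)
    )
    return [chunks[i] for i in np.argsort(sims)[::-1][:k]]
```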


👤 ofermend
Longer sequence lengths in transformers are an active area of research (see, e.g., the great work from the FlashAttention team: https://github.com/HazyResearch/flash-attention), and I'm sure it will improve things dramatically very soon.

👤 noobcoder
If you don't really have a budget constraint, you could try compressing the transcript by sending it a few sentences at a time. You would need some sort of dependency parsing to check for the splits. Ask GPT to compress each piece. Keep doing that until you are under 8k tokens, then feed in the overall compressed transcript.
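
A sketch of that loop, using tiktoken to count tokens. The split here is a naive fixed-size slice rather than real dependency parsing, and the model/prompt choices are assumptions.

```python
# Iteratively compress the transcript until it fits in the 8k window.
import tiktoken
from openai import OpenAI

client = OpenAI()
enc = tiktoken.encoding_for_model("gpt-4")

def compress_piece(piece):
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "Compress this text, preserving every topic and key fact."},
            {"role": "user", "content": piece},
        ],
    )
    return resp.choices[0].message.content

def compress_until_fits(text, limit=8000, piece_size=3000):
    while len(enc.encode(text)) > limit:
        pieces = [text[i:i + piece_size] for i in range(0, len(text), piece_size)]
        text = "\n".join(compress_piece(p) for p in pieces)
    return text
```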

👤 MH15
Interesting question and quality answers, but I was under the impression that specific technical Q&A posts aren't for HN. This seems like a question better suited for Stack Overflow or a forum dedicated to AI engineering.

👤 noman-land
I haven't tried this yet, but I've been thinking about experimenting with feeding the summary of the first arbitrary chunk in with the next chunk, then feeding the summary of the second chunk in with the third chunk, and so on.

👤 f0e4c2f7
If you have access to Bing, someone figured out you can enable a longer token limit by editing the HTML, as the limit was set browser-side rather than server-side. Not sure if this has been patched yet.

👤 fakedang
Divide your data into smaller chunks, then use some kind of initial vector similarity check to choose only the relevant bits.

👤 aiddun
Bing Chat's creative mode is the 32k model.

👤 summarity
Use semantic compression (plenty of papers on that now). Works for both language and code.