HACKER Q&A
📣 homar

Have you fine-tuned any GPT models?


What does your fine-tuned model do, and how much did it cost to tune?


  👤 taylorfinley Accepted Answer ✓
I have! I’ve been trying to get ChatGPT to fluently speak rot13, a surprisingly challenging goal so far. I haven’t finished writing up the fine-tuning-with-Babbage post yet, but you can read the first few entries in the series at https://rambling.ai
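For anyone curious what training data for a goal like this might look like: here is a minimal sketch of generating rot13 prompt/completion pairs in OpenAI's legacy JSONL fine-tune format. The prompt separator and `END` stop token are my own assumptions, not necessarily what the blog series uses.

```python
# Hypothetical sketch: building rot13 fine-tune examples.
# The "###" separator and " END" stop token are assumptions.
import codecs
import json

def rot13(text: str) -> str:
    """Apply the rot13 substitution cipher via the stdlib codec."""
    return codecs.encode(text, "rot_13")

def make_example(plain: str) -> str:
    """One JSONL line in the legacy prompt/completion format."""
    return json.dumps({
        "prompt": f"Encode in rot13: {plain}\n\n###\n\n",
        "completion": " " + rot13(plain) + " END",
    })

print(make_example("hello world"))  # rot13("hello world") == "uryyb jbeyq"
```

Part of what makes rot13 hard for these models is tokenization: the ciphertext splits into unfamiliar tokens, so character-level mappings don't come for free.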

👤 thwayunion
Yes. A bunch of my personal writing, plus a lot of transcribed audio (recordings of my own conversations). Roughly $1K worth of cloud compute equivalent, but it ran on my home rig, so that's just a notional number.

👤 skim_milk
I'm using GPT-3 with babbage as the base model to fix common transcription errors in a professor's lectures and to annotate the text. With about 100 lecture transcripts manually corrected and annotated by hand, training on this data costs around $5 per session. Using the trained model to fix the rest of the 1,000 lectures will cost around $300.
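For reference, the corrected transcripts would typically be serialized into OpenAI's legacy prompt/completion JSONL format before training. A minimal sketch, assuming that format (the separator and stop tokens here are my assumptions, not skim_milk's actual setup):

```python
# Hypothetical sketch: turning (raw, corrected) transcript pairs
# into one JSONL record per line for a legacy fine-tune job.
import json

def to_jsonl(pairs):
    """pairs: iterable of (raw_transcript, corrected_transcript) tuples."""
    lines = []
    for raw, corrected in pairs:
        lines.append(json.dumps({
            "prompt": raw + "\n\n###\n\n",       # assumed separator
            "completion": " " + corrected + " END",  # assumed stop token
        }))
    return "\n".join(lines)

pairs = [
    ("teh mitochondria is powerhouse of cell",
     "The mitochondria is the powerhouse of the cell."),
]
print(to_jsonl(pairs))
```

Each line is an independent JSON object, which is what the fine-tuning endpoint expects for this legacy format.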

👤 muzani
Yes. Basically for https://random-character.com, where I use it to render some fixed output. One of the tedious bits was rephrasing the same thing in different ways, as well as inserting pronouns. I have another one similar to Quillbot, but tuned to write in a very specific tone.

It's on curie, with data in the hundreds of lines. The cost was trivial. Less than a dollar to train and several cents to run.


👤 moomoo11
Just to make sure - you can't tune davinci-003, right?

👤 r3trohack3r
A follow-up question: are there any good (written) learning resources on fine-tuning and embeddings?