I don't trust any LLM to summarize articles for me as it will be biased (one way or another) and it will miss the nuance of the language/tone of the article, if not outright make mistakes. That's another one off the table.
Although I don't use them much for this, I've found two things they're good at:

- Coming up with "ideas" I wouldn't come up with
- Summarizing hundreds (or thousands) of documents in a non-standard format (i.e. human-readable reports, legal documents) that regular expressions wouldn't work with, and putting them into something like a table.

But still, that's only when I care about searching or discovering info/patterns, not when I need a fully accurate "parser".
I'm really surprised at how useless LLMs have turned out to be for my daily life, to be honest. So far, at least.
I usually structure it like: --- My top 5 strengths in the Clifton StrengthsFinder system are A, B, C, D, E, and I am trying to communicate effectively with someone whose top five strengths are R, T, ∑, √, S.
I need help taking the following request and reframing it in a way that will be positively received by my coworker and make them feel like I am not being insensitive or overly flowery.
The way I would phrase the request is: […]. Please ask me any questions that would give you more insight into my coworker, other details that could resonate with them, or additional background that will help the translated request be received positively.
--- While the output is usually too verbose, it gives me a better reframing of my request and has resulted in less pushback when I need to get people to focus on unexpected or different priorities.
As with most LLM use cases, it's best when used to augment an existing workflow that reinforces it. In my case, I already have a whole setup where I'm using Anki flash cards for vocabulary and grammar study, some curated human-written resources for learning grammar, and native-language content for reading and listening immersion. GPT is really helpful for quickly getting a sentence-level translation, a translation of each word, and full descriptions of the grammar points at work in the sentence. It saves me a lot of time over working with a dictionary and juggling grammar resources, vocab, etc. I can ask it follow-up questions, and even switch straight into trying to use the grammar/vocab in an example sentence of my own right on the spot. I seriously think I'd be way worse off if I didn't have access to an LLM throughout the process.
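For the curious, the core of it is just one structured ask per sentence. A minimal sketch with the OpenAI Python client (the model name, prompt wording, and helper name are my placeholders, not a prescription):

    # Hypothetical helper: ask a chat model to gloss one sentence.
    # Model and prompt wording are assumptions; any chat model works.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def gloss(sentence: str) -> str:
        prompt = (
            "Translate this sentence, then gloss each word and explain "
            "every grammar point it uses:\n" + sentence
        )
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    print(gloss("<paste the sentence here>"))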
I can confirm that asking LLMs to format bare subtitles by adding punctuation (everything from commas to paragraph breaks, plus quote marks, dashes, colons, etc.) can work very well.
It may seem like a minor feature, but it's something information consumers easily benefit from: when you need to process material in video format, you can download the subtitles, add formatting with an automation, then efficiently skim, study, or process the transcript and video together.
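A minimal sketch of such an automation, assuming yt-dlp for the subtitle download and the OpenAI Python client for the formatting pass (URL, model, and prompt are placeholders):

    # Sketch: download auto-subtitles with yt-dlp, then ask a chat model
    # to punctuate them. URL, model, and prompt are assumptions.
    import glob
    import subprocess
    from openai import OpenAI

    url = "https://example.com/some-video"  # placeholder
    subprocess.run(
        ["yt-dlp", "--skip-download", "--write-auto-subs",
         "--sub-format", "vtt", url],
        check=True,
    )
    raw = open(glob.glob("*.vtt")[0]).read()  # grab the downloaded file

    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": "Reformat this bare transcript: add punctuation, "
                       "quote marks, dashes, and paragraph breaks. Do not "
                       "change the wording.\n\n" + raw,
        }],
    )
    print(resp.choices[0].message.content)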
I started with ChatGPT and am now using Claude 3 Opus.
For background, I’ve been in tech for 40 years from developer to architect to director.
Pairing with an LLM has allowed me to iteratively learn and design code significantly faster than I could otherwise. And I say “design” code because that’s the key difference. I prompt the LLM for help with logic and capabilities and it emits code. I approve the bits I like and iterate on things that are either wrong or not what I expected.
I have many times sped up the process of going down rabbit holes to test ideas, when normally this would have eaten up hours of wasted time.
And LLMs are simply fantastic as learning assistants (not as a teacher). You can pick up a topic like data structures and an LLM can speed up your understanding of the elements and types of data structures.
And best of all, it’s always polite.
I mostly do that when I need to make a complete document, because I struggle with beginnings and endings. I like the middle.
I used it to help me think through what I’d need for color film development in my darkroom.
Basically if I already have some idea of what I need, I trust it to help guide me. I can evaluate its output sufficiently well.
If I’m learning something entirely new, where it doesn’t matter a great deal whether I get it right but I can test the output, it’s pretty useful too.
I’ve gotten in the habit of asking LLMs to coach me to avoid the things that can go wrong.
Coming from a non-technical arts field, but having always been interested in the technical side of things, I've used LLMs to realize functional versions of software projects I'd never had the time to learn to build myself. That lets me act more like a project manager than a software developer, but it has also exposed me to so much code that I've become more comfortable writing my own functions and making my own edits. I also use LLMs frequently to build shortcuts or write commands that make common processes in my workflow quicker.
From a creative POV, I frequently use LLMs along with models like Whisper to transcribe and make sense of long ramblings, turning a 20-minute voice memo from a car ride into a functional plan and the organized beginnings of a project such as a screenplay, essay, or movie.
Whenever I get off a documentary shoot, I also run all my footage through Whisper to get timecoded transcripts, plus the highlights from those transcripts that the LLM deems notable. This gives me a good jumping-off point for crafting the narrative.
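The transcription step is only a few lines with the open-source whisper package; a rough sketch (file name and model size are placeholders, and the highlight pass would be a separate LLM call on the resulting text):

    # Sketch: timecoded transcript via the openai-whisper package.
    # File name and model size are placeholders.
    import whisper

    model = whisper.load_model("base")
    result = model.transcribe("interview_cam_a.mp4")

    for seg in result["segments"]:
        # Each segment carries start/end times (in seconds) plus the text.
        print(f"[{seg['start']:7.2f} -> {seg['end']:7.2f}] {seg['text'].strip()}")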
Right now I see LLMs as a really good tool to help kick off and trudge through projects that might be daunting to take on solo otherwise, but they are massively underpowered at actually "finishing" anything. As a result, I have a ton of projects in-progress that I wouldn't have started otherwise, but probably the same % ratio of finished to unfinished projects. In that sense, LLMs have increased the population of my ideas-graveyard, but put me in a better position to pick the ideas back up if I renew my interest in any of them.
However, the confabulation/hallucination rate seemed highly subject-dependent: AI/ML citations were quite robust, but cognitive science was so bad that it wasn't worth using. Eventually I went back to the Old Ways. But there are a good number of academics that use it as an alternative to Google Scholar.
It can make learning more personal, letting you follow your own path, and you can ask whether you're missing something important by doing it that way.
And it works for most topics, for most ages, at your own pace. We are entering a Diamond Age.
The next is for writing up the bureaucratic nonsense my organisation asks me to do: monthly status reports, bandwidth allocation, deal-win summaries, and the like. I write down what I've done at the end of each day, so I just feed that into an LLM and ask it to summarise the bulk bullet points into prose. It saves me god knows how many hours refactoring documents. I modify the prose when it's done, to match my personal style and storytelling methodology, but it gets me the barebones draft, which is the most time-consuming part.
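A minimal sketch of that step, assuming the day's notes sit in a plain-text file and using the OpenAI Python client (path, model, and prompt wording are all placeholders):

    # Sketch: turn a day's bullet points into a prose draft.
    # Path, model, and prompt wording are assumptions.
    from openai import OpenAI

    notes = open("worklog-2024-05.txt").read()

    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": "Summarise these daily bullet points into a short "
                       "monthly status report in plain prose:\n\n" + notes,
        }],
    )
    print(resp.choices[0].message.content)  # then edit to taste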
I love LLMs personally, and am embracing them primarily as a scribe and editor.
All I’ve done is try one of those pre-packaged image generation models on my M1 Air back when the first of those appeared.
Honestly, I can't remember my life before LLMs, and that is a bit scary, but my productivity and overall self-esteem have improved quite a bit since I started using them. Heck, I don't think I'd ever have gotten into Rust if it wasn't for the learning plan I got Claude to write for me.
You can find my prompts in the llm-prompts[1] repository. Any new use case I come up with ends up there; today I used it to name a photography project, for example, so that prompt will land there after dinner.
Those then get used as part of my end of day report.
Example code: https://www.piotrgryko.com/posts/git-conventional-commit-gpt...
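A rough sketch of the shape of such a tool, assuming the OpenAI Python client (the linked post has the actual implementation, which may differ; model and prompt here are assumptions):

    # Sketch: draft a Conventional Commits message from the staged diff.
    # Model and prompt are assumptions; see the linked post for the real tool.
    import subprocess
    from openai import OpenAI

    diff = subprocess.run(
        ["git", "diff", "--staged"],
        capture_output=True, text=True, check=True,
    ).stdout

    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": "Write a Conventional Commits message (type(scope): "
                       "subject, then body) for this diff:\n\n" + diff,
        }],
    )
    print(resp.choices[0].message.content)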
I also use it for searches when I know the specific documentation I'm looking for has to compete with SEO spam. It's also pretty good at explaining code: I've pasted in snippets from languages whose syntax I'm not familiar with and asked it to explain what's happening, and it does an OK job.
I also like to use it for recipes: "create a recipe for chicken and rice that feeds 4", "make it spicier", etc.
For fiction it's great. With facts you need to be much more careful, and make sure you validate them.
Like, I write a super messy, barely coherent paragraph and ask the LLM to streamline the text and make it easy to understand, while avoiding the LLM's grandiose language. Obviously it needs some corrections, but it's way faster than doing it normally.
Also just to shorten a longer text, or to reformat it according to some direction, like converting daily notes into proper Zettelkasten ones.
This is easier than, say, going to the browser, finding the ChatGPT tab, and creating a new chat.
I found myself using LLMs more and getting more out of them because of this frictionless interaction. They've become more of actual "helpful assistants."
It has been trained on all this data: documentation, GitHub issues, Jira and Zendesk tickets, Slack messages, etc. It's a sort of customer service bot that can help you code.
That's been the real magic that I've experienced.
Like: create a curl request from this tcpdump exchange. Or: take this slightly corrupted SQL query from the logs and print it properly.
Too amorphous and infrequent to properly automate, too labour-intensive to do by hand.
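For these one-offs, a tiny generic stdin-to-LLM helper is enough; a sketch, assuming the OpenAI Python client (model and script name are placeholders):

    # Sketch: generic "fix this blob" helper for one-off cleanups.
    # Usage: python fixup.py "create a curl request from this" < dump.txt
    import sys
    from openai import OpenAI

    instruction = sys.argv[1]
    blob = sys.stdin.read()

    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": instruction + ":\n\n" + blob}],
    )
    print(resp.choices[0].message.content)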
A less useful, but still occasionally handy, use is producing a SMALL SNIPPET of code in some language I don't know; I can sometimes correct it into something useful, so it can be mildly interesting for a very limited, specific task.
In broader ML terms:
- OCR might become much better, which, while it's nonsense that it's still needed in 2024, remains a thing because many still live like it's 1954;
- automatic alerts on video surveillance and the like might be nice, though not super-trustworthy;
- better image manipulation tools (not only for producing deepfake porn) might become a thing; limited and not always working, but still very nice.