HACKER Q&A
📣 sabrina_ramonov

Most successful example using LLMs in daily work/life?


  👤 mateo1 Accepted Answer ✓
I'm not a programmer, and when I write a program it's imperative that it's structured right and works predictably, because I have to answer for the numbers it produces. So LLMs have basically no use for me on that front.

I don't trust any LLM to summarize articles for me as it will be biased (one way or another) and it will miss the nuance of the language/tone of the article, if not outright make mistakes. That's another one off the table.

Although I don't use them much for this, I've found two things they're good at:

- Coming up with "ideas" I wouldn't come up with
- Summarizing hundreds (or thousands) of documents in a non-standard format (i.e. human-readable reports, legal documents) that regular expressions wouldn't work with, and putting them into something like a table.

But still, that's only when I care about searching or discovering info/patterns, not when I need a fully accurate "parser".

Honestly, I'm really surprised at how useless LLMs turned out to be for my daily life. So far, at least.


👤 Neff
Interpersonal Communication - My employer is a big fan of the Clifton StrengthsFinder school of thought, and I have found that generative LLMs are really helpful in giving me other ways to phrase requests to people I tend to find difficult to communicate with successfully.

I usually structure it like:

---
My top 5 strengths in the Clifton StrengthsFinder system are A, B, C, D, E, and I am trying to effectively communicate with someone whose top five strengths are R, T, ∑, √, S.

I need help taking the following request and reframing it in a way that will be positively received by my coworker and make them feel like I am not being insensitive or overly flowery.

The way I would phrase the request is .

Please ask any questions that would help provide more insight into my coworker, other details that could resonate with them, or additional background that will help the translated request be received positively.
---

While the output is usually too verbose, it gives me a better reframing of my request and has resulted in less pushback when I need to get people to focus on unexpected or different priorities.


👤 porkloin
I've used GPT-4 pretty extensively while learning Japanese: to have a written conversation partner, to ask for clarification around grammar, or to translate native content for me and explain it. I've validated a lot of the answers it comes up with, and although it hallucinates occasionally, it doesn't do so on the same points consistently. I'm going to encounter a lot of my vocabulary and grammar hundreds or thousands of times, so even if it's incorrect 1% of the time, it's not a huge problem.

As with most LLM use cases, it's best when used to augment an existing workflow that reinforces it. In my case, I already have a whole setup where I'm using Anki flash cards for vocabulary and grammar study, some curated human-written resources for learning grammar, and native-language content for reading and listening immersion. GPT is really helpful for quickly getting a sentence-level translation, a translation of each word, and full descriptions of the grammar points at work in the sentence. It saves me a lot of time over working with a dictionary and juggling grammar resources, vocab, etc. I can ask it follow-up questions, and even switch straight into trying to use the grammar/vocab in an example sentence of my own right on the spot. I seriously think I'd be way worse off if I didn't have access to an LLM throughout the process.
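A minimal sketch of the kind of query this workflow boils down to, assuming the official OpenAI Python client; the model name, prompt wording, helper function, and example sentence are placeholders, not the commenter's actual setup:

```
# Hypothetical sketch: ask an LLM for a translation, per-word gloss, and
# grammar notes for a Japanese sentence. Requires OPENAI_API_KEY to be set.
from openai import OpenAI

client = OpenAI()

def explain_sentence(japanese: str) -> str:
    """Return a translation, word-by-word gloss, and grammar explanation."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # any capable chat model
        messages=[
            {"role": "system",
             "content": "Translate the Japanese sentence, then gloss each word "
                        "(reading + meaning), then explain the grammar points used."},
            {"role": "user", "content": japanese},
        ],
    )
    return resp.choices[0].message.content

print(explain_sentence("猫が机の上で寝ている。"))
```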


👤 kardos
It often replaces Google search. Instead of sifting through heaps of SEO junk and the accompanying trackers, ads, popups, widgets, etc., and going through a search-term refinement cycle to eventually find something, the LLM immediately produces a clean (ad-free, nag-free, dark-pattern-free, etc.) result. It generally needs to be checked for correctness and has limitations in terms of recency. But avoiding the low-signal sea of crap that Google returns is a breath of fresh air.

👤 mdp2021
I have been thinking for a long time that we do not have (to the best of my knowledge) a good transcript formatter, and that Transformers should be part of the solution - a huge wealth of material is on YouTube, and its subtitles do not use punctuation.

I can confirm that asking LLMs to format bare subtitles by adding punctuation (from commas to paragraph breaks, with quote marks, dashes, colons, etc.) can work very well.

It may seem a minor feature, but it is something that information consumers easily benefit from (when you need to process material in video form, you can download the subtitles, add formatting with an automation, then efficiently skim, study, or work through transcript and video together...).
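A rough sketch of that automation, assuming yt-dlp on the PATH and the OpenAI Python client; the video URL, output filenames, model name, and prompt are all placeholders:

```
# Hypothetical sketch: download YouTube auto-subtitles with yt-dlp, then ask
# an LLM to add punctuation and paragraph breaks without changing the words.
import subprocess
from pathlib import Path

from openai import OpenAI  # needs OPENAI_API_KEY in the environment

VIDEO_URL = "https://www.youtube.com/watch?v=XXXXXXXXXXX"  # placeholder

# 1. Grab the auto-generated English subtitles without downloading the video.
subprocess.run(
    ["yt-dlp", "--skip-download", "--write-auto-subs", "--sub-langs", "en",
     "--sub-format", "vtt", "-o", "subs.%(ext)s", VIDEO_URL],
    check=True,
)

# yt-dlp typically writes something like subs.en.vtt; the VTT timestamps are
# left in here for simplicity, though stripping them first would shrink the prompt.
raw = Path("subs.en.vtt").read_text(encoding="utf-8")

# 2. Ask the model to punctuate and paragraph the bare transcript.
client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # any capable chat model
    messages=[
        {"role": "system",
         "content": "Add punctuation, quote marks, and paragraph breaks to "
                    "this transcript. Do not change, add, or remove any words."},
        {"role": "user", "content": raw},
    ],
)
Path("transcript_formatted.txt").write_text(
    response.choices[0].message.content, encoding="utf-8")
```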


👤 ChicagoDave
I’ve been designing and developing a parser-based interactive fiction (text adventure) authoring system using .NET Core/C#.

I started with ChatGPT and am now using Claude Opus 3.

For background, I’ve been in tech for 40 years from developer to architect to director.

Pairing with an LLM has allowed me to iteratively learn and design code significantly faster than I could otherwise. And I say “design” code because that’s the key difference. I prompt the LLM for help with logic and capabilities and it emits code. I approve the bits I like and iterate on things that are either wrong or not what I expected.

Many times it has let me go down rabbit holes to test ideas quickly, where doing so would normally have cost me hours of wasted time.

And LLMs are simply fantastic as learning assistants (not as teachers). You can pick up a topic like data structures, and an LLM will speed up your understanding of the different types of data structures and their elements.

And best of all, it’s always polite.


👤 MountainMan1312
I'm autistic and sometimes I just cannot put my brain stuff into words. On a few occasions, I've just haphazardly shoved a list of thoughts into ChatGPT and said "make this sound not dumb" and it does just good enough. Usually I'll copy the general structure of the sentence/paragraph and change it around until it sounds like I wrote it.

I mostly do that when I need to make a complete document, because I struggle with beginnings and endings. I like the middle.


👤 macintux
I’ve used it for simple code suggestions when working in a language I’m unfamiliar with, or testing some new (to me) corner of Python.

I used it to help me think through what I’d need for color film development in my darkroom.

Basically if I already have some idea of what I need, I trust it to help guide me. I can evaluate its output sufficiently well.

If I’m learning something entirely new, where it doesn’t matter a great deal whether I get it right but I can test the output, it’s pretty useful too.


👤 semireg
I’m a firm believer that good enough means avoiding catastrophe. Baking bread? Making beer? Caulking a window? Just avoid these common mistakes and the outcome will be good enough.

I’ve gotten in the habit of asking LLMs to coach me to avoid the things that can go wrong.


👤 paintboard3
LLMs have massively increased the number of creative projects that I start. They make the jumping-off point for a vague idea much easier to stomach.

I come from a non-technical arts field but have always been interested in the technical side of things. LLMs have let me realize functional versions of software projects that I've never had the time to learn to build myself, allowing me to act more like a project manager than a software developer, while exposing me to so much code that I've also become more comfortable making my own functions and edits to the code. I also use LLMs frequently to build shortcuts or write commands for me that make common processes in my workflow quicker.

From a creative POV, I frequently use LLMs along with models like Whisper to transcribe and make sense of long ramblings, turning a 20-minute voice memo from a car ride into a functional plan and the organized beginnings of a project such as a screenplay, essay, or movie.

Whenever I get off a documentary shoot, I also run all my footage through Whisper to get timecoded transcripts, as well as the highlights from those transcripts that the LLM deems notable. This gives me a good jumping-off point to start crafting the narrative.
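A rough sketch of that kind of pipeline, assuming the open-source whisper package and the OpenAI client; the model choices, file name, and prompt are placeholders rather than the commenter's actual tooling:

```
# Hypothetical sketch: transcribe footage with Whisper, build a timecoded
# transcript, then ask an LLM to flag notable moments as a starting point.
import whisper
from openai import OpenAI

model = whisper.load_model("base")
result = model.transcribe("interview_clip.mp4")  # Whisper decodes via ffmpeg

# Each segment has start/end times in seconds plus the recognized text.
transcript = "\n".join(
    f"[{seg['start']:8.2f} -> {seg['end']:8.2f}] {seg['text'].strip()}"
    for seg in result["segments"]
)

client = OpenAI()
highlights = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "From this timecoded transcript, list the most notable "
                    "moments with their timecodes and a one-line summary each."},
        {"role": "user", "content": transcript},
    ],
)
print(highlights.choices[0].message.content)
```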

Right now I see LLMs as a really good tool to help kick off and trudge through projects that might be daunting to take on solo otherwise, but they are massively underpowered at actually "finishing" anything. As a result, I have a ton of projects in-progress that I wouldn't have started otherwise, but probably the same % ratio of finished to unfinished projects. In that sense, LLMs have increased the population of my ideas-graveyard, but put me in a better position to pick the ideas back up if I renew my interest in any of them.


👤 vocram
As a non-native English speaker, I find it very helpful to use an LLM to check whether a sentence I wrote is clear and correct, and whether there is a more idiomatic way to express the same thing - btw, I did not do it with what I wrote here :-)

👤 panza
Copilot. I suspect a lot of us will (or already do) use it at some level, even if it's just autocompleting logging statements, writing boilerplate/comments, suggesting improvements, etc.

👤 nicklecompte
I tried using GPT-4 as a better way to search papers - it can be very annoying when you know the gist of a result but don't know the authors, or enough details about the methodology, for Google to find it. GPT-4 was pretty good at figuring out which citation I wanted from a vague description.

However, the confabulation/hallucination rate seemed highly subject-dependent: AI/ML citations were quite robust, but cognitive science was so bad that it wasn't worth using. Eventually I went back to the Old Ways. But there are a good number of academics who use it as an alternative to Google Scholar.


👤 shreyarajpal
I get really great value from using it for brainstorming. A common workflow for me is to write out a project plan and figure out the issues, or to familiarize myself with an engineering area really quickly.

👤 gmuslera
Learning. It is not passive anymore. You have a conversation: you can ask why, whether something different would work, or how something would be done, without going through a lot of documentation; you can get criticism of your proposed solutions; you have all the time you want, can go on your own schedule, can ask about ideas you got while walking, etc.

It can make learning more personal, following your own path, and you can ask whether you are missing something important by doing it that way.

And it works for most topics, for most ages, at your own pace. We are entering a Diamond Age.


👤 Aromasin
I live in Europe, so most of my customers don't have English as a first language. Their questions generally arrive in pretty broken English, and honestly, reading through and making sense of what they're trying to say is a real mental challenge at times. I use LLMs to reformat and structure their message/ticket, which I paste into my notes. The accuracy is pretty good - certainly as good as mine, although I do proofread. I then ask it to pull out the pertinent information and bullet-point it. I can turn those bullets into action items for me to investigate or respond to. It saves me about 15 minutes on each case, meaning I save maybe an hour every day on translation.

The next is for writing up the bureaucratic nonsense my organisation asks me to do: monthly status reports, bandwidth allocation, deal-win summaries and the like. I write down what I've done at the end of each day, so I just feed that into an LLM and ask it to summarise the bulk bullet points into prose. It saves me god knows how many hours refactoring documents. I modify the prose when it's done, to match my personal style and storytelling methodology, but it gets me the barebones draft, which is the most time-consuming part.

I love LLMs personally, and am embracing them primarily as a scribe and editor.


👤 vundercind
Sub-question: is anyone using local or at least self-hosted AI systems productively? What kind of hardware does that take? What's the rough cost? Do you refine the model on custom data? What does that part look like? (Much higher hardware requirements, I expect.) Which open-source projects are aiding your efforts?

All I’ve done is try one of those pre-packaged image generation models on my M1 Air back when the first of those appeared.


👤 tech_ken
It saves me a lot of keystrokes as a coding copilot. Pretty good at detecting my usual patterns, and most of the time it can auto-complete a line with either something correct or something very close to correct (usually just a few small tweaks required). I write a lot of SQL and it's especially good at autocompleting big join clauses, which my carpals greatly appreciate.

👤 jamesponddotco
I use it for coding, checking grammar, improving the UX of command-line applications, learning new programming languages, and a bunch of other things. My wife recently decided to go back to university to study translation, and Claude has been a great tool for her studies too.

Honestly, I can't remember my life before LLMs, and that is a bit scary, but my productivity and overall self-esteem have improved quite a bit since I started using them. Heck, I don't think I'd ever have gotten into Rust if it wasn't for the learning plan I got Claude to write for me.

You can find my prompts in the llm-prompts[1] repository. Any new use case I come up with ends up there; today, for example, I used it to name a photography project, so that prompt will end up in there after dinner.

[1]: https://sr.ht/~jamesponddotco/llm-prompts/


👤 pgryko
I use GPT-4 for summarizing git diffs into commit messages (Llama 3 via Groq also works nicely).

Those then get used as part of my end-of-day report.

Example code: https://www.piotrgryko.com/posts/git-conventional-commit-gpt...
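A minimal sketch of the idea (not the linked example code), assuming the OpenAI Python client and a staged diff; the model name and prompt are assumptions:

```
# Hypothetical sketch: summarize the staged git diff into a conventional
# commit message. Requires git and OPENAI_API_KEY.
import subprocess

from openai import OpenAI

diff = subprocess.run(
    ["git", "diff", "--staged"], capture_output=True, text=True, check=True
).stdout

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "Write a single conventional commit message for this diff: "
                    "type(scope): subject on the first line, then a short body."},
        {"role": "user", "content": diff},
    ],
)
print(resp.choices[0].message.content)
```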


👤 mdp2021
Among the top uses of LLMs I would place the ability to obtain information (or pointers to information) that search engines will not return because they "do not understand the question", or that gets buried in the excessive noise of their results...

👤 chasd00
I use it to help write proposals sometimes. I can prompt it to compare/contrast two technology providers, and that gets me started writing. It's never a perfect fit, but it helps get the creative/sales juices flowing.

I also use it for searches when I know the specific documentation I'm looking for has to compete with SEO spam. It's also pretty good at explaining code; I've pasted in snippets of code from languages whose syntax I'm not familiar with and asked it to explain what's happening, and it does an OK job.

I also like to use it for recipes, like "create a recipe for chicken and rice that feeds 4", "make it spicier", etc.


👤 npteljes
I love using it to refresh my knowledge, to help me remember a technical term, or have it provide me an overview of a topic, comparing two alternatives for a function, things like this. I also used it to generate boilerplate code, especially in domains I was not familiar with. The code wasn't working "out of the box", but it was still helpful as a starting template, as I have the most trouble laying the foundations.

👤 hi_hi
A general rule of thumb I follow is: "Do I need to output fact or fiction?"

For fiction, it's great. With facts, you need to be much more careful and make sure you validate the output.


👤 vinhnx
I'm building my own AI chatbot, with multiple LLM models to switch between and choose from. I've also added enhanced multi-modal capabilities, so you can casually ask the AI to generate an image or just chat. It helps me learn about the LLM landscape and helps me with daily work/life. You can try it on my GitHub.

[1] https://github.com/vinhnx/vt.ai


👤 dr_october
I use it to unlock Russian books (literature, history) and articles (mainly old Soviet chess magazines). ChatGPT4 produces very nice first-pass translations.

👤 HayBale
Text correction, or generating full sentences from scraps.

Like, I write a super messy, barely coherent paragraph and ask the LLM to streamline the text and make it easy to understand, while avoiding the LLM's grandiose language. Obviously it needs some corrections, but it's way faster than normal.

Also just to shorten a longer text, or even reformat it according to some direction, like converting daily notes into proper Zettelkasten ones.


👤 fxtentacle
None, so far. I had high hopes for Copilot and JetBrains Assistant, but both of them are way more verbose than my usual coding style. Maybe that's just me, but I have my set of libraries that I use in C++ or Go, and the result is that I rarely need to write much boilerplate. But I guess LLMs would work great for that, if only I could trust them as much as battle-tested libraries.

👤 ammar_x
I have Raycast extensions for GPT and Claude models. Whenever I have a question, the most powerful LLMs in the world are two keystrokes away.

This is easier than, for example, going to the browser, then to the ChatGPT tab, then creating a new chat.

I found myself using LLMs more and getting more out of them because of this frictionless interaction. They've become more like actual "helpful assistants."


👤 jftuga
I'm trying it out to give me the correct artist name and song name for any given YouTube title. The titles of the music I happen to like don't usually come in a nice, regular format. Llama 3 does an admirable job. My plan is to pair this with yt-dlp and an MP3 tagger.
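A hedged sketch of what that pairing might look like, assuming a local Llama 3 served through the ollama Python package and mutagen for tagging; the title, file name, and prompt are placeholders:

```
# Hypothetical sketch: ask Llama 3 to split a YouTube title into artist and
# song, then write ID3 tags onto an MP3 already downloaded with yt-dlp.
import json

import ollama
from mutagen.easyid3 import EasyID3
from mutagen.mp3 import MP3

title = "Some Artist - Some Song (Official Lyric Video) [HD]"  # example input

resp = ollama.chat(
    model="llama3",
    messages=[{
        "role": "user",
        "content": "Extract the artist and song name from this YouTube title. "
                   'Reply with JSON only, like {"artist": "...", "song": "..."}: '
                   + title,
    }],
)
info = json.loads(resp["message"]["content"])  # may need cleanup if the model adds prose

audio = MP3("downloaded.mp3", ID3=EasyID3)  # file produced earlier by yt-dlp
if audio.tags is None:
    audio.add_tags()
audio["artist"] = info["artist"]
audio["title"] = info["song"]
audio.save()
```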

👤 collinvandyck76
I wrote a terminal app using bubbletea that talks to OpenAI and saves conversations to a SQLite DB. I use it all the time to figure out what threads to pull on for a problem I'm unfamiliar with. It has proven to be one of the biggest returns on effort I've ever gotten.
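The commenter's app is in Go with bubbletea; purely as an illustration of the same data flow, here is a hedged Python sketch (model name, table schema, and prompt are made up):

```
# Hypothetical sketch: send a question to the OpenAI chat API and persist the
# exchange to a local SQLite database, one row per message.
import sqlite3

from openai import OpenAI

db = sqlite3.connect("conversations.db")
db.execute(
    "CREATE TABLE IF NOT EXISTS messages "
    "(conversation_id TEXT, role TEXT, content TEXT)"
)

client = OpenAI()
conversation_id = "debugging-session-1"
history = [{"role": "user",
            "content": "What threads should I pull on to debug this issue?"}]

reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
answer = reply.choices[0].message.content

for msg in history + [{"role": "assistant", "content": answer}]:
    db.execute(
        "INSERT INTO messages VALUES (?, ?, ?)",
        (conversation_id, msg["role"], msg["content"]),
    )
db.commit()
```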

👤 kilroy123
For me, it's when companies build a bot for their platform or app, trained on all their data: documentation, GitHub issues, Jira and Zendesk tickets, Slack messages, etc. It's a sort of customer-service bot that can help you code.

That's been the real magic that I've experienced.


👤 altilunium
My English skills are still at an NP-complete level (I find it hard to compose my own sentences, but it's easy for me to verify whether they are good enough or not). So I have been repeatedly begging the LLMs to fix my grammar while communicating online.

👤 hamilyon2
All sorts of low-brow copy-and-paste search-and-replace work.

Like: create a curl request from this tcpdump exchange. Or: take this slightly corrupted SQL query from the logs and print it properly.

Too amorphous and infrequent to properly automate, too labour-intensive to do by hand.


👤 enceladus06
ChatGPT is great for making my emails more “human” sounding. I’ve used it for coding help, electronics help, teaching me math.

👤 camjw
GitHub copilot and nothing else comes close tbh.

👤 kkfx
Well... out of curiosity I ask who I am, and get varying answers, from "I'm a scientist" (I am not) to "I'm a politician" (I'm not), and so on. So I conclude they might evolve into some interesting pattern finders in the future, but so far they are damn expensive toys.

A less useful, but still occasionally handy, use is producing small snippets of code in a language I do not know. I can sometimes correct them into something useful, so that might be a mildly interesting use for a very limited, specific task.

In broader ML terms:

- OCR might become much better, which, while it's nonsense that it's still needed in 2024, remains a thing because many still live like it's 1954;

- automatic alerts on video surveillance and the like might be nice, though not super-trustworthy;

- better image manipulation tools (not only for producing deepfake porn) might become a thing; limited, and not always working, but still very nice.


👤 masteruvpuppetz
David Bombal interviewed a cloaked man who's using LLMs to get superpowers https://www.youtube.com/watch?v=vF-MQmVxnCs

👤 freitzkriesler2
Making a wordy email more concise; otherwise, they're mostly toys.