I like LLMs, or in other words, I like that we are getting better at something.
However, I just want to ask: what was the initial problem LLMs were trying to solve, and what problems have they solved so far?
Do you have any examples from your life or work where you can clearly say “we were not able to do this before LLMs, but now we can,” or “we were able to do it, but not well enough, and it was causing us issues; now it is a lot better”?
If the answer is yes, my second question would be: does the total cost of those problems at least equal or exceed the amount of investment in these models?
Thanks in advance
It’s almost like reliving the late 1990s with far more ads, more vanilla websites, and worse search engine quality.
I finally do now.
I have done so much in the last 3 months.
1. Cleaned up my personal website and blogs
2. Built a couple of learning tools for myself - https://rfc.stonecharioteer.com and https://github.com/stonecharioteer/goforgo
3. Set up OpenWRT and AdGuard + Unbound at home, with non-trivial failover across multiple WANs.
It's helping heal my burnout, something that crippled me for years and kept me from my side projects. It showed in my career too; I've stagnated since 2021. I'm trying to improve now, and I'm relying on Claude Code and ChatGPT (albeit on legacy models) to do so.
This motivated us to get him a real therapist and have a long conversation about the dangers of humanizing AI.
2. Giving some structure to my open-source project ideas. I had a good time getting over my analysis paralysis while writing them down.
Because it's the wrong question!
It's not that LLMs solve entire classes of life/work problems. Instead, they take some life/work task (coding, idea generation, learning about new topics, personal reflection) and make it x% easier, y% faster, z% better.
Claude and Gemini have been very useful in helping me come up to speed on a code base written in Go (a language I have used before, but not for many years). Figuring out where the business logic lies, how the dependency injection is done, how the tests are written, what overall design pattern is being used, etc.
Of course, I could have done all this without LLMs, but it would have taken several weeks or months longer. Letting the LLM handle the boilerplate and framework jargon lets me focus on the business logic and the design patterns, and helps me contribute much faster. But LLMs do often make mistakes, so it's not like I blindly trust the output. They don't replace your colleagues as the ultimate source of truth. But it has sped up the learning process, no doubt.
Also, when writing code I provide the style guide to the LLM as context and have it review the code.
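For what it's worth, that step can be a small repeatable script rather than a manual copy-paste. Here is a minimal sketch, assuming the Anthropic Python SDK; the file names, model string, and prompt wording are illustrative assumptions, not details from the comment above:

    # Minimal sketch of a "review against the style guide" step.
    # Assumes the Anthropic Python SDK (pip install anthropic) and an
    # ANTHROPIC_API_KEY in the environment; file names are hypothetical.
    import anthropic

    client = anthropic.Anthropic()

    with open("STYLE_GUIDE.md") as f:
        style_guide = f.read()
    with open("handler.go") as f:
        code_under_review = f.read()

    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # illustrative model choice
        max_tokens=1024,
        # The style guide goes in as system context so every review applies it.
        system="You are a code reviewer. Flag only violations of this style guide:\n\n"
               + style_guide,
        messages=[
            {"role": "user",
             "content": "Review this code against the style guide:\n\n" + code_under_review},
        ],
    )

    print(response.content[0].text)

The same thing can be done interactively by pasting the guide into a chat; the script form just makes the review step repeatable across files.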
Now, before you jump on me and say that AI is often wrong: this is true. But at the same time, I can no longer be 100% sure that whatever SEO-optimized website I land on provides accurate information. If I need solid facts, I usually double-check the AI against various other sources. For queries like "best keyboard for software engineers", I'd rather get a table of pros and cons from the AI than land on whatever affiliate-driven website is being promoted on Google. The LLM gives me a good starting point to either dig deeper into particular products or query further for more suggestions.
Same for coding. I used to Google "how to split a string in ruby" and land on a flame war or a 19-year-old Stack Overflow question. Now I can get an up-to-date answer from whichever LLM I prefer, with a reference to the official documentation. It works for simple queries as well as code snippets.
Lastly, I use LLMs to plan trips or come up with gift ideas. I just throw in my preferences and let the LLM build a rough plan, which I can iterate on further or use as a starting point for my own research.
A common argument I hear about AI is "I could just write it faster myself." Well, I know CompSci and general information about a lot of software topics, but it would take hours of getting up to speed in areas where I'm not an expert to be productive. I can just delegate that to AI and get mostly correct output; that's okay with me, and faster than what I could do on my own.
I think the cost is going to catch up with the AI companies running the models (not the companies building products that call AI APIs), and that is when the bubble will burst. They will need to keep raising prices, and at some threshold fewer and fewer developers in an organization will have licenses because it becomes unaffordable.