- We are not fixing bugs faster
- We are not developing features faster
- We haven't seen an explosion of new projects
- We haven't seen an explosion of vulnerabilities being discovered
Maybe I am missing something, but to me everything looks the same (except for an increasing number of useless customer-service chatbots and garbage LLM-generated books on Amazon).
Edit: Unfortunately this submission was demoted for some reason, but thanks for all the comments.
LLMs will certainly lower the entry barriers for new programmers, and might also create a new solopreneur economy. Non-technical people with ideas can now start prototyping and raise money, though they will soon need engineers to grow the product.
For him, the norm is still to redline a document on paper, have his secretary enter those changes into the original digital document, and send that over to the opposing team for the same treatment.
I don't have strong opinions about LLMs' coding ability (though compared to the other comments so far I am more on the "LLMs are pretty good at creating software from natural language descriptions" side) but even assuming that LLMs can give programmers a 50x productivity increase, I'd assume it would take 10-50 years for industry and processes to evolve to take advantage of that increase.
The jury's still out. It will take time until we have enough post-mortems to tell whether it's doing the job and how it's affecting things.
I do agree that if it were so good, we'd see practical applications in more meaningful ways than just anecdotal tricks or lots of low-quality content.
Only about 20% of the repositories GitHub hosts are public. And perhaps open source developers are less likely to pay for GitHub Copilot out of their own pocket?
Why do you expect "an explosion of new projects" from perhaps a 20% productivity increase? And what percentage of open source developers are even using LLMs when working on open source? If it's merely 20% of them, we'd see about a 4% increase in total output, something that's hardly noticeable.
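Back-of-the-envelope, using the comment's own numbers: if 20% of developers each become 20% more productive, aggregate output rises to 0.8 × 1.0 + 0.2 × 1.2 = 1.04, i.e. roughly 4%, well within the normal noise of commit and release volume.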
It unlocks a small amount of extra productivity, not that much, yet still enough to be worth it.
My position is that they are useful but not massively useful, yet.
I got 4o to give me a 33-line, relatively simple and understandable bidirectional BFS Kotlin function for this LeetCode problem, which Perplexity (non-Pro) and GPT-4 could also solve, though not as well as 4o: https://leetcode.com/problems/word-ladder
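For reference, the standard bidirectional BFS for that problem looks roughly like the following. This is my own sketch of the well-known approach, not the model's verbatim output:

```kotlin
// Sketch of a bidirectional BFS for LeetCode "Word Ladder".
// Returns the number of words in the shortest chain (including both ends), or 0 if none exists.
fun ladderLength(beginWord: String, endWord: String, wordList: List<String>): Int {
    val unvisited = wordList.toHashSet()
    if (endWord !in unvisited) return 0

    var front = hashSetOf(beginWord)
    var back = hashSetOf(endWord)
    var steps = 1

    while (front.isNotEmpty() && back.isNotEmpty()) {
        // Always expand the smaller frontier; this is the whole trick.
        if (front.size > back.size) { val t = front; front = back; back = t }

        val next = hashSetOf<String>()
        for (word in front) {
            val chars = word.toCharArray()
            for (i in chars.indices) {
                val saved = chars[i]
                for (c in 'a'..'z') {
                    if (c == saved) continue
                    chars[i] = c
                    val candidate = String(chars)
                    if (candidate in back) return steps + 1  // frontiers met
                    if (unvisited.remove(candidate)) next.add(candidate)  // mark visited
                }
                chars[i] = saved
            }
        }
        front = next
        steps++
    }
    return 0  // no transformation sequence exists
}
```

Expanding whichever frontier is currently smaller keeps both search trees shallow, which is what makes such a short function possible.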
Of course, even though these are LeetCode-hard problems, they are well-defined and relatively self-contained. I work at a Fortune 100 company, and 99% of the time I can pound out the CRUD I do in my sleep. The difficulties I encounter are distractions: the CI server having some problem; the ticket/story I am working on not being fully specified while the PM is MIA that day; all teams working on the feature at the same time, so I need to find out which feature flags to set and which test headers have been agreed on; or the PM asking me to work on something where some of what he says doesn't make sense in context, so I have to ask for clarification. Then there's the meta-game of knowing what to prioritize, one important component being what will make my manager happy so I get a good yearly review. What I need to prioritize may differ from what my PM says to prioritize, or, more subtly, from what my manager says to prioritize but doesn't really mean.
I believe AI will be useful in game dev: AI voice acting, AI face generation (so all the NPCs can be unique), possibly AI layout generation.
I don't think using AI to generate the script is a great use case. It can be used to generate ideas, but we still need human creativity to make great games.
Try a couple of percent. More if you type slowly (magic autocomplete). More if you're doing something where you need to search Q&A fora a lot.
I don't personally know anyone trying to use fancier tools like agents or IDE-integrated helpers. They're not perfect by any means, and you actually need to learn how to use them well, but the difference is massive. I've definitely saved some hours when developing smaller-scope tools. It's not a time save that would drastically change my total productivity, but... it exists, and it's going to increase in the future. And it requires an upfront investment in tooling and learning that few people seem to be interested in.
But even given current issues, how can you tell there hasn't been an improvement? How would you be able to tell across all the open source in the world?
That said, LLMs in the code editor come with a kind of "hyperactivity" which I find really unpleasant. They're too "in your face", they make the code move around a lot, and they sometimes make it a bit harder to focus than having no LLM at all. They can also be extremely frustrating and cause a net productivity loss, for example when they generate code that's slightly wrong and you have to spend time fixing it, which is harder than just writing the code yourself.
The business doesn't have clarity on what it is trying to achieve. Or it doesn't have clarity on what's important and constantly changes priorities (and either of these can make the most talented engineer spin their wheels).
LLMs can help gain clarity, the same way a coach, consultant, or therapist can help you work through a scenario. But it’s only as effective as the work you’re willing to put into that endeavor.
So it comes down to:
* Nothing has changed regarding the nature of the human work ethic
* Most people don't want to be programmers. The idea that "everyone's a programmer now" is no different from saying "everyone's a carpenter now" because power tools exist. Most people don't want to do that kind of work and are happy to pay someone else to do it.
If a business sees a 15% productivity boost coming, especially with no easy plan in place to utilize it fully for equivalent profit, someone near the top is already thinking that quick cuts could mean an immediate 15% increase in reported profits next quarter (in a 1:1 scenario).
I'm being a bit simplistic, but I think the general idea of business maximizing profits over output stands (or easy short-term thinking over more difficult long-term planning).
- It’s still very early. LLMs have only been publicly available for 2 years, copilots a little less than that.
- It's mostly anchored on cold starts, i.e. creating something from scratch. Leveraging LLMs in existing, mature codebases is definitely going to pick up.
- The majority of devs aren’t really using these tools or using them to their full ability. It takes a lot of fiddling to understand the limits and strengths, but when you do, you basically stop writing code and write more prose.
I will be surprised if, in ten years, even a quarter of your keyboard inputs go directly into code rather than into directing your friendly coding robot.
But I still love programming and will mostly keep doing it when it's for fun, which is most of my OSS. To me it's like asking "why do woodworking when you can outsource it to some Chinese shop?"; that defeats the point.
Patience.
Here are some significant productivity gains I get daily from Mistral/Phind/ChatGPT/our office-internal LLM:
- throw a messy shell script at it and ask for a refactor (works 80% of the time)
- paste a sample XML/JSON/YAML document and ask it to generate the matching class/struct (code generation; see the sketch after this list)
- ask questions and get an immediate response with examples better suited to my need (previously this took going into SO/Reddit/SE etc. and scrolling through several posts and docs, or even wasting time reading blogspam)
- ask questions about a specific topic and get an immediate response with citations (this is an in-house-trained model) instead of fighting with broken search or an ocean of messy documents in Confluence/Notion/GitLab Pages and whatnot
- rubber-duck when brainstorming a problem (it can sometimes lead to interesting outcomes)
- have it prepare a bash script to do something, then simply modify/correct/refine it to fit my needs
- ask questions about trivial stuff
- generate boilerplate
- generate a throwaway project to try something fast
- convert code from one language to another (I need to work with different teams using different languages such as TS/Java/C++/Scala/Python/Shell/Rust/Erlang etc.)
- write a polite email (or a response to one) which I can copy-paste and send when I am too occupied with something else
- get documentation for a specific feature of something that would otherwise take a lot of digging in the original docs
- generate a pure, self-contained HTML/CSS prototype to send to our UI/UX team to give them an idea of a particular concept
- summarize a large block of text into bullet form (useful for presentations)
- get summaries of popular books (because ChatGPT has indeed trained on a lot of them somehow!)
- translate text to another language (works well when it does, but still needs some corrections)
Most of these activities save me a lot of time that would previously have required big investments.
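To make the second item in the list concrete: paste a JSON sample and ask for the matching Kotlin data class, and you typically get something along these lines. The payload and field names here are invented for illustration, and it assumes the kotlinx.serialization compiler plugin is in the build:

```kotlin
// Hypothetical JSON sample pasted into the prompt (invented for illustration):
// {"id": 42, "name": "Ada", "tags": ["admin"], "lastLogin": "2024-09-01T12:00:00Z"}

import kotlinx.serialization.Serializable
import kotlinx.serialization.json.Json

// Typical shape of what the model hands back (kotlinx.serialization flavor):
@Serializable
data class User(
    val id: Long,
    val name: String,
    val tags: List<String> = emptyList(),
    val lastLogin: String? = null  // models tend to guess String; swap in a timestamp type if needed
)

fun main() {
    val sample = """{"id": 42, "name": "Ada", "tags": ["admin"], "lastLogin": "2024-09-01T12:00:00Z"}"""
    println(Json.decodeFromString(User.serializer(), sample))
}
```

The win isn't the class itself; it's not having to hand-type fields from the sample, though the guessed types (String vs. a timestamp, nullability) still need a review pass.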
Source:
https://www.theregister.com/2024/09/09/gartner_synmposium_ai...
* we are not developing features faster, but we have time to ask questions we had no time for before. More ambitious architectures and designs.
* we are fixing bugs faster, and we are producing fewer bugs (because of better designs).
* not everybody is happy about fewer bugs.
* we discover more vulnerabilities. Again, not everybody is happy about that; they just want new features, not new knowledge of vulnerabilities and technical debt.
1. is the web becoming more [accessible](https://abilitynet.org.uk/news-blogs/inaccessible-websites-k...) (see also [this WCAG wishlist](http://useragentman.com/wcag-wishlist/))?
2. are the web pages getting [faster](https://www.nngroup.com/articles/the-need-for-speed/) and lighter?
3. is it righting wrongs about existing non-performant [code](https://www.webperf.tips/tip/cached-js-misconceptions/)?
4. is it encouraging [smaller](https://dyf-tfh.github.io/)?
5. is it promoting historical [insights](https://qntm.org/clean)?
6. is it popping [bubbles](https://www.youtube.com/watch?v=Y7YAXUWG820)?
7. is it encouraging the correct interpretations of actual [innovators](https://mamund.site44.com/articles/objects-v-messages/index....)?
8. is it minimizing or eliminating [traps](https://www.gnu.org/philosophy/javascript-trap.html)? (also see the W3C's Web Sustainability Guidelines on JavaScript fallbacks)
9. is it avoiding the "[wars](https://tanzu.vmware.com/content/blog/framework-wars-now-all...)"?
10. is it shedding the [object-form](https://dreamsongs.com/ObjectsHaveFailedNarrative.html)?