That comes down to why I hire developers in the first place: To share my responsibilities with people I can trust.
I don't hire them to write code or to close tickets. I consider the act of programming an exercise that helps them understand the problems we solve and the logic of our solutions. I'm always excited when I have a well-specified ticket I can hand to a new hire so they can learn the ropes. So the kind of thing I can imagine Devin pulling off at some point would actually be detrimental to the kinds of teams I build.
I don't think my reasons for hiring developers are typical though, so I guess tools like that may well have a big impact on the industry. Nobody can predict that, though.
Uncertainty sucks, but it's how things are. I find the best way to deal with uncertainty is to become better at adapting to unforeseen circumstances. Programmers have quite a bit of experience with that, for what it's worth.
If by these days you mean the two days after it came out several weeks ago, and by everyone you mean a handful of people on social media, sure.
I've used Cody and Copilot, and they just get in the way: I know exactly what I need to write, and neither really helped me.
Future hypothetical AI coding assistants that don't exist yet? While I won't say it's philosophically impossible that they'll move beyond extreme autocomplete with security holes, I'll say it's not up to me to disprove someone else's hypothetical. Show me the thing.
Essentially I now just architect and review. Cursor has good context, so if that gets extended to the way Devin operates I think this could go pretty far.
In the short term (5-10 years, I can't see them autonomously producing products), it will take an experienced programmer to interpret and use the output effectively.
An implication of this is that, in the short term, developers become even more valuable. You still need them, and these tools will make the developer significantly more productive.
I was reading Melanie Mitchell's book 'Artificial Intelligence: A Guide for Thinking Humans' recently (which I'd recommend). She has a chapter on computer vision. As an example, she shows a photograph of a guy in military clothing, wearing a backpack, in what looks like an airport, embracing a dog. She makes an insightful point: our interpretation of this photograph relies a lot on living-in-the-world experience (a soldier returning from service, being met by his family dog). And the only way for AI to come close to our interpretation of this is maybe to have it live in the world, which is obviously not such an easy thing to achieve. Maybe there's an analogy there with software development: to develop software for people, a lot of real-world interaction and understanding is required.
In terms of autonomously producing products, I see these tools as they are now as a bit like software wizards, or a website that WordPress will create for you. You get a 'product' up and running very quickly, and it initially looks fantastic. But when you want to refine its details, that's where you get into trouble. AI has an advantage over old-fashioned wizards in that you can interact with it after the initial run and refine the result that way. But I'm not sure it's so easy to get the fine-grained control you have with code. This is where I see the challenge: developing tools to talk to it and refine the product sufficiently.
It is a tool for building software. You still need to know software development to use the tool.
You might not need to actually write code in the future - just like very few write Assembly today.
But you still need to know and understand system requirements, systems architectures, integrations, distribution, deployment, maintenance, etc.
Software Engineering is more than just coding.
I think we're in different spaces, because I barely hear anything about it. That said, I think the "LLMs replacing jobs" train was blown out of proportion. I heard so much about them replacing developers, but I've seen them time and time again output code with subtle bugs (which I'd argue are worse than obvious bugs) and fail to operate with more than just a little bit of context.
I think we're in a Pareto-distribution situation right now. Getting an LLM to write code at all was the quick majority of the work. Getting it to do everything a moderately skilled dev can do will take decades.
Multiple times over the last 15-ish months I've been reviewing code and spotted a subtle bug in an htaccess file, or a bash script that doesn't make any sense. The follow-up PR comment is then "oh, I got it from ChatGPT". I think these tools become assistants to a human developer who can guide them. That use case is already available and seems to be pretty decent for a lot of folks. Full replacement is so far away that I don't have a single thought about it.
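To give a concrete (purely hypothetical) flavour of what I mean - not the actual htaccess or bash files from those PRs, just a Python stand-in - here's the kind of bug that reads cleanly in review but quietly misbehaves:

```python
# Hypothetical illustration of a subtle, plausible-looking bug of the sort
# that tends to slip past a quick review when code is pasted in unexamined.

def dedupe_events(events, seen=set()):
    """Return events whose 'id' hasn't been seen before.

    Subtle bug: the mutable default 'seen' is shared across *every* call,
    so a later, unrelated batch silently drops events whose ids happened
    to appear in an earlier batch.
    """
    out = []
    for event in events:
        if event["id"] not in seen:
            seen.add(event["id"])
            out.append(event)
    return out


if __name__ == "__main__":
    batch_1 = [{"id": 1}, {"id": 2}]
    batch_2 = [{"id": 2}, {"id": 3}]
    print(dedupe_events(batch_1))  # [{'id': 1}, {'id': 2}]
    print(dedupe_events(batch_2))  # [{'id': 3}] -- id 2 vanished from an unrelated batch
```

It passes a quick read and a happy-path test, which is exactly why this kind of bug is worse than an obvious one.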
I contend that, given that the universe of computing (i.e. the capabilities of the processor) is finite, all software is engaged in essentially the same activities (write to memory here, to a file over there, etc.), such that the difference between any two programs could be reduced to a matter of interpretation. If so, any automated agent capable of assigning some meaning to these activities should be able to produce a sound program, in whatever computer language, even one proprietary to the agent.
If we are talking about a black box that automates the work for a specific use case, then it might be possible up to a certain limit.
That's because when we train neural networks, we are just searching for the best numbers. The so-called "emergent abilities" claim, that increasing model size makes the model smart, may be true, but what's the probability of getting most of the parameters to their correct values? There are billions of them.
(Total) replacement? I highly doubt that.
Although AI is advancing rapidly, a person who adapts in such situations by broadening their learning scope can navigate these hype cycles smoothly.
The current LLMs are not perfect, but I recommend that anyone try them out - they really speed up development, and they're great for creating boilerplate or for working in a language you have little knowledge of.
Architecting, writing requirements, debugging nasty issues and optimizing tricky problems will remain valuable.
Like all the other AI codebots, it's a tool that can potentially optimize a developer's workflow, in the same way that a nail gun optimizes a carpenter's workflow. But sometimes the carpenter might just use a hammer.
> This riff derives from a recent "AI Programmer" story that's making people in my corner of the nerdiverse sit up and talk, at a time when hot new AI happenings have become mundane.
> ...
> It is yet another prompt for me to take the lowkey counterfactual bet against the AI wave, in favour of good old flesh and blood humans, and our chaotic, messy systems.
So - do FSD coding assistants pose a threat to developers? Sure, just like any tool built on GPT-3+ class engines does; we are in revolutionary times.
But the revolution here is the engines now available thanks to OpenAI and successors, not the wrappers like Devin and others.
If you're not concerned already, if you're not pondering the future already, you've missed the point by seeing it only in the likes of Devin.
Personally, I think there is both risk and opportunity - it really depends on what sort of mindset you have as to whether you need to feel threatened.
The real question is, what will be the rate of progress from this point forward?
That's Level 5 autonomy, and it may well be the future, but as it stands today, I believe Level 3-4 is a more productive target for a coding agent. Before building full autonomy, we first need the infrastructure to guide the models precisely and to iterate efficiently on our interactions with them.
Once that foundation is in place, it will then be possible to build more and more robust layers of autonomy on top. But crucially, the developer will always be able to drop a few layers down and take over the controls.