So much of the world of software development is building variations of custom CRUD applications that take user input, store it and then present it back to the user in various ways allowing them to read, update or delete it.
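To make the pattern concrete, here's a minimal sketch of the kind of CRUD core being described, using an in-memory Python store (class and method names are illustrative, not from any particular framework):

```python
# Minimal in-memory CRUD store: the core of what many business apps do,
# before workflow, notifications, integrations etc. are layered on top.
import itertools

class CrudStore:
    def __init__(self):
        self._rows = {}                 # id -> record
        self._ids = itertools.count(1)  # auto-incrementing ids

    def create(self, data):
        row_id = next(self._ids)
        self._rows[row_id] = dict(data)
        return row_id

    def read(self, row_id):
        return self._rows.get(row_id)

    def update(self, row_id, data):
        if row_id in self._rows:
            self._rows[row_id].update(data)

    def delete(self, row_id):
        self._rows.pop(row_id, None)

store = CrudStore()
rid = store.create({"name": "Alice"})
store.update(rid, {"name": "Bob"})
print(store.read(rid))  # → {'name': 'Bob'}
store.delete(rid)
print(store.read(rid))  # → None
```

Real apps swap the dict for a database and put an HTTP layer on top, but the four operations are the same.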
On top of this is often a layer of other features such as workflow management, notifications etc.
How do you believe this type of software development will be impacted by the advancements in AI in general and LLMs in particular?
Cheers!
I doubt it will be business/stakeholder people interacting with the AI. It could potentially be business analysts but I doubt they'll want to. It would be an addition to their current job.
That leaves the software engineers. Maybe a lot of software engineers will turn into _solution_ engineers or _product_ engineers. Their job will be to create the solution/product even if they're not writing code.
What you've described is a bare-bones CRUD app. The little I know about AI/LLMs is that you feed them text so they can learn. If the input isn't good, the output isn't going to be good either, no matter what the model does with that input.
We (as an industry) can't get feature requirements or business logic documented well enough to be interpreted consistently by humans, who understand those problem domains in high fidelity, let alone by some computer reading that text. If translating requirements to code isn't great, feeding that code to an LLM to produce new code isn't going to cut it either.
Our industry jokes that all we do is CRUD apps, but once an app is mature and beyond simple models, has integrations with a dozen APIs, has customers integrating via APIs, does reporting, needs guarantee-ish transactions, and, most importantly, uses derived sets of data for billing/invoicing, it is much, much more than a "CRUD app".
As the requirements become more complex, like having an email sent off after a Create or Update, or some SQL stored procedure needing to be triggered, I can't really say, because I haven't seen enough of the dataset used for training to know. But these models could potentially learn enough to replace a large group of programmers.
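That "email after a Create or Update" requirement is a good example of how a simple CRUD operation picks up side effects. A toy sketch of the shape (the function and the fake mail sender here are illustrative only, not any real API):

```python
# One "simple" extra requirement turns plain CRUD into workflow:
# after a Create, a notification must also go out.
sent_emails = []

def send_email(to, subject):
    # Stand-in for a real mail service.
    sent_emails.append((to, subject))

orders = {}

def create_order(order_id, order):
    orders[order_id] = order                # the CRUD part
    send_email(order["customer_email"],     # the workflow part bolted on
               f"Order {order_id} received")

create_order(1, {"customer_email": "a@example.com", "total": 9.99})
print(sent_emails)  # → [('a@example.com', 'Order 1 received')]
```

Now the model has to get ordering, failure handling, and retries right too, not just the insert, which is where "generate me a CRUD app" stops being a one-liner.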
What reduces my confidence in that thought is that things like ChatGPT-3 can't even get history right from a wiki page, and can't reliably code in a specific language.
It's generating pseudocode in some instances, so I think they have a way to go still.
Oh, and the other very popular request that we get is "Hey, we have an app built with
So basically a parking app that saves you a spot, aka low-hanging fruit; sure, the AI stuff will eat that market now, but if you want something like Carrot Weather, ToDoist, etc., you're going to need someone to build it.
Maybe the next generation of text AI models will do it :p
I honestly have no idea what next-generation LLM performance will look like, but it could _possibly_ be used to generate entire simple CRUD apps.
In the long term this is a bit like asking how ICBMs will affect fortress wall construction.
A husband is told by his wife before he goes out: "Buy a watermelon. If you see eggs, buy a dozen." So he came home with a dozen watermelons.