In my view, the dismissive naysayers about AI are making weak arguments and focusing solely on GPT, often nitpicking about understanding or accuracy. They seem enticed by the intelligent-sounding contrarian argument, and they miss the bigger picture: oh whoops, GPT sometimes returns logically incoherent statements or gets facts wrong, so it's not like it's capable of generating very complex content and reshaping it to conform to very fine-grained prompts, right? Not as if it has access to more words and ideas than any human. Accurate and coherent or not, it can generate things at a mind-blowing level of complexity, and it's going to get better at coherence, accuracy, and precision. In my mind, its versatility would be hampered if it had very strict accuracy controls.
While I’m no expert, my understanding is that it’s a bit misleading to say it is just “predicting the next word” and “understands nothing”. This might be technically true, but that prediction has become enormously sophisticated with transformers, embeddings, and attention heads. Also, I wouldn’t discount the potential of a really, really good prediction. And how much of this “understanding” is tied up in a human notion of an attentional self behind the calculations, a watcher?
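For anyone curious what “predicting the next word” mechanically involves, here’s a minimal toy sketch of a single attention step in NumPy. Everything in it is made up for illustration: the five-word vocabulary, the random weights, the dimensions. A real model stacks many such layers with learned parameters, but the shape of the computation is roughly this.

```python
import numpy as np

# Toy "predict the next word": one attention layer over a tiny,
# hypothetical vocabulary. All weights are random placeholders.
rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "on", "mat"]   # made-up 5-word vocabulary
d = 8                                        # embedding dimension (made up)

embed = rng.normal(size=(len(vocab), d))     # token embeddings
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
Wout = rng.normal(size=(d, len(vocab)))      # projects back to vocab logits

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def next_token_distribution(token_ids):
    x = embed[token_ids]                     # (seq_len, d)
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    # Scaled dot-product attention: each position mixes information
    # from earlier positions, weighted by query/key similarity.
    scores = q @ k.T / np.sqrt(d)
    mask = np.triu(np.full(scores.shape, -np.inf), k=1)  # causal mask
    attn = softmax(scores + mask)
    h = attn @ v
    logits = h[-1] @ Wout                    # last position predicts next token
    return softmax(logits)

probs = next_token_distribution([0, 1, 2])   # "the cat sat" -> P(next word)
for word, p in zip(vocab, probs):
    print(f"{word}: {p:.3f}")
```

With random weights the output distribution is of course meaningless; the point is that “just predicting the next word” is a probability over everything the model could say next, conditioned on everything that came before.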
I watch it correct previous mistakes without my even pointing to the specific mistake, simply saying “that didn’t work” or giving it a broad compiler error. It can translate between languages and style writing in response to paragraph-long specifications. Yeah, it’s sometimes inaccurate and takes many shots to find what you want, but what are the naysayers expecting? Will they keep nitpicking until we have some AGI that runs on a GPU the size of Montana, reads your thoughts to get the prompt, and talks to god to check accuracy?

The more I use ChatGPT, the more I think we will very soon be confronted with the serious question of whether humans are really all that much more than predictive generative models with some fancy quirks on top, like motivational systems, perception, and a feeling of selfhood. What’s stopping those from being modeled one day?
With AI art, it will be more like a director-actor relationship. AI may generate all the assets and the models, but someone has to make sure they fit into a bigger scene. It could produce a technically perfect scene, but an artist still has to decide whether to make the tone more somber, which (AI-generated) music fits the scene, and so on. We might also see the average artist producing animations and full scenes instead of just one image.
Programming will be more or less the same. AI could make current coding obsolete the same way Java made hand-written machine code obsolete. But we still have more jobs and longer hours, so what gives?
I think it'll just be Gustafson's Law in that we use the increased capacity to solve bigger problems until we hit a wall.
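For reference, Gustafson’s Law says that with N times the compute and a serial fraction s, the scaled speedup is roughly S(N) = s + (1 − s)·N: the workload expands to fill the capacity rather than the same work finishing faster. A quick back-of-the-envelope sketch, with a made-up serial fraction purely for illustration:

```python
# Gustafson's Law: scaled speedup S(N) = s + (1 - s) * N,
# where s is the serial fraction of the work and N the compute multiple.
# The serial_fraction value below is made up for illustration.

def scaled_speedup(n, serial_fraction):
    return serial_fraction + (1 - serial_fraction) * n

for n in (1, 10, 100, 1000):
    print(n, round(scaled_speedup(n, serial_fraction=0.05), 1))
```

The speedup keeps scaling almost linearly with capacity, which is exactly why more capable tools tend to mean bigger projects rather than fewer workers.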
I strongly suspect prompt engineering will be a job for quite some time.
---
As for the existential question, we're all scared apes trying to get by in a world in which you can ride a chair in the sky at almost the speed of sound, yet complain that the wifi is too slow.
There's no reason to suspect an uppity network of fast matrix multiplies that gets treated to the same thing would do any better.