Several of the people involved in developing transformers at Google Brain, along with others from OpenAI and DeepMind, have now migrated to a new company called Adept (https://www.adept.ai/), which is aiming to push as close as possible to AGI for essentially any computer use (RPA taken to its natural conclusion).
The robotics space is now saturated with competitors racing against one another. Some of them, such as Miso Robotics, have already reached meaningful adoption.
I find it odd that people are so worried about nuclear devastation, climate change, etc., when what we really should be concerned about is impending societal collapse. It feels like the upper bound on time to functional obsolescence of the vast majority of humans is 10-15 years.
Does anyone else feel scared about the future? Does anyone even care? I can barely wrap my head around the moral implications here in deciding whether or not to have children, let alone all the things about our society that are liable to descend even further into chaos.
But I was making the mistake of thinking getting to AGI was just a question of more compute.
I thought as computers get faster, we will eventually ramp up to AGI.
Like going from the brain of an ant, to a mouse, to a cat, to a chimpanzee, to a human.
But this is false, because we don’t have the slightest idea how creativity works, and creativity (conjecturing new knowledge) is the thing that sets people apart from all other things with brains.
People are not just an incrementally better version of a chimpanzee. They are qualitatively different due to their ability to create new knowledge.
Right now AI is kind of like China. They are really really good at copying things and improving things that already exist. In the case of China it’s manufacturing or tech. In the case of AI it’s training on massive sets of existing data.
But they have no ability to come up with new knowledge on their own. We don’t even understand how this works in our own brains. How do you come up with new ideas? I bet if you observe that process going on in yourself you will see that you have no clue.
We will figure this out at some point. Maybe tomorrow! But it might take hundreds or even thousands of years too. We can’t predict when or what type of new knowledge will be created.
Until then, AGI is not something we need to “worry” about at all.
I hope that helps.
GPT-3 is very far from the science-fictional sentient AIs. It is merely a tool AI, and using large language models such as GPT-3 will likely emerge as a new kind of programming. It's a skill in itself, as GPT-3 can easily go off the rails if not kept in check by a human mind providing examples and instructions on what outputs it should generate. There's already a term for that: prompt engineering.
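To make that concrete, here's a minimal few-shot prompt sketch. It assumes the classic pre-1.0 OpenAI Python client (openai.Completion.create) and a GPT-3 completion model; the labeling task and its examples are invented for illustration:

    import openai  # classic pre-1.0 client: pip install openai

    openai.api_key = "sk-..."  # your API key

    # The "engineering" is in the prompt: an instruction plus two worked
    # examples keep the model on the rails for this made-up labeling task.
    prompt = """Convert each sentence to a terse status label.

    Sentence: The deploy failed because the database migration timed out.
    Label: DEPLOY_FAILED

    Sentence: All tests passed and the release was tagged.
    Label: RELEASE_OK

    Sentence: The build is still running after forty minutes.
    Label:"""

    response = openai.Completion.create(
        model="text-davinci-003",  # any GPT-3 completion model
        prompt=prompt,
        max_tokens=5,
        temperature=0,   # near-deterministic output for a labeling task
        stop="\n",       # stop after the single label
    )
    print(response.choices[0].text.strip())  # e.g. BUILD_RUNNING

Almost all the leverage is in the prompt string itself; change the examples and you change the "program".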
Society has survived massive changes - the development of agriculture, industrialisation, the emergence of the information economy - and was better off for them. If artificial general intelligence eventually gets developed, the world will adapt and benefit. Yes, things will change, and some people will become unemployable, just as the hunter-gatherers and farmers of old times did. But consider the big picture: human society has adapted to some massive paradigm changes in its history and is likely to do so in the future.
The more I work on AI/ML, the more I think the fears around AGI have been misplaced. General intelligence almost by definition involves general motivations (rewards). Very general rewards inevitably make the AI/ML model hard to control, unpredictable, and difficult to communicate with (just like a human). If you find managing humans to be like herding sheep, try herding actual sheep, then try herding a non-biological general intelligence that perceives the world in a totally different way.
Therefore, what humans want AGI to be is largely a contradiction. We want something so clever and general that it has the complexity and nuance of a human, but so reliable and compliant as to be like a machine in a factory, i.e. very much not human. These two things just don't go together.
Take the classic AGI that goes all out making paper clips and destroys everything. Are we seriously suggesting we have made something so nuanced and clever as to understand building regulations, supply chains, the art of the deal, the entire manufacturing process, human incentives, business, finance, HR, taxes, and everything else that goes with exclusively running a paper clip manufacturing operation? Yet it is also so utterly single-minded and blinkered that it can't conceive of anything beyond more paper clips, even the obvious inevitable consequences of such a pursuit. In reality, it really seems wants and skills are two sides of the same coin: you just don't learn about things you don't want, and so the only way to learn about lots of things is to want lots of things, not just paper clips.
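As a toy sketch of that point (made-up actions and numbers, not a model of any real system): a greedy optimizer with a paperclips-only reward picks the action that destroys everything else it "knows" about, while a reward that also weighs other values does not.

    # Hypothetical actions: (paperclips produced, harm to everything else).
    ACTIONS = {
        "run_factory_normally":      (100, 0),
        "strip_mine_the_city":       (10_000, 9_000),
        "convert_biosphere_to_wire": (1_000_000, 1_000_000),
    }

    def narrow_reward(clips, harm):
        return clips               # only paperclips count

    def general_reward(clips, harm):
        return clips - 10 * harm   # other values weigh in, as a human's would

    for reward in (narrow_reward, general_reward):
        best = max(ACTIONS, key=lambda a: reward(*ACTIONS[a]))
        print(reward.__name__, "->", best)
    # narrow_reward  -> convert_biosphere_to_wire
    # general_reward -> run_factory_normally

The contradiction above is the gap between those two reward functions: we want the skills that come with the general one and the obedience that comes with the narrow one.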
So I think maybe we could make AGI relatively soon, and even now could have a good stab at lower-level intelligence. The reality I see, though, is that we just don't actually want to, because it wouldn't actually be very useful. What we want is what we are doing: building extremely capable but ultimately intellectually dumb factory machines / cars that do as we wish on repeat.
I think AGI is inevitable, though not in your time horizon at all. But I think it would be the best thing to ever happen to humanity. So I don't see the doom and gloom.
I have no idea what "functional obsolescence" even means. So what if AGI can do everything better than a human? Why is that relevant?
I don't understand how the existence of a superior being is supposed to be terrifying.
https://duckduckgo.com/?q=repent+the+end+is+near+cartoon&t=o...
Today we have real (possibly bearded, possibly dressed in rags) individuals on the internet who "post" "Repent: The End is Near!" as millions look on and laugh.
"Does anyone even care?"
- about your post: No.
- about their lives: Yes.
Don't worry about the implications of having children: your main concern should be whether you will ever get laid! 8-))
Finally, remember that no one has proven successful at forecasting anything of consequence more than ~5 years into the future.
"Artificial Intelligence"? I'd settle for "Artificial Idiocy" right now. At least it could be trained to make a decent hamburger.
But AGI is another beast. We don't understand causality in ML. We are nowhere near getting there.
I could foresee building some incredible human-computer interfaces, but I don't think we will get to a self-motivated thinking machine any time soon. We don't understand how our own consciousness works. If we did, we would be able to better model human psychology and behavior, but we can't yet.
Some days I work until I feel like I'm going to collapse, but I'm doing it to provide my kids with a world-class education and the resources to compete with their peers. They daily demonstrate my impending obsolescence. Good!
I wonder how Cronus felt when he ate his kids.
We do not yet understand the principles of intelligence, nor do we even have a clear definition of it.
It's like trying to build a flying machine without understanding the principles of flight.
Unless there is a big breakthrough in neuroscience, I wouldn't lose any sleep over it.
As well as having two sides, consumer as well as producer, technology creates new possibilities.
Having said that, technology is the new capital, from a somewhat Marxist angle, and has been for ages; barriers to entry are what you make of them for whatever technology or product. It is worth making sure you're invested in whatever's needed to hedge your future liabilities. That investment may be financial, skills, side projects, land, family, or whatever else applies to you.