If AI can potentially wipe us out, why is there a race to achieve it?
I'm not saying that a true AGI will definitely wipe us off the face of the planet, but there are obviously risks involved in such a huge achievement.
Given the not-so-good record of big corporations (even FAANG) in terms of security and privacy, I don't understand why there is such a competition among companies to create AGI.
It's like saying that "creating X is potentially going to kill you and everyone else, but if it doesn't, it will make you Y amount of money". How do you rationalize a decision that accepts this risk?
Nobody will agree with me on this, but I do not expect anything resembling AI to exist in the next few hundred years. All I have seen is adaptive ML mimicry, pseudo-AI just a step short of a Mechanical Turk. I would however expect to see weaponized autonomous drones that can decide what is a target, but I would not call that AI either. That is whatever one would call an interconnected Quake III gaming engine. Everything else has just been marketing fluff. I define AI as a sentient, self-aware system that can learn, adapt, decide entirely on its own, and set its own boundaries and ethics. Maybe the closest I would accept would be "Sonny" from I, Robot or Lt. Commander Data from Star Trek. Such entities are a long way out.
The same way we rationalize destroying our environment...
Local optima.
Meaning: as long as the climate change problems are not felt by the people doing the most pollution, they will continue to do it.
The ones developing AGI think that they can prevent it from destroying humans, or at least themselves.
When developing the atomic bomb there were several concerns, one of them being that it could ignite the whole atmosphere... but even self-destruction wasn't a deterrent. As long as your goal is achieved (destroying your enemy), all the other risks are accepted.
Will there be competition between rogue AIs? Once one is made, there will soon be others in competition with it - this will lead to a death struggle between more and more AIs.
Will the winning AI, if there is one, look on animals as pets?
Will they breed people like we breed greyhounds?
Will the top AI advance to become dominant and make more of itself to the point where the earth has one hive mind of an AI?
They will engage in a different form of competition from humans, no more sex/food wars - once one wins it might have an IQ of, say, 4 million, coupled with more or less infinite memory and a cycle time in the dozens of gigahertz, AND be massively parallel.
Are the stars owned by such AIs? Do they own large areas, expanding star by star, still limited by light speed?
Will two such AIs ever meet and make war, or will they rationally decide which is the superior AI mind, with the inferior one seeing the better ideas of the other and adopting them? Conversion beats conquest in resource gathering.
Will religion emerge and wars to the death occur between the two AI systems?
If nuclear weapons have the potential to destroy us, why was there a race to build one?
I don't think there is a race to develop that kind of AI yet. Currently AI is just about making the computer learn from examples instead of using programmed logic.
Many tasks are too hard and complex to be solved through programming and traditional algorithms, such as detecting sheep in pictures. With deep learning, that problem is much easier since the computer can just learn from examples.
Most of the AI drive is to solve these problems which were difficult using traditional algorithms.
Yes, there are efforts going on to learn contexts, build a knowledge graph, and slowly work towards a generalized intelligence. But I don't see that we are anywhere near.
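To make the "learn from examples instead of programmed logic" point concrete, here is a minimal sketch of a learned classifier. It uses scikit-learn's small built-in digits dataset as a stand-in for the sheep-in-pictures example (which is hypothetical and would need a labeled image collection); the model and parameters are illustrative, not anyone's specific system.

```python
# Minimal sketch: no hand-written rules for what a digit looks like;
# the model infers a mapping from pixels to labels purely from examples.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Labeled examples: 8x8 grayscale images of handwritten digits.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# A small neural network trained only on the example data.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)

print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

The point is that the programmer never encodes "what a 3 looks like"; the decision rule is fit from the labeled examples, which is why this approach scales to tasks (like spotting sheep in photos) that are impractical to hand-code.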
Because it has practical utility beyond "potentially wipe us out". You could substitute "AI" with a long list of things: industrialization, gene therapy, atomic energy, etc.
Just couple Boston Dynamics hardware with DeepMind software and we're almost there (no emergent consciousness for sure, but still mechanistic super-human capabilities that can be deployed in a closed RL loop), aren't we?
Corporations want to make money. They can think long term but not THAT long term. Anyway the bigger risk for us is climate change IMO.
The number of people who truly believe that AI has a serious potential to wipe us out is minuscule, and they are mostly not in decision-making positions.
Since you use the term AGI, I suppose you've been exposed to this idea via LessWrong? I'd caution you that they have a very non-mainstream view of AI potential, even by the standards of AI researchers. Their concerns about existential risk stem from simplified mathematical models that have little or nothing to do with how AI actually works, making the conclusions they draw very tenuous.