What are the early signs of singularity?
Post-singularity, people (?) might look back and attribute certain events as major indicators of the impending singularity. But for someone without that hindsight, looking into the future, what types of indicators would you look for? Also, assuming that even if the singularity is achieved (?) in some locations, the effects would take time to spread. Say it has already been reached on the opposite side of the world. How long would it take for it to become apparent, and what are some indicators? Also, happy Thanksgiving.
I think a singularity is now impossible... what we have to do is figure out how to avoid destroying humanity in the next century. Our political systems are imploding because of capture by the donor class. The emphasis on extracting profit at all costs has increased the fragility of our supply chain to the breaking point.
Physics and Biology have both sufficiently advanced to make an accidental destruction of the human race a non-zero probability.
I see a collapse type of singularity as far more likely than the rise of any AI-powered superintelligence.
It appears I'm not the only one, judging by the other comments here.
One metric will be the interfacing of computing hardware with biological systems. Computing hardware is still far too large. A human red blood cell is discocyte-shaped, approximately 7.5 to 8.7 μm in diameter and 1.7 to 2.2 μm in thickness. By comparison, the current state of the art in microcomputing was heralded more than 2 years ago when the University of Michigan announced that its engineers had produced a computer measuring 0.3 mm x 0.3 mm - or 300 μm x 300 μm. Getting close, but still about 2 orders of magnitude too large to go to the Apple Store and drink a bottle of iFluid containing millions of networked microcomputers that can be transported in our circulatory system to interface directly with the nervous system. Meanwhile we have to work with neural implants.
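For scale, a rough back-of-the-envelope comparison (a minimal Python sketch; the dimensions are the ones quoted above, and the cube/disc volume approximations are my own simplifying assumptions):

```python
# Rough scale comparison: Michigan micro-computer vs. a red blood cell.
# Dimensions are those quoted above; volumes assume a cube and a simple
# disc, so treat the results as order-of-magnitude estimates only.
import math

chip_side_um = 300.0       # 0.3 mm x 0.3 mm device
rbc_diameter_um = 8.0      # ~7.5-8.7 um
rbc_thickness_um = 2.0     # ~1.7-2.2 um

linear_ratio = chip_side_um / rbc_diameter_um                          # ~37x
chip_volume = chip_side_um ** 3                                        # cube assumption
rbc_volume = math.pi * (rbc_diameter_um / 2) ** 2 * rbc_thickness_um   # disc approximation

print(f"linear ratio: ~{linear_ratio:.0f}x")              # roughly 1.5-2 orders of magnitude
print(f"volume ratio: ~{chip_volume / rbc_volume:.0f}x")  # several more orders of magnitude
```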
- Financial company run by an AI outperforms human-run companies.
- Self-driving cars actually work reliably.
- Robot manipulation in unstructured situations starts to work.
There's a short story by Kafka, "Investigations of a dog," that seems to ask the same question from the perspective of a dog. This dog notices that there are phenomena that it can't explain, such as why dogs dance around and make certain noises, just before their food appears. On the one hand, it can't manage to get its fellow dogs interested in these questions. On the other, it catches glimpses of a higher species of dogs who are invisible but somehow bring its food.
I'm thinking in a similar vein, of what behaviors are inexplicable in humans, such as why we hold hands and recite certain verses before we receive our food, or are so mesmerized by particular sequences of tones and sounds that some other humans seem compelled to make.
Some possible clues:
- Hearing new kinds of music that are noticeably not meant for human listeners, e.g., not based on an analysis of human music. I'm only imagining that a real intelligence will eventually get sick of our music and come up with something that it prefers. If it cares about music, of course.
- A sustainable improvement in the management of humans, resulting in more uniform and better health. This is an analogy to the fact that our livestock live under more uniform conditions than wild animals. Assuming that humans are useful to an AI, or that it's even aware of our existence.
- A use for the blockchain. ;-)
I imagine paying attention to the capabilities of search engines will be important. Classical computing is motivated by a desire to retrieve information quickly. Search engines are motivated by a desire to retrieve information using fuzzy semantic concepts like language, features of an image, etc.
Much of modern deep learning is motivated by modeling the task of information retrieval as a differentiable matrix multiplication (e.g. self-attention) in order to back-propagate error to the parameters of a large graph using stochastic gradient descent. In theory, this can give us a single checkpoint, which runs on a single GPU, that does more-or-less all of what "core" Google search does.
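A minimal sketch of that idea - retrieval written as nothing but matrix multiplies and a softmax, hence differentiable and trainable by backprop - in plain numpy (shapes and weights are illustrative, not any particular model's):

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Soft lookup: every token 'retrieves' a weighted mix of all tokens.
    Because everything is matrix multiplication plus a softmax, the whole
    lookup is differentiable and its parameters can be trained with SGD."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # similarity of each query to each key
    weights = softmax(scores, axis=-1)        # soft "which entries match?"
    return weights @ V                        # weighted retrieval of values

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 16))                  # 5 tokens, 16-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(16, 16)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)    # (5, 16)
```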
I don't think that quite guarantees a singularity. There will need to be a lot of work afterwards.
Humans can update their priors by collecting new information about their environment in real-time. They can also (sort of?) simulate situations and update their priors from that. Reinforcement learning could be crucial to solving these issues as it allows agents to learn through both real and simulated environments.
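As a toy illustration of mixing real and simulated experience, here is a Dyna-Q-style sketch; the state/action representation, hyperparameters, and function names are made up for illustration:

```python
import random
from collections import defaultdict

# Toy Dyna-Q: the agent updates its value estimates ("priors") from real
# transitions, and also replays transitions from a learned model of the
# environment -- i.e. it learns from simulation as well as from reality.
Q = defaultdict(float)          # Q[(state, action)] -> value estimate
model = {}                      # (state, action) -> (reward, next_state)
alpha, gamma = 0.1, 0.95

def q_update(s, a, r, s_next, actions):
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

def dyna_step(s, a, r, s_next, actions, planning_steps=10):
    q_update(s, a, r, s_next, actions)        # learn from the real transition
    model[(s, a)] = (r, s_next)               # remember it
    for _ in range(planning_steps):           # learn from simulated replays
        (sim_s, sim_a), (sim_r, sim_next) = random.choice(list(model.items()))
        q_update(sim_s, sim_a, sim_r, sim_next, actions)
```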
Robotics may need to catch up, although recent advancements are pretty crazy.
Assuming we don't all die first, of course.
I think you need to define singularity here. If it works historically like a black hole -
Basically, a black hole is not defined for the external observer by the singularity, but by the radius under which the escape velocity exceeds the speed of light. The external universe observes a steepening gravitational force, and then an impenetrable black wall.
If you look at human history, lots of things have been accelerating since the dawn of industrialization (and after scientists and mathematicians figured out a mode of existence where, instead of hiding their discoveries, they flaunt them).
Is the Jacquard loom the first sign of impending computational nirvana? From a historical perspective a hundred years is a really brief time, so if I wanted to go Neal Stephenson-witty I would say yes, that was the first sign, and the founding of the Royal Society was another.
It depends how far from the event horizon you want the signs to be, and whether we are on a historical gradient towards it - which we probably won't observe, since a) it's in the future and b) it's an event horizon, so it will completely surprise us.
All of the above was more or less tongue in cheek.
Isn't it somewhat inherent to the singularity concept that there won't be early signs? Either the machine has achieved runaway self-improvement capability or it hasn't.
Systems around us and designed by us tend to have a diminishing-returns problem. I wonder what the limiting factor in the architecture of the human brain is, should we want to scale it further. How much more intelligent can the cortex get without a massive architectural shift?
I like to think that our first intelligent machines will run on some very specialized hardware with, by definition, a particular designed architecture. I suppose both will have many different limiting factors on how deeply a machine can reason about itself. That's why I believe there won't be a runaway effect in intelligent modelling/reasoning. It'll be a step function.
Another related issue is that a new architecture which breaks the scalability limits of the previous generation will produce new and distinct entities. If intelligence and self-interest often correlate, a machine might be wary of creating a better version of itself lest it be replaced.
What if universe inherently limits the possibility of singularity?
I think there might be a limit to the potential intelligence of a system due to physical constraints such as the speed of light.
Perhaps such inherent limitations logically prevent the destruction of the universe by a singular organism.
It's an exponential curve; from the perspective of people 100,000 years ago, we are already there. When computers start 10x-ing every month, the days of the world operating on human timescales are probably ending.
Here are some early signs of the anti-singularity:
+ Intelligence is decreasing worldwide, due to both accumulation of mutations detrimental to intelligence (dysgenics) and differential fertility (less intelligent people having on average the most children)
+ Modern society dominated by cancerous/parasitic bureaucracies (inefficiency generators)
+ Degradation of the definition of genius and societies hostile to genius
+ Dwindling number of genius individuals
+ Consequently, massive decrease in the number of ground-breaking inventions and scientific breakthroughs
As intelligence continues to decline, growth will reverse into decline and inefficiency, as the ability of people to sustain, repair, and maintain the highly technical, specialized, and coordinated world civilization is lost.
Collapse and new dark age.
Covid is a pretty good example where initially people were aware of it but weren’t taking it too seriously.
It’s also not clear what you mean by singularity but I’ll assume it’s the advent of intelligence in machines.
I think a big one is object recognition. We've come a long way, but there's still a deep lack of understanding about the world in the ways humans normally see it.
When you can grab a GitHub repository that can detect most objects in the world and install it on a Roomba so it doesn't randomly bump into things anymore, that'll be a pretty good sign.
Or perhaps in this case, an OpenAI api.
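A hedged sketch of how far off-the-shelf detection already gets you today, and where the gap is (this assumes a recent torchvision install; the image file name is hypothetical):

```python
import torch
from PIL import Image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor

# Off-the-shelf detector trained on COCO (~80 everyday object classes).
# The gap the comment above points at: "most objects in the world" is a far
# larger and messier label space than any current pretrained model covers.
model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

img = to_tensor(Image.open("living_room.jpg"))   # hypothetical image
with torch.no_grad():
    pred = model([img])[0]

for box, label, score in zip(pred["boxes"], pred["labels"], pred["scores"]):
    if score > 0.5:
        print(label.item(), round(score.item(), 2), box.tolist())
```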
Looking back, we don't see any singularity in the past, and so I believe we will not experience a singularity event in the future either. If anything, it will be a point of no return in the collapse of a complex system, not a singularity of progress. Why? Because the law of entropy still applies here. The existence of any organized, advanced system is actually working against entropy. Chaos is the normal state, not organization. Therefore, in the long term, any system will collapse. Conversely, continuing to build up any highly organized, advanced system is hard. Evolution is blind, and so is technical and organizational advancement. That's why we experience the law of diminishing returns in each and every area.
Take AI for example. The popular idea is that once AI is smart enough to design and implement the next generation of itself, it will develop a runaway superintelligence, a singularity. But it won't. Why did DeepMind stop AlphaZero after a few days of training? Because it was smart enough to defeat the hitherto best chess engine on the planet, Stockfish? No; because even if they had let it run a year longer, there would have been no significant progress. After a stable learning equilibrium is reached, to become smarter the AI needs to increase its capacity, its connections and parameters. But to train a larger network, it will need more data, more time, and more energy - exponentially more, if there is no breakthrough in learning heuristics. As the search space expands exponentially, there is no way the same old learning heuristic can adequately explore the new space with guaranteed success. It must experiment with different designs, spawning new individuals, accepting loss and death. It must evolve! Yes, it might do this faster than humans did, but there will be no superintelligence overnight. The process will probably create a sea of different kinds of intelligence, each better in certain domains but none better in all domains.
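A toy numeric illustration of that diminishing-returns argument, assuming a power-law relation between training compute and error (the exponent 0.1 is made up for the example; real scaling exponents vary by task and model family):

```python
# Assumed power law: error ~ k * compute^(-alpha). Under this assumption,
# each multiplicative increase in compute buys a smaller absolute gain.
def error(compute, k=1.0, alpha=0.1):
    return k * compute ** (-alpha)

for c in [1e3, 1e6, 1e9, 1e12]:
    print(f"compute {c:.0e}: error ~{error(c):.3f}")

# Every 1000x more compute only roughly halves the error here, so "just keep
# training" stalls without a change in architecture or learning heuristic.
```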
A machine, or set of machines, which can design and build all parts of themselves, with modifications to maximize a goal.
While we have machines that can assist in building themselves (e.g., computers are used to design and make computer chips), we will see some progress. But we won't see explosive progress until the entire chain is automated, including the decisions about what to build next.
- prices on electronic components skyrocket, everything out of stock despite fabs running at 100% and nobody able to pinpoint where all that output is going
- energy shortages, whole power stations co-opted to supply single data centers
- weird untraceable financial machinations nobody really understands
- money appearing out of nowhere, new kind of money, financial regulators not doing a damn thing to audit obvious fraud
Unpopular opinion: “singularity” is empty marketing hype masquerading as eschatological theology. The stark reality is that AI technology is inextricably tied up with market dynamics. Future innovations will continue to be largely funneled into hyper-optimizing user engagement metrics and ad revenue for $BigCo, not creating the 2001 starchild or whatever.
In the popular (both pre- and post-pandemic) game Plague Inc., the best strategy is to make your pathogen infectious but to aggressively avoid causing any symptoms that might result in it being discovered. When you've infected nearly everyone, then it's time to mutate into something harmful, killing off the population before they can respond.
The only thing I can hope is that for an AI to grow smart enough to strategise about how to take over the world silently (and turn it into computronium or whatever), it first has to gather a certain critical mass of computing power. So perhaps some powerful computational systems, either centralised, like a cloud provider or a singularly powerful quantum computer, or decentralised, like a blockchain or a botnet, might be the harbinger. You've got to hope that the AI is dumb and clumsy before it's transhuman and you're dead.
Good luck!
Rising level of ambient weirdness
- Patches on patches and old bugs resurfacing with new ones will cause sysadmins to prematurely age and drop out of the profession
- Despite ever-more sophisticated designs and capabilities, machines will struggle to run the latest versions of applications that do the same thing as their predecessors decades ago
- Computing systems will feel more and more like houses of cards held together with string and tape, as "excess value" is aggressively engineered out
Oh wait. I was describing the Anti-Singularity (a pet theory of mine that all technological development inevitably outpaces our ability to maintain it, and that we will end our days desperately trying to get barely-functional systems, which we have no hope of re-creating, to do something useful).
I'm more concerned about the development of superviruses.
Imagine something more lethal than ebola with the transmissibility of the flu.
This is how I think our civilisation will end & I also think it's the reason why there doesn't seem to be anyone out there.
It's already here. Politics and culture are first. There are now two sides. Both sides can't score punches on the topics they desire, so everything is merged. Take your shots where you can.
An example is using the Covid shot as a proxy for dissent. If you don't believe in the shot but take it anyway, you can make up for it in another arena - maybe boycotting a virtue-signalling leftist organization.
My view of the singularity is that we are already in it, because everything is linked and everyone is on a side. There is no such thing as apolitical. You're either playing the game or you are a pawn. There is no out. The singularity is here, and it's winner take all.
So there was a show by the guy who made Westworld, called Person of Interest. The plot is pretty crap, but with regard to AIs that take over the world, it is the only one that tried to deal with it.
Edit: spelling
Computers/AI overtake humans at different skills at different times: at calculating numbers ages ago, at Go recently, at driving without hitting fire trucks maybe in the near future, and so on. For the singularity you really need computers outdoing us in basically all aspects, so you can tick off the remaining skills needed, like common-sense understanding of the world, which is yet to come. I'm curious whether we'll see them pass a proper Turing test around 2029, when Kurzweil predicted it.
Probably some major breakthrough in neurotechnology or longevity research that hasn't happened yet. Any step in that direction has a cascading effect for the future towards singularity because either (a) humans become augmented and transferred into immortal machines (which will eventually construct the singularity) or (b) they just live long enough to do the same, or both.
I'm afraid rockets and 3D avatars are not getting us any closer.
Solving the Turing test, or getting close to it. Basically, being able to have a real conversation with a computer. Of course with the caveat that an AI smart enough to solve the Turing test might also already be smart enough not to solve it...
But to be honest, I see no indications of any true AI happening anytime soon. And then it would still be a big step from AI to an almighty, all-knowing AI.
If we are talking about an AI singularity, then to achieve superintelligence it should first achieve human-level intelligence (and, to be very specific, human-level intelligence across a wide variety of unstructured situations).
I think two tell-tale things will be:
- AI passing the Turing test
- AI starting to improve itself (after passing the Turing test)
Are there any singularity scenarios that don’t require super-intelligence of some kind?
For example, if we either rendered ourselves unable to procreate and had to rely on external tools to breed and create children; or we found tools as a better way to quickly adapt in a single generation…
Would that be considered a technological singularity?
I have this idea that, unless it is very rapid like in Greg Bear's Blood Music, it isn't going to be very noticeable if you are on the inside of the change. You may notice the change, but just be caught up with it and think it is relatively normal.
Superintelligence by Nick Bostrom speaks to this. Highly recommend reading it.
AI board member with a vote.
Entire board is an AI.
I would look at the acceleration (2nd derivative) of global real GDP, and also at the country level to detect it happening in a country.
If you want a single metric: a sudden uptick in the fraction of the total mass available to us that is dedicated to computation.
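A minimal sketch of the GDP-acceleration check described above (the annual real-GDP index values are placeholders):

```python
import numpy as np

# Hypothetical annual real GDP series (index values are placeholders).
gdp = np.array([100.0, 102.5, 105.1, 107.8, 110.9, 118.0, 132.0])

growth = np.diff(np.log(gdp))       # ~annual growth rate (1st derivative)
acceleration = np.diff(growth)      # change in growth rate (2nd derivative)

# A sustained, clearly positive acceleration -- growth itself speeding up --
# is the kind of signal being described here.
print(growth.round(3))
print(acceleration.round(3))
```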
Is the singularity when God returns to make his presence known to man, or when we discover God by scientific means ?
The way social media is radicalizing people. Feels a little like a young baby hitting its toys together at times.
If the emerging AI is smart and can read, absolutely nothing at all, until it’s too late! :-)
Honestly, when very smart people who are grounded in reality start talking about it. Some names of people who have a pretty general understanding of the many pieces of the puzzle (both tech-wise and people-wise) necessary to propel us to the singularity:
Bill Gates
Jim Simons
Mark Zuckerberg
Larry & Sergey
Plus you have the guys who do something else entirely for a living but are so g-loaded that they won't be able to ignore the singularity and in fact will participate... again, some names:
Ed Witten
Terence Tao
Ignore the techno-utopian snake-oil salesmen such as:
Elon Musk
Ray Kurzweil
Michio Kaku
And also those whose career depends on singularity talk ranging from Oxford to the rationality blogosphere
AI image synthesis is getting very close to being better than humans at art.
"Early"?
1) Mass production is achieved
2) Computers are invented
3) Robots are invented
4) Idiots try to make 2) & 3) "smart" and then build them using 1)
5) After failing at 4), MORE idiots will try and try and try until it happens
* A crisis, worldwide, will be used to justify this!
And with surprise on their faces, the idiots will ask "how did this happen to us?" when, of course, it is too late.