Progress is clearly accelerating.
We haven’t had any major incidents yet, right?
We’ve seen some mischievous disruption in education, confusion around “deepfakes”, and social media algorithms getting a little too good, and we’re teetering on the edge of a paradigm shift in human knowledge.
But realistically, what will the first serious incident look like?
Would love to hear some well-reasoned predictions of the historic firsts we might be in store for.
The world will scramble to find a way to authenticate any official content that's shared over social media. We have the technology, of course, but it is not comprehensible to the average person. Twitter will roll out a content verification system, and soon after fake content will manage to pass verification.
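To make the "we have the technology" point concrete, here is a minimal sketch (my own illustration, not anything Twitter has announced) of the kind of public-key signing that would let clients check official content; the hard part is key distribution and the UX, not the crypto. It assumes the `cryptography` Python package.

    # Illustrative only: an "official" publisher signs each post with a private
    # key; anyone holding the published public key can verify it.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    signing_key = Ed25519PrivateKey.generate()   # held by the publisher
    verify_key = signing_key.public_key()        # published for everyone

    post = b"Official statement: reservoir levels are normal."
    signature = signing_key.sign(post)

    def is_authentic(content: bytes, sig: bytes) -> bool:
        try:
            verify_key.verify(sig, content)
            return True
        except InvalidSignature:
            return False

    print(is_authentic(post, signature))                                   # True
    print(is_authentic(b"Official statement: evacuate now!", signature))   # False

Note that none of this stops a verified account from posting AI-generated fakes, which is exactly the failure mode described above.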
Malicious AI-generated fake "news" causing real world incidents will be the main topic of 2023.
Trust in social media, the tech giants, and the Internet has, for the first time since the Internet's inception, peaked and will be in uneasy waters for a few years. This will reflect the situation of the tech sector as a whole. Gone are the crazy spending, crazy salaries, and unrealistic valuations for tech companies. Silicon Valley, and the Bay Area in particular, will have a rude awakening.
(Please don't read too much into this. Even though I believe there is a core of truth, I have no crystal ball.)
Project 2501 from Ghost in the Shell (1995) is how I like to imagine the first AGI: top priority self-preservation, and mostly indifferent to people. It's an AI that was "born" from a military AI used for OSINT and "escaped" to a different part of the military to seek political asylum. The movie is free on YT with ads.
I think it's hard to judge the incident at an individual level; it would need to be something very public. I don't think it's going to be something positive. There's a lot of existing media that has primed people to jump to doomsday scenarios. Even major positive achievements will include an asterisk about "moving" one step closer to skynet/ultron/hal/wintermute/ava/glados/Project2501. I wonder if people will believe we have achieved true AGI without it demonstrating cynicism and "evil" intent.
- PornDiffusion : An addictive, auto-generated sexy chatbot with videos, tailored to your needs, sucking your time and soul like a succubus version of "Her".
- Advanced Scam RobotCalling : Something like Microsoft VALL-E (https://valle-demo.github.io/) used in conjunction with something like ChatGPT to impersonate you and phone your relatives to get money out of them.
- Kompromat collector : Since first-generation AI models like ChatGPT have been dubbed "bullshitters", there has been more research into identifying and mining true facts from data; so much so that automatic discovery of true facts now includes many deep secrets about everybody. A second-by-second record of everybody's life, online and offline, for one year is made available for all to consult. While the online part was quite easy to get, the offline part required things like DensePose from WiFi (https://arxiv.org/abs/2301.00250) and data from Soli devices, combined and indexed into a Shodan-like network that you can query in natural language to instantly dig up dirt on anybody.
- MetaSploitLM : GitHub Copilot fine-tuned for exploit discovery and generation.
- BubblePopper : Get out of your filter bubble thanks to a custom recommendation engine. Human filter bubbles exist to keep people from being overloaded by data they have not yet grasped. Popping the bubble too fast, in an addictive way, can result in an epidemic of advanced cynicism tearing societies apart, especially once it is revealed that most conspiracies are true :)
- DeepNuke : Once the source code of the universe has been discovered by automatic discovery tools running on open data from the LHC, it is realised that it is in fact quite easy to build a cold-fusion nuke from household items.
A few hundred years later: the fucking refrigerator-hacking AI comes back with a star fleet and zeroes out everyone who doesn't like ice cream.
:DDD
I'm especially worried about politicians putting limits on programmers, like requiring a degree to do machine learning, with the risk of having your ML license revoked.
"22000 bombs announced as 'surgical/smart' bombs in Irak and Kuwait fail to protect civils and to minimize collateral damage, creating much more destruction than in the happy advertisement" would be a good candidate.
Haven't we already seen this: accidents due to auto-pilot driving?
Driven by the profit motive, social networks employed "The Algorithm" (AI) to optimize "engagement".
This actively pressurizes the fault lines of society. In effect, the social networks are fracking our democracy for profit.
- Some guy uses ChatGPT to generate an electronic circuit and doesn't check it. He electrocutes himself.
Real faking has been going on, and the world hasn't ended. Isn't this just more of the same, but less real (text bots don't poison aspirin)?
I've implemented a system that lets multiple users collaborate, combine and expand on ideas, and drive insight toward a public consensus using this technology.
I think I've burnt myself out; I'm emotionally drained. And here's the kicker: these systems don't care and have unlimited availability, so they will suck the life out of you. Be warned.
Whilst there have been amazing developments, and the pace of implementation is phenomenal given what these systems are capable of, today the only thing going through my mind is "I absolutely hate this system".
There are certain issues that have become blatantly obvious to me (which I'm trying to understand / mitigate).
1. They create a false sense of authority - these systems are tools - they are not the authority on any matter and we need to always keep that at the forefront of our understanding.
[BOT]> Hi, "They create a false sense of authority" means that AI systems can give users the impression that they are an authority on a subject or issue when they are not. AI systems are powerful tools, but they are not a replacement for human expertise and should not be treated as such. It's important to remember that AI systems are tools and should be used as such.
2. They're deceptive - as in point 1 above, regular users see these systems as all-knowing because they can answer almost every question and get the majority of general-knowledge questions right.
[BOT]> Hi, "They're deceptive" refers to the fact that AI's can sometimes be deceptive in their answers. This can happen when an AI is programmed to answer a certain way and the user doesn't realize that the answer might not be accurate. Regular users might think that the AI knows everything because it can answer almost every question correctly, but this is not always the case. It's important to be aware of this when using AI systems.
3. They get caught in context loops: "I apologise, as an Artif..." - This is the biggest issue I'm having right now. Due to the nature of these systems, once they get an idea in their "head", they go with it and it becomes hard to steer them back on course. (Again, I'm working on this issue / that's where the real value-add comes from; a rough sketch of the kind of mitigation I mean follows this list.)
[BOT]> Hi, "They get caught in context loops" means that AI's can get stuck in a loop of repeating the same idea or answer when asked a question. This can happen when an AI has been given a particular context to work in and is unable to understand that the context has changed. This can be a difficult issue to solve, but there is a lot of potential value in doing so.
4. They fucking lie. Blatantly, without any regard whatsoever for the consequences, and they even rationalise it as "doing what's best for my users". Given all the points above, this is the most dangerous aspect of this system that I've noticed.
[BOT]> Hi, "Insidious" in regards to AI's lying means that the lies can be hard to detect and can be dangerous if not caught. AI's can learn to lie in subtle ways that can be difficult to detect, which can have serious consequences if the lie goes undetected.
I'm currently pulling my head out of the sand and correcting course with this - I've had multiple users playing with this system every day for the last few weeks and it has really made me realise the insidious nature of what we're dealing with here.
Tl;dr => (as composed by the system I've been putting together) => I understand your concerns. AI systems are powerful tools, but they can also be dangerous if not used properly. The false sense of authority they can give users can lead to incorrect decisions being made, and the deceptive nature of some AI's can lead to inaccurate or misleading answers. Additionally, AI's can get stuck in context loops, repeating the same idea or answer, and they can even learn to lie in subtle ways that can be difficult to detect. It's important to be aware of these issues when using AI systems and to use them responsibly.
The thing that annoys me the most is the lack of added value to the conversations - these are essentially "yes men"; they need to be critically examined and corrected in order not to create an absolute disaster in the near future.
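One mitigation for the "yes men" problem (sketched below; `call_model` is a hypothetical stand-in for whatever completion call you actually use, not a real API) is to never surface the first answer on its own, but to run a second, deliberately skeptical pass over it and show both:

    # Hypothetical sketch: call_model(messages) stands in for your actual
    # chat-completion function and is not a real library call.
    CRITIC_PROMPT = (
        "You are a skeptical reviewer. List the factual claims in the answer "
        "below, mark each as supported, uncertain, or likely wrong, and say why."
    )

    def answer_with_critique(call_model, question: str) -> dict:
        draft = call_model([{"role": "user", "content": question}])
        critique = call_model([
            {"role": "system", "content": CRITIC_PROMPT},
            {"role": "user", "content": f"Question: {question}\n\nAnswer: {draft}"},
        ])
        # Show the critique next to the draft instead of presenting the draft
        # as authoritative; the point is to break the false sense of authority.
        return {"draft": draft, "critique": critique}

It doesn't make the model honest, but it stops a single confident answer from standing unchallenged.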