What are some unconventional predictions you have for the development, impact and overall direction for AI over the next 5 years? Do you think we've hit a plateau, or will there be changes and breakthroughs that most people don't expect?
This bankrupt, shitty, overrun society keeps thinking the ends are important. Those people are trash. No one knows what the real ends are, and reality is an interactive, integrative process for learning and figuring things out. The only human value is escaping path-dependence and establishing a right to will, but AI keeps presenting pat, simplistic, convenient truths that have no basis and no argumentation for or against, yet which hem us into a middle line of middling, of being merely seemingly truthy. And without reason or cause, just by navigating the most mediocre path.
An utterance is the expression or outcome of an internal thought process. A machine learning model is trained on the outcome, but not the process. Utterances look like thoughts to us; like pareidolia, we can't help but find thoughts in utterances. This is what makes ChatGPT so compelling.
People don't learn how to think either - it's a capability that grows from a genetic endowment. After the next AI winter, when we get over the spectacle of Artificial Communication, we may begin to examine our own generative capacities for thought. We'll know we're on the right track when AI is slow, uncooperative and childish, fails to thrive when exposed to nonsense training data, and requires very little of the right kind of training data to acquire its language.
The combination of generative content and every platform relying on insular bubbles to keep people engaged is so dangerous it should have been regulated yesterday.
AI will not hit a plateau for a long while, but local and decentralized tooling will.
What are your AI hot takes?
That AI does not exist and will not in our lifetimes. BigData scientists have co-opted the term, which means something containing sentience; some even use it as an all-encompassing term for their field now. I do not blame them. Co-opting the term is good marketing.
They have built an amazing subset of machine learning. Language learning has come a long way. AGI has come a long way. None of that contains real sentience, just convincing mimicry. Humans barely even understand how our own conscious minds work. AI would not need to be fed terabytes of data. Like a child, it could observe the world around it, start asking intelligent questions and derive intelligent conclusions, even if sometimes wrong, without being force-fed intelligence. This does not yet exist. Unlike a human, however, true AI would lack bias, censorship and confident lies. There would be no need for word-play trickery to get AI to tell us what it has learned; it would be able to say with confidence what it truly knows and to show its work as to how it reached its conclusions.
That is not in any way to suggest that what has been created thus far is not useful. I could envision hundreds of ways to implement the existing tools into businesses, governments, military and beyond. Curious what rate limits will exist.
Being technically good isn't what's important here; it's about the brand and reputation you build up. People won't just stop buying music from human bands and singers, or art from human artists, or games from human game developers and studios because AI can make something 'better', since the creator or brand is like 90% of the selling point.
I think that hardware reliability might sneak back into the picture as the odd bit error may prove to be far more disruptive to these optimized training algorithms.
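A quick way to see why a single flipped bit could matter: in IEEE-754 float32, flipping one exponent bit can turn a modest weight into an astronomically large one. This is just an illustrative sketch (the flip_bit helper is hypothetical, not part of any training framework):

    import struct

    def flip_bit(x: float, bit: int) -> float:
        """Flip a single bit in the IEEE-754 float32 representation of x."""
        packed = struct.unpack("<I", struct.pack("<f", x))[0]
        return struct.unpack("<f", struct.pack("<I", packed ^ (1 << bit)))[0]

    weight = 0.5
    # Flipping the high exponent bit (bit 30) blows the weight up to ~1.7e38,
    # which subsequent gradient updates would happily propagate.
    print(flip_bit(weight, 30))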