HACKER Q&A
📣 avajen

How to Get Perspective on Latest AI/ML Developments?


It seems like recent developments in AI/ML (DALL-E 2 et al.) have been quite dramatic. Commentary ranges from "art is dead" to "art is now more empowered than ever." As someone who is an outsider to the field (although perhaps affected by developments in it), I wonder what I should do to get a level-headed perspective? Are there experts who give honest opinions semi-regularly that I can follow? Has this all been considered somewhere already in the philosophy of AI? Or is there a way I can gently gain enough understanding to parse the latest developments myself and draw my own conclusions?


  👤 IceMetalPunk Accepted Answer ✓
AI progress has been accelerating immensely, at least since the invention of the transformer in 2017, if not earlier. That puts us in a weird transitional era, where the high-level architecture of AI is known but the low-level learned weights are a black box, and its performance sits in an uncanny valley: good enough to seem human, but bad enough to clearly not be. As such, a lot of the discussion is more philosophy and economic hypothesis than settled science.
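To make the "architecture is known, the learned part is a black box" point concrete: the core attention operation of a transformer fits in a few lines. This is a rough NumPy sketch of my own for illustration, not any particular model's code; the interesting part is the billions of learned numbers fed into it, and nobody can read those.

    import numpy as np

    def attention(Q, K, V):
        # One attention head: compare each query to every key,
        # softmax the scores, and return a weighted mix of the values.
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)
        scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
        weights = np.exp(scores)
        weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
        return weights @ V

    # Toy inputs; in a real model, Q, K and V come from multiplying token
    # embeddings by huge learned matrices, and that learned part is the black box.
    rng = np.random.default_rng(0)
    Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
    print(attention(Q, K, V).shape)  # (4, 8)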

The philosophy comes from common questions about whether these AIs, with their human-like behavior, are actually capable of thought, understanding, or consciousness. There are no definite answers there, because philosophy has long known that those words are ill-defined. The age-old "P-Zombie" thought experiment suggests that we can only really judge whether something is conscious by its behavior, so if an AI behaves like a conscious being, we have to call it one. The question of what a conscious being behaves like is more complicated, in part because we don't have a clear understanding of which beings are conscious in the first place.

The only entities we're pretty sure are conscious are humans, so we often use ourselves as the measuring stick. These AIs don't quite behave like humans in many cases, but in many others they behave remarkably like us. Even in their failure cases, they often fail the way humans, or facsimiles of humans, would. It could be argued that consciousness and understanding are not binary but continuous spectra: a dog, for instance, may be conscious, but less conscious than a human, and that's certainly true of its understanding. In such a framework, taking the P-Zombie concept into account, it may be most honest to say that current AIs are in fact conscious, understanding entities, just less so than humans. With that in mind, all the research on the latest AIs seems to suggest that scaling them up (in terms of the number of parameters/weights) increases performance without any foreseeable plateau, so there's really not much reason to think AIs won't soon reach the level of the human mind.
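To give a feel for what "scaling up keeps helping" means: the published scaling-law results roughly say that loss falls as a power law in parameter count. Here's a toy sketch with invented constants (not real measurements), just to show why a power law has no plateau in sight:

    # Hypothetical scaling law: loss ~ a * N**(-alpha).
    # The constants below are made up purely for illustration.
    a, alpha = 10.0, 0.07
    for n_params in [1e8, 1e9, 1e10, 1e11, 1e12]:
        loss = a * n_params ** -alpha
        print(f"{n_params:.0e} params -> loss ~ {loss:.2f}")

Each tenfold increase in parameters cuts the loss by the same proportional amount, which is why people keep betting that bigger models will keep getting better.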

In terms of the economic consequences of AI... well, I'm no economist, so take anything I say on the topic with a pile of salt. But I do think the major advances have been happening in the last 5 or so years (since it seems almost every bleeding-edge AI these days incorporates a transformer somewhere in its architecture, if it isn't built entirely from them), so it may be too soon to know for sure what the impact might be. That said, I tend to dismiss people who yell that "AIs are takin' ar' jerbs'!" on historical context alone. Every time there's a new technology, industries shift. People yell that they're losing their jobs, then eventually the tech ends up complementing their jobs instead. In the cases where it truly does remove jobs, it creates new jobs in a different industry. That's been true at least since the invention of the printing press, so I see no reason to think AI is any different from any other invented tool. DALL-E 2, for instance, probably won't replace artists, but it will speed up and enhance artists' ability to quickly prototype ideas and get stylistic inspiration. And more importantly, art can't die just because tools can make art; if art dies when it's no longer profitable, then I wouldn't consider it art in the first place, as I think art is meant to evoke pleasure or emotion, not to make a buck.

But honestly, in the end, I wonder if "it'll take our jobs!" is even a valid complaint. Obviously the world is far from ideal, but shouldn't the ideal be a world in which "jobs" are taken for pleasure rather than required for survival? If we have the tools to do the work, shouldn't that free us up to enjoy our lives instead of basing our every decision on "how many bits of data will your bank send to mine?" Again, I'm fully aware this is a lofty ideal that's unlikely to occur any time soon, but perhaps instead of being afraid that tools will take jobs away, we should use those events as a wake-up call that maybe society shouldn't require jobs for survival when we have the tech to do the work for us? At what point does "do something for me that our tools can't, or else you should starve on the street" fail to sound reasonable to us as a culture?

Anyway, that's my idealist rant over.