What should we have spent the last ten years researching instead?
Literally solving any of the other critical problems ailing society:
Climate change
Gun violence
Poverty
Political corruption
Wealth inequality
Drug abuse
Healthcare
Etc
But "solving" those with the latest overhyped JavaScript framework, shitcoin, LLM, or cloud service isn't something we can pretend to do and rake in endless VC money, so instead we put all our energy towards eliminating artists and keeping children glued to tablet screens.
Frankly, I say "so what?" For this to be "wrong" implies that the only useful outcome is AGI. I posit that this is clearly not the case. Current approaches are obviously useful and create value. It's like trying to invent television and getting radio instead. OK, fine, you still got something useful, so what are you complaining about? And the other will still come in time.
What if we're about to spend ten years iterating on a fancy parlor trick?
If it keeps getting better at doing things that people find useful, then that's fine.
We might invent something that almost seems like AGI but really it's just calling a bunch of APIs invented (and maintained) by humans.
Not really relevant, IMO. If the thing we invent behaves in ways that can be classified as showing intelligent behavior, then it's intelligent. If it reaches the bar that most people are willing to say "that's general intelligence", then it doesn't actually matter how it works. OK, to be fair there are senses in which it matters (getting into things like explainability, alignment, etc. yadda, yadda) but in terms of saying "is this intelligent or not?" we don't really have to know or care about the inner workings. I mean, we consider other humans intelligent (well, sometimes) and we don't know all the details of how human intelligence works either.
Right now, all transformers really do is distribute data probabilistically over a certain space, and the retrieval function maps an input to the stored "pattern" that looks the most similar. It's an advance in lossy data compression. It's a black box with a single door for input and output; it can necessarily never gain consciousness.
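The "probabilistic retrieval" framing above can be sketched in a few lines: the core of attention compares a query against stored keys, turns the similarity scores into a probability distribution with softmax, and returns a probability-weighted blend of values, skewed toward whichever stored pattern looks most similar. This is a toy illustration of the mechanism being described, not a claim about any particular model's implementation.

```python
import math

def softmax(xs):
    # Numerically stable softmax: turns raw similarity scores
    # into a probability distribution.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attend(query, keys, values):
    # Score the query against each stored key (dot product),
    # convert scores to probabilities, and return the
    # probability-weighted average of the stored values.
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    probs = softmax(scores)
    dim = len(values[0])
    return [sum(p * v[i] for p, v in zip(probs, values)) for i in range(dim)]

# Toy data: the query is much closer to the first key, so the
# output is pulled toward the first value.
keys = [[1.0, 0.0], [0.0, 1.0]]
values = [[10.0, 0.0], [0.0, 10.0]]
out = attend([0.9, 0.1], keys, values)
```

Whether "weighted lookup over compressed patterns" can or cannot amount to intelligence is exactly the point being argued in this thread; the sketch just shows the mechanism in question.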
The only respite has been that we have been free from "Open"AI's marketing for the last 24 hours, and most of the "hype" has died down; normal topics have returned to the front page.
But ChatGPT is probably more useful as a tool than either.
So although I am profoundly skeptical of AGI metaphysics, I have come to the conclusion that "How do we make an AGI?" is a sound engineering approach when the goal is building useful tools for the benefit of other people.
YMMV.
What we currently have is useful and that’s a big deal; the time and effort spent on transformers wasn’t wasted. I personally don’t care if/when we have AGI and whether that’s on the same research path that we’re on now.
Even if it’s just a better tool, that’s a big deal.
It’s US tax season; I thought "adjusted gross income" first.