HACKER Q&A
📣 chatmasta

What if AGI isn't that close?


What if transformer models are the totally wrong approach? What if we're about to spend ten years iterating on a fancy parlor trick? We might invent something that almost seems like AGI but really it's just calling a bunch of APIs invented (and maintained) by humans.

What should we have spent the last ten years researching instead?


  👤 spicyusername Accepted Answer ✓

    What should we have spent the last ten years researching instead?
Literally solving any of the other critical problems ailing society.

    Climate change
    Gun violence
    Poverty
    Political corruption
    Wealth inequality
    Drug abuse
    Healthcare
    Etc
But "solving" those with the latest over hyped JavaScript framework, shitcoin, LLM, or cloud service isn't something we can pretend to do and rake in endless VC money, so instead we put all our energy towards eliminating artists and keeping children glued to tablet screens.

👤 mindcrime
What if transformer models are the totally wrong approach?

Frankly, I say "so what?" For this to be "wrong" implies that the only useful outcome is AGI. I posit that that is clearly not the case. Current approaches are obviously useful and create value. It's like trying to invent television and getting radio instead. OK, fine, you still got something useful, so what are you complaining about? And the other will still come in time.

What if we're about to spend ten years iterating on a fancy parlor trick?

If it keeps getting better at doing things that people find useful, then that's fine.

We might invent something that almost seems like AGI but really it's just calling a bunch of APIs invented (and maintained) by humans.

Not really relevant, IMO. If the thing we invent behaves in ways that can be classified as showing intelligent behavior, then it's intelligent. If it reaches the bar that most people are willing to say "that's general intelligence", then it doesn't actually matter how it works. OK, to be fair there are senses in which it matters (getting into things like explainability, alignment, etc. yadda, yadda) but in terms of saying "is this intelligent or not?" we don't really have to know or care about the inner workings. I mean, we consider other humans intelligent (well, sometimes) and we don't know all the details of how human intelligence works either.


👤 saurik
Am I the only person who doesn't actually want AGI? I'd prefer to not have to ever contemplate whether turning my computer off for the night is murder, if running the same tool over and over again in a loop is torture, etc. Is it really such a disappointment if all working on transformers ends up resulting in is a "fancy parlor trick" that happens to be really really useful?

👤 precompute
Of course it isn't close. ChatGPT is like a child making a paper airplane that flies a reasonable distance. AGI would be an adult that works in a group making a large rocket that can go into space.

Right now, all transformers really do is distribute data probabilistically over a certain space; the retrieval function then maps a query onto the stored "pattern" that looks most similar. It's an advance in lossy data compression. It's a black box with a single door for input and output, so it can necessarily never gain consciousness.
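The "retrieval maps to the most similar pattern" intuition can be sketched as a similarity-weighted lookup. This is a toy illustration of the general idea (dot-product scoring plus softmax), not how any particular model is actually implemented:

```python
import math

def softmax_retrieve(query, keys, values):
    """Toy similarity-weighted retrieval: score each stored key by its
    dot product with the query, softmax the scores, and return the
    weighted average of the stored values. Keys most similar to the
    query dominate the output."""
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    m = max(scores)                                  # for numerical stability
    weights = [math.exp(s - m) for s in scores]
    total = sum(weights)
    weights = [w / total for w in weights]
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# The query aligns with the first key, so the output lands
# close to the first stored value.
out = softmax_retrieve([1.0, 0.0], [[5.0, 0.0], [0.0, 5.0]], [[1.0], [0.0]])
```

Whether you call that "just compression" or "intelligence" is exactly the dispute in this thread.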

The only respite is that we have been free of "Open"AI's marketing for the last 24 hours; most of the "hype" has died down, and normal topics have returned to the front page.


👤 paxys
Climate models, protein folding, mapping the human genome, universal translation, speech transcription, chip fabrication, image classification and generation, conversational bots. These are some of the problems AI has been solving (or has solved) over the last decade or two of research. It was never "AGI or bust". There is an unlimited amount of progress to be made by going down the current path.

👤 brudgers
AI wizardry has always been gears and pulleys when you look behind the curtain. Transformers are no more or less alive than Eliza and Deep Blue.

But ChatGPT is probably more useful as a tool than either.

So although I am profoundly skeptical of AGI metaphysics, I have come to the conclusion that "How do we make an AGI?" is a sound engineering approach when the goal is building useful tools for the benefit of other people.

YMMV.


👤 jstx1
I disagree with the implication that it’s either AGI or nothing.

What we currently have is useful and that’s a big deal; the time and effort spent on transformers wasn’t wasted. I personally don’t care if/when we have AGI and whether that’s on the same research path that we’re on now.


👤 akasakahakada
An existential crisis for philosophy, which has zero working knowledge of AI but says a lot about it, and ends up all wrong.

👤 bluejay2387
Does anyone even really know what AGI means? Academic types seem to use it more as a way to distinguish their work from common applications of AI already in use than as any consistently defined term (I guess they got tired of kicking every working AI technology out of the field). It seems the term was created under the assumption that we would need near-human-level intelligence to have systems that could generalize training across multiple use cases. That is obviously not the case. Just as with chess and image recognition and other supposedly hard problems, we got to generalization with much less "intelligence" than we thought was possible. If AGI just means AI that can generalize, then arguably we already have it. If it's supposed to mean some kind of human-like intelligence, then someone should have done a better job of naming and defining the term. I tend to be uncomfortable with human-centric definitions of intelligence anyway, if only for the arrogance of it.

👤 ftxbro
As a large language model enjoyer I've been amazed at how steadily they are reducing the number of bits needed, on average, to represent the next token in the sequence. There is some irreducible amount of uncertainty they will reach eventually, but the trend isn't slowing down. The bonus is that as they have been reducing this number of bits, the models have been unlocking all of these amazing abilities, like theory of mind and getting near human level on all these standardized tests. It's amazing to watch, even if it eventually stops unlocking new powers.

👤 Spooky23
There’s a lot of breathless hype about AI and AGI. But it’s not wasted time; it’s a tool with utility that will likely become transformative for some use cases.

Even if it’s just a better tool, that’s a big deal.


👤 transfire
I think you are right. However, if LLMs can be synergistically integrated with symbolic logic systems (think ChatGPT + OpenCyc), then that might be a game changer that reaches AGI levels.

👤 chatmasta
Personal opinion: I saw someone in one of these threads mention playfulness. I think that's the key. Any AGI will be able to play and joke.

👤 neximo64
Very curious, have you looked at transformer models in depth?

👤 markus_zhang
We don't need AGI for many of these tasks. Modern capitalism already breaks work down into chunks so tiny that you could literally bring in a high schooler and train them to do the job. We don't hire high schoolers only because 1) it's probably against the law, and 2) we have a large pool of undergraduates and beyond to hire from.

👤 contemplatter
What makes you think AGI is not already here?

👤 blcknight
Could you please define acronyms? AGI = artificial general intelligence?

It’s US tax season, I thought adjusted gross income first.