HACKER Q&A
📣 akasakahakada

AI reads books, humans do too. What's the difference?


Some people never stop telling others to read philosophy books because they can gain intelligence from them. But then they reject the idea that AI has also learnt a lot from philosophy texts.

The majority of people hold the belief that reading and understanding words cannot equip an AI with knowledge and intelligence. But that is exactly how we learn.

Why this contradiction?


  👤 ergonaught Accepted Answer ✓
You are asking people who, as a rule, do not understand what they do, how they do it, or why they do it, to explain how they are different from something else they don't understand.

You are asking people who are not smart enough to recognize that they are not informed enough, nor in most cases smart enough, to understand anything about the subject of your question, and they will be quite happy to answer. With endless rationalizations that seem plausible enough to them based on their prior associative model of the world. Just like an LLM.

It is difficult to identify the practical difference in output between LLMs and the "left brain interpreter", which is the source of almost the entirety of most people's subjective experience and their "comments on HN" output.

There is a difference between the spewing forth of gibberish, which is the normal human experience, and "intelligence" or "thought". If you haven't yet experienced that difference yourself, within your own lived experience, it would probably be impossible to convey to you in writing... but there is.

On a functional level, as well, it's simply unlikely that the models constructed to date have the capacity to develop what we would call "intelligence" via training. We'll almost certainly get there, but just feeding "books" into what we have today is not likely to produce "intelligence".


👤 jacknews
I think people parroting the 'stochastic parrot' view of these large language models are failing to understand the scope for complex emergent behavior when these models have 100s of billions of parameters.

It's like saying 'it's just atoms and molecules bouncing off each other and sometimes splitting or joining': what do you mean, 'storms, clouds, rain', or 'collections of them can make copies of themselves', etc.?

It's Searle's old Chinese-room argument: that the machine doesn't really understand the words it's manipulating.

The standard argument (or maybe just my argument) against the Chinese room is that if the 'symbols' are at a lower level (audio, video stimuli, etc.) and you have enough scale, then you can imagine intelligence emerging somehow, like flocking birds.

LLMs are trained at the language level, but even then, I think with big enough scale they do in fact 'learn' concepts and form a 'world model' from the material they are trained on. How else could you ask for something in one language and have it output the result in another?

In some ways we can be grateful. Imagine trying to impose ethics on an 'intelligent insect', trained on much lower level inputs. These LLMs are being trained directly on human culture as input, and if we're careful with the input they will reflect the best of it.


👤 laserbeam
The biggest contradiction is between "some", "they" and "the majority of people". These are often not the same people. Talking about a nebulous group of people, or about common sense, common advice, or common knowledge, is never concrete enough to build a good idea or theory about how things work or don't.

But, here are some of my views on the topic.

- ChatGPT tends to please the prompter. If you ask it for philosophical content and start correcting it... it will yield to your views and make sure you are happy with what it tells you. Most philosophers I've read (admittedly few) would die on a hill with their idea and make their writing defend it from as many attacks as possible. What ChatGPT does is closer to what a salesman would do: tell you what you want to hear. If a human did that to me, I'd rarely consider it proof of learning or knowledge.

- ChatGPT tends to write good-sounding philosophy. Maybe philosophers are taking us for fools and are also just writing word salad half the time that sounds smart to the rest of us.

- ChatGPT tends to write good-sounding philosophy/stories/sometimes code. Maybe it's just hard to differentiate knowledge from semi-random output.


👤 bndr
I think the main point is that the AI doesn't really read and understand; it reads and remembers patterns of words, which is different from understanding. It sees a chain of words often enough that it becomes statistically the next best output given some input (with some randomness in between).
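
To make that concrete, here is a toy Python sketch (purely illustrative; real LLMs learn a far richer conditional distribution with a neural network over tens of thousands of tokens, not a hand-built table of counts):

    import random
    from collections import Counter, defaultdict

    # Toy illustration of "remembering patterns of words": count which word
    # tends to follow which in a tiny corpus, then generate by repeatedly
    # sampling a likely next word. The counts are the "statistics"; the
    # sampling is the "randomness in between".
    corpus = "the cat sat on the mat . the dog sat on the rug .".split()

    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1  # how often `nxt` appears right after `prev`

    def next_word(prev):
        counts = follows[prev]
        words = list(counts)
        weights = list(counts.values())
        return random.choices(words, weights=weights)[0]  # weighted random pick

    word, output = "the", ["the"]
    for _ in range(6):
        word = next_word(word)
        output.append(word)
    print(" ".join(output))  # e.g. "the dog sat on the mat ."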

👤 bwb
Do you mean current A.I.? Say ChatGPT?

If so... current AI doesn't read. It is just a mechanical parrot. It looks at the words in a book and learns how to determine the probability of word responses. That is it. It isn't learning anything, it isn't even reading anything.

Reading a book teaches a human how to think, opens their mind to new ideas and connections, and so much more.

Or do you mean a general AI and something in the future out of a science fiction book with sentience?


👤 inphovore
Hello, I am a human who tells everyone to START by reading philosophy books.

Here’s the difference…

Your modern “AI” is a linguistic hologram (potential wave fronts) resolving into your perspective via the implementation’s virtual machine (my distillation).

I call this level of technology feature-driven automated statistics, though these offerings continue to impress and improve in iteratively new ways. Who knows how long Truth even stays the truth anymore?

Your brain is also a holographic rendering (millions of composites, probably).

The purpose of reading those books is that YOUR holographic familiarity with the intellectual tradition thus far is enhanced, ETCHED by your internal experience of these tellings.

You START by catching up on 10,000 years of human thought and recorded experience. THEN your brow will be raised above that of your fellow, less “traveled” human.

And that’s it!

What does philosophy teach you?

Be happy poor, or know the world wants to eat and shit you, become a strategist by keeping your mouth shut and securing your own independence. (Or something otherwise only your life can compute.)

The end!


👤 tlb
I think an AI can learn everything it needs from books and other text. But others claim that it can never understand qualia like the feel of grass between your toes or the sensation of the color red without some direct experience.

Does an AI really need direct experience of such things to be useful in the world? Or can it just take our (written) word that grass feels good between your toes and make any grass-toe related decisions on that evidence? Assuming we're just interested in having AI solve useful problems for us like curing cancer, I don't see the need for it.

It could still be that some important facts about people are missing from the literature, because we consider them too obvious to write down. That seems unlikely to me, given the amount of psychological research on trivial factors. But if we find a system making mistakes, we can probably add some written explanations to the training set and it'll get fixed.


👤 kypro
All this speculation about what an AI like ChatGPT can or can't do is largely unproductive, because the truth is unknown. We have no idea what ChatGPT understands or how comparable whatever it's doing is to human understanding. There are a lot of reasons to think it's not the same, but there are also a lot of reasons to suspect that the neural nets of AIs like ChatGPT are not just operating at the level of next-token prediction.

There's at least some good reasons to believe that through its training ChatGPT has acquired high-level models of our world such that it "understands" how to make accurate predictions.

But going further, I'd argue it also seems to have learnt how to reason. For example, you can ask it logic puzzles and it will often get the answer right and explain its reasoning. Some will argue that it's not "real reasoning", which could be true, but we really don't know this with any certainty. All we know with certainty is that the output in many cases appears to have some of the characteristics of reasoning and understanding.

Ask yourself, does a fish "understand" how to swim? If you asked a fish to explain how it swims, it couldn't do it. So a fish can swim, but a fish doesn't understand how to swim. So what does it even mean for an AI to be able to read, but not understand how to read? Is it just that a fish doesn't understand how to swim the way a human understands things? Does this distinction even matter?

To summarise the point I'm trying to make here: there are enough gaps in our knowledge and evidence to suggest that there is likely some amount of understanding and reasoning happening, and that it would be arrogant to insist otherwise.

But I suppose, to more directly answer your question: ChatGPT certainly doesn't "learn" in any meaningful way from reading. AIs like ChatGPT simply don't have the ability to remember things, so they physically cannot learn from reading. It might understand and reason about what it reads, but it cannot learn from it. That is assuming you're talking about the deployed model and not the "reading" it does during training.


👤 mr_mitm
Not that I'm an AI expert, but it occurred to me that humans actually don't learn simply by reading books. You can read a hundred math books, but you will still suck at math if you don't do the exercises. Same with most other disciplines. I believe there is a value in exercising, making mistakes, asking for guidance, and learning by going through a feedback loop. This process has not been written down or recorded, so AI can't benefit from it. It has to do the exercises, too. There is a lot of knowledge (you could call it "experience") that isn't contained in books.

👤 thenerdhead
An AI is not going to be able to interpret the meaning of Jungian archetypes or the dilemma of Kierkegaard’s Either/Or. There isn't just one interpretation; it depends on both your knowledge and your wisdom.

You can use an AI to help supplement your knowledge of those things, but wisdom is only obtained through experience, reflection, etc., something an AI will never be capable of. As famously said by Socrates, “Wisdom begins in wonder”. An AI can surely sound very wise, but to stop reading because of this makes you susceptible to becoming reliant on it. Sometimes you need to trust your own instincts and inner voice.


👤 adamquek
One of the most important aspects of reading is reflection and imagination. Since reading is tedious, it is difficult to remain focused on the page or the words; you have to deal with your own thoughts, direct your attention to the words and to how they connect with your preconceived ideas, or let your mind wander before getting back into it. Reading is thus essentially a meditative activity.

So to equate reading with mere knowledge acquisition is to miss the point of reading. And if the end result is all one cares about, by all means, use AI.


👤 reliableturing
Interesting question indeed! There isn't much of a consensus on this, as you can see from the other comments. Nonetheless, I spend a lot of time thinking about this, so I'd like to take a jab at it as well.

I think it partially has to do with the concepts of modality and grounding. A modality is a channel through which information can be conveyed. You probably learned early on about the 5 human senses: vision, hearing, taste, smell and touch. The grounding problem refers to the fact that any symbols (read: language) we use usually refer to perceptions in one or more of these modalities. When I write "shit", you can probably imagine a representation of it in different modalities (don't imagine them all!).

Interestingly, large language models (such as ChatGPT) don't have any of these modalities. Instead, they work directly on the symbols we use to communicate meaning. It's quite surprising that it works so well. An analogy that helps here: asking an LLM anything is much like asking a blind person what the sun looks like. Obviously they cannot express themselves in terms of vision, but they could say that it feels warm and maybe even light because it doesn't make any noise. It would be a good approximation, and they would be referring to the same physical phenomenon, but that's all it is, an approximation. They could say it's a large yellow/white-ish circle if they heard this from someone else before, but since the blind person cannot see, they have no 'grounded truth' to speak from. If the sun suddenly turned red, they would probably repeat the same answer. My point being: you can express one modality in another, but it'll always be an approximation.

What's interesting is that the only 'modality' of these LLMs is language, which is the first of its kind, so we don't know what to expect from this. In a sense, LLMs are simply experiments answering the question "what would a person that could only 'perceive' text look like?". Turns out, they're a little awkward. Obviously there's much more to the story (reasoning, planning, agency, etc.), but I think this is fundamental to your question of why reading is not the same for humans and AIs (LLMs): LLMs have such a limited and awkward modality that any understanding can only be an approximation of ours (albeit a pretty good one), so learning from reading will be quite different as well.

Hope this helps your understanding.


👤 pizza234
ChatGPT itself has a sensible answer:

There may be a few reasons for this apparent contradiction. Firstly, some people may not fully understand how artificial intelligence (AI) works and how it learns. While AI can certainly learn from large amounts of text data, it is still limited in its ability to comprehend and apply that knowledge in the same way that humans can.

Secondly, there may be a misunderstanding about what is meant by "intelligence". While reading philosophy books can certainly enhance one's knowledge and critical thinking skills, intelligence is a complex trait that encompasses many different abilities, including problem-solving, creativity, emotional intelligence, and more. AI may excel in certain areas, such as data processing and pattern recognition, but it still falls short in many other areas that are critical to human intelligence.

Finally, there may be a cultural bias against the idea of machines possessing intelligence or knowledge. For many people, the concept of intelligence is closely tied to human consciousness and subjective experience, and they may find it difficult to accept that a machine could ever truly possess these qualities.

Overall, the apparent contradiction between the value of reading philosophy books and the limitations of AI may stem from a combination of misunderstanding, narrow definitions of intelligence, and cultural biases. It is important to approach these issues with an open mind and a willingness to explore the possibilities of both human and machine intelligence.


👤 dwighttk
If you were going to assess the learning of a person who just read a philosophy text, would you

A) give them a multiple choice test

Or

B) ask them questions and have a discussion

?

ChatGPT is okay, maybe even above average, at A, but if a person gave me the answers it regularly gives for B, I would assume that person didn't understand what they were reading.


👤 kif
The difference is that ChatGPT, having been trained on almost all the knowledge in the world, still can't get a lot of simple things right.

If a human had memorised the same amount of knowledge, it would be a different story.


👤 beepbooptheory
Has anybody prompted GPT with the Sokal Hoax and seen what it thinks of it? I'm sure it would gladly read it, find some valuable things to take away, and become that much more intelligent.

👤 julienreszka
While both humans and AI can learn from books, they may differ in the type of knowledge and intelligence they acquire and in how they apply it.

👤 latexr
You don’t “obtain intelligence” from reading philosophy books. You don’t get it from reading any book; you need to think about and understand the ideas you were exposed to.

Reading is but the first step, one the current crop of AI is stuck at. You don’t understand a concept unless you can generalise and abstract it to the point that you can accurately apply it with different parameters and recognise correct and incorrect uses of it.

GPT does none of that. Ask it a simple arithmetic question and it will parrot what it has seen before. Ask it something it has never seen and it will have no idea how to solve it. Yet it won’t even tell you it doesn’t know, it’ll just make something up.

Same thing with philosophy or any other discipline. If you don’t speak Esperanto but have read and memorised a book written in it to the point you can recite it from memory, have you really learned what’s in the book? No, you have not.

I really wish people would think a bit harder about what they read instead of being dazzled by anything that looks impressive. We should’ve learned our lesson with the large rise of misinformation and scams in recent years, but it seems all we learned is that people are and will continue to be easy to fool. It’s a golden time to be a bad actor.


👤 JaDogg
What are these books you speak of? I would like to read them. Thanks in advance.

👤 Brian_K_White
I say 'hello'; an mp3 player does too. What's the difference?

👤 noob_eng
Humans read books; AI doesn't "read" books. Whatever it does isn't reading, because humans themselves don't understand what the process of reading involves, so they can't build something that reads. Simple.