The majority of people hold the belief that reading and understanding words cannot equip an AI with knowledge and intelligence. But that is exactly how we learn.
Why is there this contradiction?
You are asking people who are not smart enough to recognize that they are not informed enough, nor in most cases smart enough, to understand anything about the subject of your question, and they will be quite happy to answer. With endless rationalizations that seem plausible enough to them based on their prior associative model of the world. Just like an LLM.
It is difficult to identify the practical difference in output between LLMs and the "left brain interpreter", which is the source of almost the entirety of most people's subjective experience and their "comments on HN" output.
There is a difference between the spewing forth of gibberish, which is the normal human experience, and "intelligence" or "thought". If you haven't yet experienced that difference yourself, within your own lived experience, it would probably be impossible to convey to you in writing... but there is one.
On a functional level, as well, it's simply unlikely that the models constructed to date have the capacity to develop what we would call "intelligence" via training. We'll almost certainly get there, but just feeding "books" into what we have today is not likely to produce "intelligence".
It's like saying 'it's just atoms and molecules bouncing off each other and sometimes splitting or joining'; what do you mean, 'storms, clouds, rain', or 'collections of them can make copies of themselves', etc.?
It's Searle's old Chinese Room argument: that the machine doesn't really understand the words it's manipulating.
The standard argument (or maybe just my argument) against the Chinese Room is that if the 'symbols' are at a lower level (audio, video stimuli, etc.), and you have enough scale, then you can imagine intelligence emerging somehow, like flocking in birds.
LLMs are trained at the language level, but even then, I think with big enough scale they do in fact 'learn' concepts and form a 'world model' from the material they are trained on. How else could you ask for something in one language and have it output the result in another?
In some ways we can be grateful. Imagine trying to impose ethics on an 'intelligent insect', trained on much lower level inputs. These LLMs are being trained directly on human culture as input, and if we're careful with the input they will reflect the best of it.
But here are some of my views on the topic.
- ChatGPT tends to please the prompter. If you ask it for philosophical content and start correcting it... it will yield to your views and make it so you are happy with what it tells you. Most philosophers I've read (admittedly few) would die on a hill for their ideas and write so as to defend them from as many attacks as possible. What ChatGPT does is closer to what a salesman would do: tell you what you want to hear. If a human did that to me, I'd rarely consider it proof of an ability to learn or to know.
- ChatGPT tends to write good-sounding philosophy. Maybe philosophers are taking us for fools and they're also just writing word salad half the time that sounds smart to the rest of us.
- ChatGPT tends to write good-sounding philosophy/stories/sometimes code. Maybe it's just hard to differentiate knowledge from semi-random output.
If so... current AI doesn't read. It is just a mechanical parrot. It looks at the words in a book and learns to estimate the probability of one word following another. That is it. It isn't learning anything, it isn't even reading anything.
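To make "estimating the probability of one word following another" concrete, here is a deliberately tiny sketch in Python. It is a bigram counter, nothing like the neural network inside a real LLM, and the text and names are made up for illustration, but it shows what "learning which word tends to follow which" looks like mechanically:

    # Toy picture of "learning word probabilities": count which word tends to
    # follow which, then sample the next word from those counts. Real LLMs use
    # a neural network over subword tokens, not a lookup table of counts, but
    # the training objective is the same flavour: predict the next token.
    import random
    from collections import Counter, defaultdict

    text = "the cat sat on the mat and the cat slept on the mat"
    words = text.split()

    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1          # "learning" = counting co-occurrences

    def next_word(prev):
        counts = follows[prev]
        # sample in proportion to how often each continuation was seen
        return random.choices(list(counts), weights=list(counts.values()))[0]

    print(next_word("the"))  # likely "cat" or "mat"; no meaning is involved

Nothing in that table knows what a cat or a mat is; it only knows which strings tend to come next.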
Reading a book teaches a human how to think, opens their mind to new ideas and connections, and so much more.
Or do you mean a general AI, something out of a science fiction book in the future, with sentience?
Here’s the difference…
Your modern “AI” is a linguistic hologram (potential wave fronts) resolving into your perspective via the implementation’s virtual machine (my distillation).
I call this level of technology feature-driven automated statistics, though these offerings continue to impress and improve in iteratively new ways. Who knows how long Truth even stays the truth anymore?
Your brain is also a holographic rendering (millions of composites, probably).
The purpose of reading those books is that YOUR holographic familiarity with the intellectual tradition thus far is enhanced. ETCHED by your internal experience of these tellings.
You START by catching up on 10,000 years of human thought and recorded experience. THEN your brow will be raised above that of your fellow, less “traveled” human.
And that’s it!
What does philosophy teach you?
Be happy poor, or know the world wants to eat and shit you, become a strategist by keeping your mouth shut and securing your own independence. (Or something otherwise only your life can compute.)
The end!
Does an AI really need direct experience of such things to be useful in the world? Or can it just take our (written) word that grass feels good between your toes and make any grass-toe related decisions on that evidence? Assuming we're just interested in having AI solve useful problems for us like curing cancer, I don't see the need for it.
It could still be that some important facts about people are missing from the literature, because we consider them too obvious to write down. That seems unlikely to me, given the amount of psychological research on trivial factors. But if we find a system making mistakes, we can probably add some written explanations to the training set and it'll get fixed.
There are at least some good reasons to believe that through its training ChatGPT has acquired high-level models of our world, such that it "understands" how to make accurate predictions.
But going further, I'd argue it also seems to have learnt how to reason. For example, you can ask it logical puzzles and it will often get the answer right and explain its reasoning. Some will argue that it's not "real reasoning", which could be true, but we really don't know this with any certainty. All we know with certainty is that the output in many cases appears to have some of the characteristics of reasoning and understanding.
Ask yourself, does a fish "understand" how to swim? If you asked a fish to explain how it swims, it couldn't do it. So a fish can swim, but a fish doesn't understand how to swim. So what does it even mean for an AI to be able to read, but not understand how to read? Is it just that a fish doesn't understand swimming the way a human would? Does this distinction even matter?
To summarise the point I'm trying to make here: there are enough gaps in our knowledge and evidence to suggest that there is likely some amount of understanding and reasoning happening, and that it would be arrogant to suggest otherwise.
But I suppose, to more directly answer your question: ChatGPT certainly doesn't "learn" in any meaningful way from reading. AIs like ChatGPT simply don't have the ability to remember things, so they physically cannot learn from reading. It might understand and reason about what it reads, but it cannot learn from it. That is assuming you're talking about the deployed models and not the "reading" they do during training.
You can use an AI to help supplement your knowledge of those things, but wisdom is only obtained through experience, reflection, etc., something an AI will never be capable of. As Socrates famously said, "Wisdom begins in wonder." An AI can surely sound very wise, but to stop reading because of this makes you susceptible to becoming reliant on it. Sometimes you need to trust your own instincts and inner voice.
So to equate reading with mere knowledge acquisition is to miss the point of reading. And if the end result is all one cares about, by all means, use AI.
I think it partially has to do with the concepts of modality and grounding. A modality is a channel through which information can be conveyed. You probably learned early on about the 5 human senses: vision, hearing, taste, smell and touch. The grounding problem refers to the fact that any symbols (read: language) we use, usually refers to perceptions in one or more of these modalities. When I write "shit", you can probably imagine a representation of it in different modalities (don't imagine them all!).
Interestingly, large language models (such as ChatGPT) don't have any of these modalities. Instead, they work directly on the symbols we use to communicate meaning. It's quite surprising that it works so well. An analogy that helps in understanding this: asking an LLM anything is much like asking a blind person what the sun looks like. Obviously they cannot express themselves in terms of vision, but they could say that it feels warm, and maybe even that it is light because it doesn't make any noise. It would be a good approximation, and they would be referring to the same physical phenomenon, but that's all it is, an approximation. They could say it's a large yellow/white-ish circle if they heard this from someone else before, but since the blind person cannot see, they have no 'grounded truth' to speak from. If the sun suddenly turned red, they would probably repeat the same answer. My point being: you can express one modality in another, but it'll always be an approximation.
What's interesting is that the only 'modality' of these LLMs is language, which is the first of its kind, so we don't know what to expect from it. In a sense, LLMs are simply experiments answering the question "what would a person that could only 'perceive' text look like?" Turns out, they're a little awkward. Obviously there's much more to the story (reasoning, planning, agency, etc.), but I think it's fundamental to your question why reading for humans and for AIs (LLMs) is not the same: LLMs have such a limited and awkward modality that any understanding can only be an approximation of ours (albeit a pretty good one), hence learning from reading will be much different as well.
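A small, hedged illustration of that "symbols only" point: assuming you have OpenAI's tiktoken package installed (the encoding name below is the one used by recent OpenAI models, but treat the specifics as an assumption), you can see that what the model actually receives is a list of integer IDs, with no pixels, sound, or warmth attached:

    # What an LLM actually "perceives": integer token IDs, nothing else.
    # Requires: pip install tiktoken
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    ids = enc.encode("The sun is a large yellow/white-ish circle.")
    print(ids)               # just a list of numbers
    print(enc.decode(ids))   # round-trips back to the original text

    # No ID carries warmth, brightness or colour; anything the model "knows"
    # about the sun has to come from statistical patterns over such sequences.

In other words, its entire 'sensorium' is that stream of IDs, which is exactly the blind-person-describing-the-sun situation above.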
Hope this helps your understanding.
There may be a few reasons for this apparent contradiction. Firstly, some people may not fully understand how artificial intelligence (AI) works and how it learns. While AI can certainly learn from large amounts of text data, it is still limited in its ability to comprehend and apply that knowledge in the same way that humans can.
Secondly, there may be a misunderstanding about what is meant by "intelligence". While reading philosophy books can certainly enhance one's knowledge and critical thinking skills, intelligence is a complex trait that encompasses many different abilities, including problem-solving, creativity, emotional intelligence, and more. AI may excel in certain areas, such as data processing and pattern recognition, but it still falls short in many other areas that are critical to human intelligence.
Finally, there may be a cultural bias against the idea of machines possessing intelligence or knowledge. For many people, the concept of intelligence is closely tied to human consciousness and subjective experience, and they may find it difficult to accept that a machine could ever truly possess these qualities.
Overall, the apparent contradiction between the value of reading philosophy books and the limitations of AI may stem from a combination of misunderstanding, narrow definitions of intelligence, and cultural biases. It is important to approach these issues with an open mind and a willingness to explore the possibilities of both human and machine intelligence.
How do you check whether someone understood what they read? Do you:
A) give them a multiple choice test
Or
B) ask them questions and have a discussion
?
ChatGPT is okay, maybe even above average at A, but if a person gave me the answers it regularly gives for B I would assume that person didn’t understand what they were reading.
If a human had memorised the same amount of knowledge, it would be a different story.
Reading is but the first step, and one the current crop of AI is stuck at. You don't understand a concept unless you can generalise and abstract it to the point where you can accurately apply it with different parameters and recognise correct and incorrect uses of it.
GPT does none of that. Ask it a simple arithmetic question and it will parrot what it has seen before. Ask it something it has never seen and it will have no idea how to solve it. Yet it won't even tell you it doesn't know; it'll just make something up.
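To make that concrete, here is the kind of quick probe I mean, as a hedged sketch: it assumes the openai Python package (version 1.x) with an API key in the environment, and the model name is only a placeholder, so adjust both to whatever you actually use. The idea is simply to compare the model's answer on a large multiplication it has almost certainly never seen verbatim against Python's exact arithmetic:

    # Probe generalisation: ask for a product that is very unlikely to appear
    # verbatim in the training data, then check it against exact arithmetic.
    # Assumes: pip install openai (>= 1.0) and OPENAI_API_KEY set in the
    # environment; the model name below is a placeholder.
    import random
    from openai import OpenAI

    client = OpenAI()
    a, b = random.randint(10**6, 10**7), random.randint(10**6, 10**7)

    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder; swap in the model you're testing
        messages=[{
            "role": "user",
            "content": f"What is {a} * {b}? Reply with only the number.",
        }],
    )
    answer = reply.choices[0].message.content.strip().replace(",", "")

    print("model :", answer)
    print("exact :", a * b)
    print("match :", answer == str(a * b))  # often False at this size

Run it a few times; the interesting part is not the wrong digits but that the model states them with total confidence.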
Same thing with philosophy or any other discipline. If you don’t speak Esperanto but have read and memorised a book written in it to the point you can recite it from memory, have you really learned what’s in the book? No, you have not.
I really wish people would think a bit harder about what they read instead of being dazzled by anything that looks impressive. We should’ve learned our lesson with the large rise of misinformation and scams in recent years, but it seems all we learned is that people are and will continue to be easy to fool. It’s a golden time to be a bad actor.