He came back a few days later and said that ChatGPT knew what was in the book - was it okay if he just read the summary?
I wasn’t sure what to say. It’s probably true that ChatGPT can summarize all the main points from the book. But it has always been easy to find the key points from books online. The hard part of being a manager is figuring out how to take the obvious instructions and act on them consistently.
Maybe some people can do that just by reading the summary. For me, though, reading the whole book is important. I find myself thinking back to the examples used to illustrate the points. And I find that encountering ideas repeated in different ways as I read the book helps make them part of my mental framework. I read lots of interesting ideas in quick articles, but they rarely stick with me unless there is a specific translation to action.
I ended up telling my colleague that it was up to him to decide how to learn best. If it were me, I'd need the book. But he needs to know his own learning system.
Short forms have always been available, be it blog posts, Wikipedia articles, CliffsNotes, or other such things. Books survive because source material is needed to generate all of those other things, and those short-form versions don’t cut it for everyone. I don’t see LLMs as any different.
A book can tell you something you didn’t know. With an LLM you need to know enough to ask.
You will have shallower knowledge than the person who reads good source content.
Do you want broad but very shallow knowledge?
I’d much rather cast a deep net that AI slop can’t touch.
I say this as someone who has used most LLM tools. They are tools, not replacements. And they are remarkably shallow but great at appearing “magical.”
I do not use it myself because I am a researcher and I often ask questions that don't have a lot of "training data" yet. And even if an area is well covered in terms of "training data," often there is a lot of "know-how" that really isn't written down in an easily digestible form. It is passed on verbally or through examples in person. So the idea that the "training data" is complete is also not true in general.
Many other people in this thread have already covered that books are much more structured and organized than any answer generative AI gives you. Let me discuss another reason why books still matter. Books can give you a wider view than the "consensus" that something like ChatGPT gives you. I know a lot of books in my field that derive results in different ways, and I often find value in these different approaches. Moreover, suppose that only one book answers the question you want answered while others gloss over that subject. Generative AI likely will not know precisely what one random book said on the subject, but if you were searching through multiple books on the subject yourself, you likely would pick up on this difference.
Relevant Paul Graham quote [1]:
> We can't all use AI. Someone has to generate the training data.
'Ten things to know about being a manager' and similar aren't specialist books.
What’s with this bloody obsession of killing other products and industries? Every time someone farts in tech, everyone starts shouting that it just killed something else. Calm down. Relax a little bit and get some perspective. You’re drowning yourself in the Kool-Aid.
LLMs did not kill the book industry, just like Bitcoin did not kill the world’s financial system.
The feel of quality paper.
The way the spine cracks when you first open a book.
The way the spine creases after you've read a book a few times.