HACKER Q&A
📣 platforms72

How to develop a first-principles understanding of CS


Hi HN,

Current sophomore studying CS -- I've been thinking about how to get a deeper understanding of what I'm studying, and how to self-study the topics I'm interested in more intentionally.

Sometimes, it feels like I don't remember the content I learned in a class the quarter after I took it. I'll certainly recall general ideas/concepts, but I feel I'm missing two things -- first, a very strong sense of how to apply what I learn, and second, some of the important minutiae (is big endian stored back to front or front to back? how do I structure an inductive proof? etc.).
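(As a concrete example of the kind of detail I mean: the endianness question is one you can settle empirically. Here's a minimal C++ sketch I put together -- the names are my own, purely for illustration -- that inspects the bytes of a known integer:)

    // Check endianness by looking at how a known 32-bit value is laid out.
    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    int main() {
        std::uint32_t value = 0x01020304;
        unsigned char bytes[sizeof(value)];
        std::memcpy(bytes, &value, sizeof(value));
        // Little-endian machines store the least significant byte first,
        // so bytes[0] is 0x04; big-endian machines store 0x01 first.
        std::printf("first byte in memory: 0x%02x (%s-endian)\n",
                    bytes[0], bytes[0] == 0x04 ? "little" : "big");
        return 0;
    }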

These are things I can look up quite quickly, and they aren't the best examples of what I should be optimizing for if I'm looking to work in a traditional software engineering context, but I feel like I should know them offhand. I've looked into spaced repetition for memorizing certain things, but I feel like that doesn't help with what I'm seeking here, which is a foundational/first-principles understanding of CS.

This was all somewhat prompted by this comment on reddit -- https://www.reddit.com/r/computervision/comments/7gku4z/comment/dqkkbd9/?utm_source=share&utm_medium=web2x&context=3. I was daunted by those questions, but it feels like I could answer them if my understanding of certain things (e.g. integer overflow) went beyond a surface level.
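To make that last example concrete, here's a small sketch (mine, not from the linked comment) of the two overflow behaviors that usually trip people up in C++ -- unsigned wraparound, which is well defined, and signed overflow, which is undefined behavior:

    #include <cstdint>
    #include <cstdio>
    #include <limits>

    int main() {
        // Unsigned arithmetic wraps modulo 2^32: max + 1 == 0.
        std::uint32_t u = std::numeric_limits<std::uint32_t>::max();
        std::printf("%u + 1 = %u\n", static_cast<unsigned>(u),
                    static_cast<unsigned>(u + 1));

        // Signed overflow is undefined behavior, so compute in a wider
        // type instead of letting INT32_MAX + 1 happen in 32 bits.
        std::int64_t wide =
            std::int64_t{std::numeric_limits<std::int32_t>::max()} + 1;
        std::printf("INT32_MAX + 1 = %lld (computed in 64 bits)\n",
                    static_cast<long long>(wide));
        return 0;
    }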

My asks are these -- first, do you have any recommended books/resources that did a lot for you in developing a foundational understanding of CS? Second, how should I go about intentionally filling gaps in my understanding? And third, how can I get better at applying what I've learned to a new problem/setting?

Apologies in advance for the long, rambly post. I truly appreciate you reading this; please let me know if you have any clarifying questions.


  👤 JPLeRouzic
Most people are very practical: they tackle one problem at a time. Otherwise you will probably fail, because you can't achieve a goal nobody has achieved. There are few geniuses who can embrace large and mostly disconnected topics.

That said, in my time (long ago) this was a standard reference:

https://en.wikipedia.org/wiki/The_Art_of_Computer_Programmin...

(and I think the title is important: CS in professional life is an art or a craft, not a science)


👤 brudgers
If you're serious,

https://www-cs-faculty.stanford.edu/~knuth/taocp.html

will last you a lifetime.

Good luck


👤 dswilkerson
The real question is how to learn the basics so thoroughly that they become part of you, available without thought. The answer is that when you learn something, either write it up yourself or code it up yourself. Even just copying is a great way to get the techniques for making an artifact into your head.

Ben Franklin did this, as he recounts in his autobiography: he would copy writing that he liked, and he recommended the practice. Richard Feynman did this: he would write up his own version of a technique, effectively writing his own chapters of a physics/math textbook. Steve Wozniak did this: he would design his own machines when he was a teenager; he was in Silicon Valley, so he had neighbors who were engineers and could get computer designs from them; today you would use open source. You of course know all of their names for a reason: they were all very successful.

When I wanted to learn C++, I already knew C and Java. I tried reading Stroustrup's "The C++ Programming Language", but it was not working: I was having the experience you are describing of not really having it within me; I could not think in C++. Note that it was not the concepts that I lacked, as C and Java between them have most of the concepts of C++. Then I tried doing all of the exercises in the book, but clearly no one had ever done this, as they were pretty badly done: some easy and tedious, some really hard and pointless. Then I arrived at a plan that worked: for every feature, I wrote a program that used that feature, I wrote a test, and I compiled it with two different C++ compilers. If the book said you can throw an int, I threw and caught an int; I have never done that since, but I got it to compile and run. One surprising thing to me was that many programs compiled under one compiler and not the other, and when they both did compile, they often did different things (what happens if you put a print statement into a copy constructor? what should happen?). I did this for 10 hours a day for 2 weeks straight. One morning I woke up and I could think and code in C++.
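To give a flavor of what those one-feature programs looked like, here is a small reconstruction in that spirit (not one of my originals) combining two of the cases above -- throwing an int, and a print statement in a copy constructor:

    #include <cstdio>

    struct Tracer {
        Tracer() { std::puts("default ctor"); }
        Tracer(const Tracer&) { std::puts("copy ctor"); }
    };

    Tracer make() { return Tracer{}; }

    int main() {
        try {
            throw 42;            // yes, you can throw a plain int
        } catch (int e) {
            std::printf("caught %d\n", e);
        }
        // How many "copy ctor" lines print? C++17 elides this copy by
        // rule; older compilers varied, which is exactly the kind of
        // divergence between compilers I kept running into.
        Tracer t = make();
        (void)t;
        return 0;
    }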

We have a culture that emphasizes generalities; however, understanding and solving problems is done using _examples_. Feynman said it; the mathematician Paul Halmos said it. When I was a grad student at Berkeley, Karp asked the class: when you are teaching something to someone, do you start with the general idea or a specific example? We all said to start with the general idea. He said wrong: start with a specific example. I also studied math in Hungary, where they use the famous Hungarian method of teaching math: American math textbooks are 10 pages of theory and then 3 pages of exercises; at least one Hungarian math textbook I have is 3 pages of theory and then 20 pages of exercises.

We pretend that we understand the abstract as the abstract, but what your brain is really doing is using the abstract to summarize many specific examples. By doing examples you are creating the nodes in your head on which abstract words can be hung. Recent cognitive science suggests that there is no such thing as purely abstract thought, only metaphor grounded in embodied experience. One of these researchers is Jerome Feldman, computer scientist and cognitive scientist. I got to sit in on his class at Berkeley, but you can read the book here: https://mitpress.mit.edu/9780262562355/from-molecule-to-meta...

Having spent all of graduate school on theory, when I wanted to become a systems programmer I needed to learn a lot about systems. I was coding full-time, but I had enough time to sit in on the undergraduate systems course at Berkeley. The first semester I just listened; much of it sounded disconnected, I was constantly surprised, and I had trouble getting the big picture. I sat in on the same course the next semester, and I found I was following all of the lectures; it was all connected and made sense. I sat in on the same course for a third semester and found myself predicting what the lecturer would say before he said it: I would say to myself "next he is going to say this," and then he would. Enough repetition worked quite well to give me the big picture. You can always sit in on a class before you take it, or re-attend a class you already took. You can also volunteer to be a teaching assistant for a class you took.

Many, including Feynman, have said that if you cannot explain it to a high-school student, you do not understand it. I agree: what you write up should start at that level. Explain everything to yourself as if you have never heard of any of it.

Doing this is a lot of work, but there is no substitute. Anything involving computing can be turned into a program, so I recommend that you do exactly that: implement the tricky algorithms and measure their performance. See my reply here for my advice on learning how to code: https://news.ycombinator.com/item?id=34854488#34855768
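As a sketch of what "implement and measure" can look like (the specifics here are mine, just for illustration), a minimal timing harness in C++:

    #include <algorithm>
    #include <chrono>
    #include <cstdio>
    #include <random>
    #include <vector>

    int main() {
        // Fill a vector with a million pseudo-random ints.
        std::mt19937 rng(42);
        std::vector<int> data(1'000'000);
        for (int& x : data) x = static_cast<int>(rng());

        // Time the algorithm under test; swap in your own implementation
        // here to compare it against the library's.
        auto start = std::chrono::steady_clock::now();
        std::sort(data.begin(), data.end());
        auto stop = std::chrono::steady_clock::now();

        auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                      stop - start).count();
        std::printf("std::sort on 1e6 ints: %lld ms\n",
                    static_cast<long long>(ms));
        return 0;
    }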

In sum: your brain has at least two distinct parts, one for recognizing and one for generating. When you are reading or listening, you are recognizing, not generating. Your brain gets good at what you practice, so if you want to learn to generate, you need to practice generating. Further, learning happens in the specific, not the abstract. Doing specific examples creates the nodes in your head to which abstract language can later refer. So do that.