If you are someone from that gang, what did you learn from SICP that really caused this transformation? What did it do differently than other programming languages or CS books?
Have you had similar experiences with other books (Lisp based or not)?
Perhaps the question needs the qualifier of whether you've encountered lisp before or not.
> ... What did it do differently than other programming languages ...
Ah. I don't think anyone would ask this if they knew lisp.
At the very least, you can approach languages like JavaScript, C, C++, Python, etc. with broadly the same mindset: you're writing sequential statements that essentially modify structures in place. Especially with C, it's easy to imagine some kind of machine executing the statements. (Obviously, exact idioms vary between languages.)
Whereas with Lisp in the style SICP teaches, you don't modify structures in place, and you're not writing code as a sequence of statements.
An example I still think is kind of neat from SICP was writing the same functionality both as a 'low-level' recursive function and by making use of higher-order functions to achieve the same thing.
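A minimal sketch of that contrast (my own example, not taken verbatim from the book): summing the squares of a list, first with explicit recursion, then with map plus SICP's accumulate, which I've reproduced from memory.

    ;; "Low-level" explicit recursion:
    (define (sum-squares lst)
      (if (null? lst)
          0
          (+ (* (car lst) (car lst))
             (sum-squares (cdr lst)))))

    ;; SICP's accumulate, i.e. a right fold (from memory):
    (define (accumulate op initial sequence)
      (if (null? sequence)
          initial
          (op (car sequence)
              (accumulate op initial (cdr sequence)))))

    ;; The same functionality via higher-order functions:
    (define (sum-squares* lst)
      (accumulate + 0 (map (lambda (x) (* x x)) lst)))

    (sum-squares  '(1 2 3))   ; => 14
    (sum-squares* '(1 2 3))   ; => 14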
> Have you had similar experiences with other books
I liked the ideas suggested in "A Philosophy of Software Design". It really emphasised the difference between an interface and its implementation, and how complexity arises when you need to know more than the interface provides, or when the interface requires more than it needs to.
If you want your mind blown, read "The Little Schemer" (or the original "The Little Lisper," or another of its variants). I particularly like the section near the end where the Y combinator is (secretly) described. The goal of that book is to "teach you to think recursively." It's not about writing recursive functions at work, but about appreciating the self-referential properties of algorithms.
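For the curious, this is roughly the shape of what the book sneaks up on - an applicative-order Y combinator (a sketch from memory, not the book's exact presentation), which gets you recursion without ever naming the function you're defining:

    ;; Applicative-order Y combinator (sometimes called Z):
    (define Y
      (lambda (f)
        ((lambda (x) (f (lambda (v) ((x x) v))))
         (lambda (x) (f (lambda (v) ((x x) v)))))))

    ;; Factorial with no self-reference by name:
    (define fact
      (Y (lambda (self)
           (lambda (n)
             (if (zero? n) 1 (* n (self (- n 1))))))))

    (fact 5)   ; => 120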
It builds from first principles (among other things):
* (the core) the lisp metacircular evaluator (sketched below)
* computer algebra
* logic programming
* a register machine
* a compiler targeting that register machine
(plus lazy evaluation, streams vs. lists, concurrency, and more)
I can't think of any other book in the undergraduate canon with that kind of breadth.
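To give a flavor of the first item, the core of such an evaluator fits on a page. Here is a heavily stripped-down sketch (my own names and representation, not the book's code) handling numbers, variables, quote, if, lambda, and application:

    (define (my-eval expr env)
      (cond ((number? expr) expr)
            ((symbol? expr) (cdr (assq expr env)))   ; variable lookup
            ((eq? (car expr) 'quote) (cadr expr))
            ((eq? (car expr) 'if)
             (if (my-eval (cadr expr) env)
                 (my-eval (caddr expr) env)
                 (my-eval (cadddr expr) env)))
            ((eq? (car expr) 'lambda)                ; capture the environment
             (list 'closure (cadr expr) (caddr expr) env))
            (else                                    ; application
             (my-apply (my-eval (car expr) env)
                       (map (lambda (e) (my-eval e env)) (cdr expr))))))

    (define (my-apply proc args)
      (if (procedure? proc)                          ; host primitive
          (apply proc args)
          (let ((params (cadr proc)) (body (caddr proc)) (env (cadddr proc)))
            (my-eval body (append (map cons params args) env)))))

    ;; Example, with * supplied as a primitive:
    (my-eval '((lambda (x) (* x x)) 5)
             (list (cons '* *)))                     ; => 25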
Personally I didn't really learn any of this from SICP: I think I'd already had the whole "programs are data" epiphany from learning Haskell and reading papers before I read SICP. Still, it's really fun to see so much collected in one place.
(I think the fun part of learning new programming languages is the concepts they teach you. e.g. in my opinion there's another fun "reasoning is a syntactic operation" moment you get from learning a language like Coq.)
It's been a while since I looked at it, but I remember enjoying "Concepts, Techniques, and Models of Computer Programming" (Van Roy and Haridi) for reasons similar to why I liked SICP. (It's a little less broad, but has the same subject of ~"here are a bunch of different models of computing, and how they are all related by the same fundamentals".)
I found I got more from it the second time I read it, after I had some experience programming. The first time, I had almost no experience, got caught up in everything it was trying to teach, and missed a lot.
SICP has an extremely short description of syntax at the start, and of how it is parsed. The rest of the book is about the structure and interpretation of computer programs. You build structure by layering abstraction after abstraction using the same basic syntax for everything. You interpret by referring to the parsing order. The language fades into the background - at no point are you referring to a syntax cheat sheet; it is all the same.
Instead you think about how to reason. And how to transform that reasoning into structured programs.
Ask HN: Is SICP/HtDP still worth reading in 2023? Any alternatives? - https://news.ycombinator.com/item?id=36802579 - July 2023 (98 comments)
It, and by extension Scheme, showed me the power of lambda calculus for the first time.
Couldn't really mention much else about that book today to be honest, though I am sure it exposed me to other ideas as well. Just can't remember.
But I had forgotten many of the details. Here is just one: in 3.1.1, Local State Variables, SICP shows how to, in effect, define classes and create objects using only function definition with lambda and set! (that is, assignment) in the definition body. The function you define this way is like a class; the functions it returns are like instances. So you can do all this without any specifically object-oriented language features. This occupies just a few pages near the beginning of a long book - the whole book is dense with worked-out ideas like this.
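To make that concrete, here's the idea as I remember it (paraphrasing the book's bank-account example, so details may differ from the text):

    ;; make-account plays the role of a class; each call creates
    ;; an "instance" whose state lives in the closed-over balance.
    (define (make-account balance)
      (define (withdraw amount)
        (if (>= balance amount)
            (begin (set! balance (- balance amount)) balance)
            "Insufficient funds"))
      (define (deposit amount)
        (set! balance (+ balance amount))
        balance)
      (define (dispatch msg)               ; message dispatch = method lookup
        (cond ((eq? msg 'withdraw) withdraw)
              ((eq? msg 'deposit) deposit)
              (else (error "Unknown request" msg))))
      dispatch)

    (define acc (make-account 100))
    ((acc 'withdraw) 30)   ; => 70
    ((acc 'deposit) 10)    ; => 80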
I would add that the cumulative effect of all these examples in SICP is to demonstrate that you can solve any programming problem from first principles - that is, by combining a few simple but powerful constructs in the several ways they describe. Moreover, a solution constructed this way might be simpler and easier to understand than a solution made the more usual way: by looking for a specialized language construct or library that seems to offer an already-made solution to the problem.
(This is copied from a comment I made here several years ago: https://news.ycombinator.com/item?id=8495934)
1. It taught me to think of programming languages as constructed things, rather than just being "there" and immutable. In particular, to think about the purpose of each language feature, and separating features into core primitives vs. syntactic sugar that could be reduced to those core primitives. It becomes harder to do this for "complicated" languages like C++ and Python, because many of their language features are not for one purpose but bundle together lots of disparate ideas. But it's still worth making the effort to see what choices the designers of those languages made, and how different choices would give you different points in the space of programming languages.
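A tiny example of that primitive-vs-sugar split in Scheme itself: let is definable as an immediate lambda application, so the core evaluator never needs to know about it.

    ;; These two expressions are equivalent; let desugars to lambda:
    (let ((x 2) (y 3))
      (+ x y))                 ; => 5

    ((lambda (x y) (+ x y))
     2 3)                      ; => 5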
2. Once you have a small enough "core language", you can start thinking about algorithms that operate on programs. I had always thought of programs as expressing algorithms that operate on data, but didn't think of programs as data themselves. Of course every program is text, but in complicated languages with many constructs it seems horrendously hard to write programs that operate on other programs and change their meaning. But if your language desugars to a simple core language, the ability to transform programs opens the door to techniques like automatic differentiation of programs, probabilistic programming (like Church), and many others. These would now be called domain-specific languages, but those are often thought of as limited toys for very specific tasks. Instead, I like to think how every task has the "right" language to express it, and when you do API design you're really embedding a DSL in some "host language".
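As one small illustration of programs operating on programs - this is symbolic rather than automatic differentiation, and only a sketch in the spirit of SICP's deriv example, from memory - a function that takes an expression as a list and returns its derivative as another list:

    ;; Symbolic differentiation over list-structured expressions
    ;; (a simplified take on SICP section 2.3.2):
    (define (deriv expr var)
      (cond ((number? expr) 0)
            ((symbol? expr) (if (eq? expr var) 1 0))
            ((eq? (car expr) '+)                     ; sum rule
             (list '+ (deriv (cadr expr) var) (deriv (caddr expr) var)))
            ((eq? (car expr) '*)                     ; product rule
             (list '+
                   (list '* (cadr expr) (deriv (caddr expr) var))
                   (list '* (deriv (cadr expr) var) (caddr expr))))
            (else (error "Unknown expression" expr))))

    (deriv '(* x x) 'x)   ; => (+ (* x 1) (* 1 x)), i.e. 2x unsimplified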
Since you asked, the other big development in my approach to programming came from learning Haskell and embracing types as an elegant way of expressing universal truths about programs. Sometimes the type of a function fully determines what the function can be, like how the only function of the type (forall a) a -> a is the identity function; this idea becomes more revelatory with more complicated types.
I don't use functional programming very much in my day-to-day life, but I think learning these ideas has shaped my approach to every language. It's not really about the superficial "functional patterns" that people often think of, like using map and filter and such, which is often idiomatically wrong in many other languages.
I don't think it "blew my mind" or opened up a new world to me though, but learning how to solve problems with basic tools was fun.