HACKER Q&A
📣 iamwil

How does Common Lisp deal with side effects?


In the realm of pure functional programming, as in Haskell, side effects are managed: an effect is described as a value (the IO monad) and handed to the runtime to actually execute. This is intended to limit the complexity of side effects and keep the program purely functional.

How does Common Lisp deal with side effects? My understanding is that it's not purely functional, so you can mix and match. How come Common Lisp doesn't get bogged down in the quagmire of complexity that managing state in other programming languages ends up being? Is there a design pattern that's common? Is there some power in Lisp that enables programmers to contain it well? Or is it that Lisp programmers are usually experienced enough to contain it on their own, no matter which language they program in?


  👤 throwaway81523 Accepted Answer ✓
At the end of the day, CL is an imperative language with some functional-ish features. Most CL compilers support tail-call elimination, but the CL standard doesn't require it. LOOP (subject of many amazing rants) is idiomatic. There were/are so many different mutation/update functions that SETF was developed to give them a unified API. Idiomatic CL uses mutable arrays and CLOS objects with their setters and getters extensively. One idiomatic way to build a list is tail-first: cons each new element onto the front, then NREVERSE the result, putting the elements into the right order by destructively reversing the pointers. The other way (done with LOOP ... COLLECT, for example) is to actually append, using RPLACD (set-cdr! in Scheme, i.e. clobber the CDR of the last list item to make it point to a new one). You usually wouldn't do stuff like that in Haskell (look up Haskell DLists though, omg).
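A minimal sketch of the two list-building idioms mentioned above (the function names are made up for illustration):

```lisp
;; Idiom 1: cons onto the front, then destructively reverse with NREVERSE.
(defun squares-push (n)
  (let ((acc '()))
    (dotimes (i n (nreverse acc))
      (push (* i i) acc))))

;; Idiom 2: LOOP with COLLECT, which effectively appends at the tail.
(defun squares-loop (n)
  (loop for i below n collect (* i i)))

;; Both return (0 1 4 9 16) for n = 5.
```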

I'm not trying to bash CL by saying the above, but only note that it was designed in the early 1980s (the ANSI standard came later, in 1994) to codify Lisp practices of the 1970s, an era in which memory and compute cycles were quite expensive. Programming style reflected that, using mutable variables and bit twiddling even though it was Lisp. It was before my time, and CL has never really excited me that much, though there are nice things about it.

I haven't used Clojure but it is much more modern about supporting functional data structures and idioms. If only it didn't have that Java influence...


👤 dragonwriter
> How does Common Lisp deal with the side effects?

It doesn't, systematically.

> How come Common Lisp doesn't get bogged down in a quagmire of complexity like managing state in other programming languages end up being?

Most programs in most programming languages don't, in fact, do that.

They often have bugs from not trying hard enough to manage state, though.

> Is there a design pattern that's common.

Sure, several, but they are the techniques you learn in any introductory class in any structured programming language, though there's a good chance they were developed in a Lisp-family language.


👤 isaac21259
I would assume that it manages them the same way every other impure functional language manages side effects. I assume Common Lisp would be quite similar to ML, OCaml, and Scheme in this respect. Those languages don't make side effects part of their type system and more or less leave it up to the programmer to not make mistakes. Note that I've never done anything of significance in Common Lisp, so this is just a best guess.

👤 faoileag
Like others said before: CL is just a programming language that can be used for functional programming, but doesn't enforce this.

So you have to be disciplined. I stick to the following rules:

* have a (main) function

* prefix io functions with io- and only use them in (main) (inspired by Haskell)

* No loop. Always recursion.

* No variables. No local ones and definitely no global ones.

That way, you have to think more in functional terms than in imperative ones.

Helps me. Your mileage might vary.
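As a rough sketch of those rules (the io- prefix follows the convention described above; everything else is just an illustration):

```lisp
;; Pure function: recursion instead of LOOP, no variables mutated.
(defun sum-list (xs)
  (if (null xs)
      0
      (+ (first xs) (sum-list (rest xs)))))

;; I/O is marked with the io- prefix...
(defun io-print-sum (xs)
  (format t "sum: ~a~%" (sum-list xs)))

;; ...and only ever called from MAIN.
(defun main ()
  (io-print-sum '(1 2 3)))
```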


👤 JHonaker
It's striking how many of the responses are just trying to flex their knowledge of FP.

The real answer is that Common Lisp is not a pure functional language. Many CL idioms involve mutation, the most obvious one being setf.

So it manages them the way Python or JavaScript do: by not managing them.
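For instance, setf mutates any "place" in the same spirit as plain assignment in Python or JavaScript:

```lisp
;; SETF assigns to generalized places: array slots, plist entries, slots...
(defparameter *v* (vector 1 2 3))
(setf (aref *v* 0) 99)        ; *v* is now #(99 2 3)

(defparameter *plist* (list :a 1))
(setf (getf *plist* :a) 2)    ; same mechanism, different kind of place
```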


👤 derbOac
I thought this had something to do with the lazy vs. strict evaluation of Haskell vs. Lisp.

https://www.seas.upenn.edu/~cis194/fall16/lectures/07-lazine...


👤 1MachineElf
Monads are possible in Common Lisp, although not widely used. If you haven't already, check out this old discussion about it from 2013: https://news.ycombinator.com/item?id=6398393

👤 medo-bear
i won't repeat what others have pointed out about cl not being a pure functional language (it doesn't even try to be), but i will refer you to coalton [0], which is a functional haskell-like (monads included) language built on top of common lisp

[0] https://coalton-lang.github.io/


👤 ACow_Adonis
I think you'd have to be a bit more specific about what you mean by "manage" and the specific use case/management of side effects that would be required. I'm not personally convinced it's been shown that purely functional languages provide greater utility than non-functional ones when comparing programs of similar length and complexity, though I'm personally pretty sympathetic to pushing people towards a "functional style" (as defined by me), because it's just a good way to program.

I think giant programs that have to interact with the real world are nightmares regardless, myself. And small programs written in a value-passing functional style by competent programmers don't really have very many issues with managing state, in my experience.

The short answer is Common Lisp isn't really "functional": it doesn't manage side-effects. You can just go imperative/side-effect crazy if you want. Just "setf all the things" or use CLOS for objects everywhere (i almost never actually use CLOS or complex objects, so maybe that's why I'm a bit unsure of what there is to manage).

The long answer is, i guess, a combination of several features and conventions:

- A number of built-in functions come in two versions: a non-destructive one and a destructive one. (remove) and (delete) are an example: two functions that do the same thing in the abstract, but the latter is allowed to mutate its argument list, primarily for efficiency reasons.

- Regardless of the above, you tend to program in a universal style of accepting and working with return values from functions: you explicitly "manage state" in the sense that you might assign return values to variables, but it would be extremely weird to rely on a function's destructive side effect rather than treating its result as a returned value. Using a function purely for its destructive side effects isn't really done, and even when it is, you treat it in a style AS THOUGH it were just a returned value. Almost universally you only use the destructive version for some computational efficiency reason, never because you expect to predict what happens to the underlying memory you're mutating. That's left up to the implementation (which may or may not mutate the underlying thing at all); you just accept that the function gives you the correct result.

- The various namespaces tend to separate out clashes and management on that front: values go in one namespace, functions/macros in another, and packages and things are another demarcation (I recall there are technically more namespaces, but those are the main ones).

- There are no "global variables" in the usual sense of the word. There are dynamic (special) and lexical variables; dynamic variables might be analogously thought of as globals, but they tend to be used only for their specific use cases, and convention is to mark them as special with asterisks (*like-this*). Variables with values in them are explicitly declared in functions, and you're responsible for choosing to manage state within functions, but thanks to the abstraction of function boundaries you'd generally never think about what mutations are actually going on inside a function unless you're authoring it.

- Garbage collection and things dropping out of scope generally just take care of most things in practice.

- If you REALLY got into some sort of nitty-gritty bit-twiddling state nightmare, you'd fall back on macros/testing/formal proof to help you. But most people are not doing that.

- Multi-threading isn't part of the spec IIRC, so that depends on the implementation's extensions and the problem. Race conditions and ambiguities in that respect aren't really handled by the Common Lisp spec.

- These days with the advent of test driven development, there are testing frameworks that can be used for complex programs, but I'm hesitant to say anything in that regard as CL was really written before that became a cultural thing.
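Two of the points above, the remove/delete pair and dynamic variables, in miniature (the variable and function names here are illustrative, not standard):

```lisp
;; REMOVE returns a fresh list; DELETE may destructively reuse its argument.
;; Either way, you use the RETURN VALUE, never the old cells.
(let ((xs (list 1 2 3 2)))
  (remove 2 xs)            ; => (1 3), XS itself is untouched
  (setf xs (delete 2 xs))  ; => (1 3), old cells may have been clobbered
  xs)

;; Dynamic ("special") variables wear earmuffs by convention and are
;; rebound with LET for a dynamic extent rather than mutated globally.
(defvar *verbose* nil)

(defun log-maybe (msg)
  (when *verbose* (format t "~a~%" msg)))

(let ((*verbose* t))       ; visible to everything called in this extent
  (log-maybe "hello"))
```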

Of course, with all of these, there's the assumption that you can generally write in some functional/non-destructive style if you so choose. So if that solves the problem or makes your problem simpler, that's generally what you'd do. And if it doesn't, then it's not apparent that functional-ness of your problem is really the main problem at hand...

I hope that helps. Feel free to ask me a few more questions if it helps, but also take note that it's been a few years since I've done anything with Common Lisp :\

Edit: I should say, if you're working with things like "infinite structures" or "delayed evaluation", you'd generally have to manage that explicitly yourself somehow, as CL is generally eager in its evaluation style, but that's not most problems in my experience.


👤 Jtsummers
> How come Common Lisp doesn't get bogged down in a quagmire of complexity like managing state in other programming languages end up being?

This notion that you either have pure functional programming or a quagmire is grossly exaggerated. Pure functional programming reduces the things that you can do (in a sense), so that you never have to deal with some complexity (but you have to deal with other complications instead). In a typical procedural/imperative language (which is pretty much everything outside declarative or pure functional languages), you avoid a quagmire by practicing sane programming.

This is the entire point behind the idea of loose coupling and high cohesion, as well as modular programming in general. If your components are loosely coupled (and have high cohesion) you narrow the scope of each component (whatever that may mean in your language, "module", "class", "package"). This lets you focus on the essentials of that area and avoid the "quagmire". The real quagmires tend to crop up when people:

1. Make excessive use of mutation in separate areas (where it is hard, from a reading-comprehension perspective, to even recognize that this is happening). Like in a project I was on in my first job (embedded system): they had a "task" system, and variables were reused across tasks, which made it hard to know how a variable was changed, especially as some tasks would only conditionally change it. It was not meant to communicate between tasks (like a signal to a later task, "Hey, there's something new for you to look at" and the reply "Hey, I looked at it"), but was instead being shared by complicated distributed logic. The simplification was to move the mutation into one task and change the logic so it was more comprehensible. Each task became loosely coupled in a way they previously weren't. This also increased the cohesion (there was no rational reason for some of the tasks to be changing it at all; the logic was just dumped in them because they had cycles to spare in their 100ms slot).

2. Reach into other components and change their state directly. Related to the above, but somewhat saner (still a bad idea). A component should control its own state. There are optimizations to be made to avoid copies and excessive function calls (more important in the embedded world, not something I fret about on desktops, and usually resolvable with compiler optimizations like function inlining), but at least as a first pass, direct changes from outside should be minimized if not eliminated. The principal effect of this is related to DRY. If you allow direct access to the data of a component (particularly changing that data or state; reading is, or can be, different), then the knowledge of the invariants related to it must be distributed throughout the system. This knowledge easily becomes out of sync as the system evolves over time. Working on a system that controls a physical device right now, you would not want arbitrary parts of the system to be able to direct the motor to rotate X degrees in some direction. That's a great way to break it. Instead, one part of the system owns the motor and contains all the sanity checks (based on present position, 90º clockwise rotation is impossible; only rotate 30º, or signal an error, or whatever is appropriate).
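A hypothetical sketch of one component owning its state, in the spirit of the motor example above (the class, limit, and method names are invented for illustration):

```lisp
;; The motor's position is reachable only through ROTATE, which
;; enforces the invariant in one place instead of all over the system.
(defclass motor ()
  ((degrees :initform 0 :reader motor-degrees)))

(defgeneric rotate (motor delta)
  (:documentation "Rotate MOTOR by DELTA degrees, within the safe limit."))

(defmethod rotate ((m motor) delta)
  (when (> (abs delta) 30)
    (error "Rotation of ~a degrees exceeds the 30-degree limit." delta))
  (with-slots (degrees) m
    (setf degrees (mod (+ degrees delta) 360))))
```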

And that certainly happens a lot, but there's no requirement for it when you're writing a program. Spend some time thinking, be sane, and you can avoid the "quagmires".