But I wonder if this is mostly a matter of taste. In small programs, the end result of a Haskell program is the same as that of a Python program. Is there a threshold beyond which Haskell's purely functional paradigm shines the most?
There is an F# experience report in which 350k lines of C# were rewritten as 30k lines of F#. They also went from 3k null checks to 15 lines of null checks, among many other improvements. Zero bugs were reported in the newly deployed system.[0]
Now, with that said, there are exceptions where purely functional programming languages shine less:
- Places where the ecosystem is not quite as mature. If you're building a server and have to interact with Cloud services in Haskell, you'll have a bad time.
- Any kind of system where you need to do manual memory management (i.e. systems programming) is badly suited to purely functional programming.
Mixed-initiative systems (say UI code where you call into the framework and the framework calls into you) can go either way. Sometimes you can formulate part of your code in a pure functional way, which tames the chaos; if you can't, or that formulation is unnatural, the chaos tames you.
Systems with immutable data structures lose a factor of two or so in performance in some cases. FORTRAN programmers won't accept them; distributed “big data” systems spend a lot of time marshaling and unmarshaling data and won't accept any slowdown in their parsing code. The same could be true for something like that SAT solver.
As an active user and package maintainer, I can't count the number of times I've decided to rebuild an old project that I haven't touched in years and found that it still works, while in the same session working on another project that uses up-to-date libraries.
There's also straight.el for Emacs[1] which has finally made Emacs config maintenance far better for me.
- You can safely make more assumptions about how your program works
- You can rely on your type checker like a to-do list when extending your program
These two benefits require more discipline, but they make extension and support brain-dead easy.
For the record, you don't need Haskell to enjoy these benefits, you can get them in Python or TypeScript too, as long as you're disciplined in how you design your system.
So, I would say the advantage of a pure functional programming language is that it enforces and encourages writing pure code. You can still write in that style to some degree in Python, but you don't get all of the idioms and language support that make functional programming really nice.
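To make that concrete, here is a minimal sketch (my own toy example) of what that enforcement looks like in Haskell: the effectful part carries `IO` in its type, and the pure part has no way to sneak effects in.

```haskell
-- A pure function: its type rules out side effects entirely.
double :: Int -> Int
double x = x * 2

-- Anything effectful is honestly labelled with IO in its type.
readAndDouble :: IO Int
readAndDouble = do
  line <- getLine
  pure (double (read line))  -- read errors on non-numeric input; fine for a sketch

main :: IO ()
main = readAndDouble >>= print
```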
I notice a difference in my code when I write in a pure language.
There are some cases where you really want in-place mutation (graph algorithms are a good example), but for most other things I feel like functional programming wins because of how you can think about your code, and less because of what the code actually can or cannot do.
According to Grokking Simplicity[1], good functional programming isn't about avoiding impure functions; instead extra care is given to them. These functions depend on the time they are called, so they are the most difficult to get right.
Compare that to pure functions: given the same input, the result is always the same regardless of when the function was called, so they are easier to reason about.
There is actually a level that is even easier to reason about than pure functions: plain data.
So functional programmers prefer data over pure functions over impure functions.
The reason to avoid "leaks" is because impure functions cause everything that calls them to also become impure. However, it is OK to use local mutation contained to the function itself. Sometimes mutation is simpler and doesn't affect the purity of the function that contains it.
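In Haskell this "local mutation stays local" idea even has direct library support. A small illustrative sketch (names are my own) using the ST monad:

```haskell
import Control.Monad.ST (runST)
import Data.STRef (newSTRef, modifySTRef', readSTRef)

-- Uses a mutable accumulator internally, but nothing mutable escapes:
-- from the outside this is an ordinary pure function.
sumOfSquares :: [Int] -> Int
sumOfSquares xs = runST $ do
  acc <- newSTRef 0
  mapM_ (\x -> modifySTRef' acc (+ x * x)) xs
  readSTRef acc
```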
Another good book on practical functional programming is Domain Modeling Made Functional[2]. Actually all of Scott Wlaschin's material[3] is very good!
[1]: https://www.manning.com/books/grokking-simplicity
[2]: https://pragprog.com/titles/swdddf/domain-modeling-made-func...
https://kristiandupont.medium.com/mutable-state-is-to-softwa...
I find personally that there are some areas where FP shines -- anything that is or could be a CLI that takes some input and returns some other output is well suited. A GUI program is the manipulation of state, and it is poorly suited, though many of its sub-parts might not be.
What I don't like is the concentration level required for functional productions over equivalent imperative routines. I have found it difficult to impress upon junior developers the testing advantages of functional programming, and in refactoring pairing sessions with them they quickly lose the thread and go all doe-eyed.
I feel like the current functional programming paradigms favor a terseness that completely obliterates the thought process that led to the production. So even if the junior has the aha moment when we reach the end of the production, and understands what we just achieved there in 6 lines of dense code, they'd be at a loss to retrace it themselves in the future, because the thought process behind each operator in the production requires too much simulation space in the head.
That, and libraries like RxJS that layer in concepts of time and observables and higher-order observables with sneaky completion states, which only stretch the mind further because the true semantics of the program are coupled with under-documented, quirky edge cases of the operators. Running into one of those while pairing with a junior is not exactly confidence building.
Long-form programming with named pure functions might help, but then I suppose you can lose the terseness and can get lost in a sea of 1-line named functions.
It does, IMO, give you fewer footguns related to state, though. While I still actually like other paradigms, functional programming really limits how clever you can be with state. You can only pass things around immutably via parameters (at least with Haskell specifically) instead of making a confusing taxonomy of objects with unique APIs, for example.
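For example (made-up `Settings` type), "changing" a value in Haskell just means building a new one and passing it along:

```haskell
data Settings = Settings { retries :: Int, verbose :: Bool }
  deriving Show

-- "Changing" a setting builds a new value; whoever handed you the old
-- one still holds an unchanged copy.
enableVerbose :: Settings -> Settings
enableVerbose s = s { verbose = True }
```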
A related concept that is relevant at a high level is the comparison between declarative and imperative code. Code that is functional tends to be declarative and, in a way, "is what it seems to be." There is also a transitive property that allows the description of a thing to also be its own runnable representation, which makes portability and idempotence easier to achieve.
I dare say that original "OO" in terms of "message passing" is functional in nature, but object-oriented somehow became something it is not.
Erlang is probably the best representation of a functional language ideal in my own personal opinion when weighing in the ecosystem and capabilities of the language and tooling.
Beware of the Turing tar-pit
in which everything is possible
but nothing of interest is easy.
-- Perlis:54
https://web.archive.org/web/20120806032011/http://pu.inf.uni...

Haskell is syntactic sugar on top of the Von Neumann architecture (likewise Python, Rust, COBOL, Smalltalk, etc.).
The Von Neumann architecture is inherently stateful. Anything with volatile memory is.
Functional programming is an idiom, not a technology. To the degree functional programming is a technology, it is one for analog computers.
Good luck. My time has been wasted trying to be clever.
This has benefits when working in large, long-lived systems where engineers can't keep everything in their head, and people come and go over the years. It is easier to reason about, to refactor, and to test, when you know that the "blast radius" of your function is just the return value.
What we call functional programming today is often reducing the problem down to the easy parts. Which is great and should not be discounted. But don't think of it as any more valid of a solution than anything else. And pure numerical models of the problem can often be just as fruitful in solution speed, while being utterly foreign to most programmers.
OO gets lambasted as you try and model each individual thing in a problem. In my mind rightfully so. That said, it is very common for us to model things in a way that is very "object" based. Consider the classic model of an elevator system and how that is coded by most people. You will have a set of elevator objects with states that they currently represent.
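For concreteness, the usual "elevator objects with state" model looks something like this (types and field names are my own invention):

```haskell
data Door = Open | Closed

data Elevator = Elevator
  { currentFloor :: Int
  , door         :: Door
  , requests     :: [Int]  -- floors queued up to visit
  }
```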
But, you could also do this fully numerically with a polynomial representing things. Expand your toolset to use generating functions, and you can even start building equations that can count the number of solutions at any given state. Still just a symbolic model, but very very different from the OO or even FP style of model that most programmers will write.
Better for the problem you are solving? Probably not, during the exploration side of things. But get the problem into a formulation that you can translate into a SAT style, and you can feed it to SAT solvers that are far more capable than programs most any individual can write. Translate the solution back to your representation for display to the users. (Or just general use in your program, as you move to the next thing.)
Referential transparency means that when we bind an expression to a name (e.g. `y = f x`), the two are interchangeable, and whenever we use `y` we could just as well use `f x` and vice versa, and the meaning of the code will stay the same.
Enforcing referential transparency in a programming language means that:
- We need to have more control over effects (purity)
- We can use substitution and equational reasoning
The value of referential transparency for me is that I can trust code to not surprise me with unexpected effects, I can use substitution to understand what a piece of code is doing, and I can always refactor by converting a piece of code to a new value or function because referential transparency guarantees that they are equivalent.
Because of that I feel like I can always understand how something works because I have a simple process to figure things out that doesn't require me keeping a lot of context in my head. I never feel like something is "magic" that I cannot understand. And I can easily change code to understand it better without changing its meaning.
Referential transparency is freeing, and makes me worry less when working with code!
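A tiny made-up example of that substitution property:

```haskell
prices :: [Double]
prices = [9.99, 4.50, 12.00]

total :: Double
total = sum prices

-- Because `total` is bound to a pure expression, these two definitions
-- mean exactly the same thing, and either can be refactored into the other.
withTax :: Double
withTax = total * 1.2

withTax' :: Double
withTax' = sum prices * 1.2
```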
---
The other part that is very notable about Haskell is one of its approaches to concurrency - Software Transactional Memory. Which is enabled by limiting the kind of effects we can perform in a transaction block:
https://www.oreilly.com/library/view/parallel-and-concurrent...
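A minimal sketch of what that looks like (my own toy example; needs the stm package):

```haskell
import Control.Concurrent.STM (TVar, atomically, newTVarIO, readTVar, writeTVar)

-- Inside `atomically` we can only read and write TVars; arbitrary IO is
-- ruled out by the STM type, which is what makes retrying transactions safe.
transfer :: TVar Int -> TVar Int -> Int -> IO ()
transfer from to amount = atomically $ do
  a <- readTVar from
  b <- readTVar to
  writeTVar from (a - amount)
  writeTVar to   (b + amount)

main :: IO ()
main = do
  alice <- newTVarIO 100
  bob   <- newTVarIO 0
  transfer alice bob 40
  balances <- atomically ((,) <$> readTVar alice <*> readTVar bob)
  print balances
```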
For example, I've worked on a lot of Scala code which represented optional values using `Option[T]` (~= Haskell's `Maybe t`); represented possible failures using `Try[T]` (~= `Either Exception t`); and I even used the Cats library to make code polymorphic (i.e. for all `M: Monad`, or `M: MonadThrow`, or indeed `Applicative`, etc.). The older ScalaZ library is a popular alternative to Cats which does similar things. Concurrency is currently quite diverse in Scala: we mostly used `Future[T]`, but there are a bunch of alternatives out there like `Task`, etc. Likewise there are other Scala libraries/frameworks which model side-effects differently, e.g. Zio uses algebraic effects.
However, since Scala is not pure by default, we can't count on any of these fancy types to actually prevent the problems they're meant to address. For example, we can only use `Option` alongside `null`: the latter can't be avoided, since Scala isn't pure; in fact `Option` technically introduces even more null checks (since `x: String` might be `null`, but `y: Option[String]` might be `null` or `Some(null)`!). Likewise, `Try` can only be introduced alongside the possibility of any exception being thrown at any time. And so on.
In contrast, we might have a project e.g. in Haskell, where `IO` appears in all of our types. Such code might use all sorts of confusing effects (e.g. early returns, mutable variables, concurrency, etc.), and may blow up in all sorts of ways, just like Scala, Python, etc. Yet at least they're labelled as such, and we're able to write other values and functions which actually do what they claim (modulo totality, unless we're using Idris, Agda, etc.).
Are you _literally_ writing a simple script to do a single task which you won't save into version control? Pick whatever style will be fastest to write + run. But if you'll need to re-read it in 6 months to figure out how to re-use a portion in a different context, or the next engineer will need to migrate something out from under it in a year, or whatever else, then picking a style based on readability, testability, and analyzability can be worth it -- provided the rest of your team is on the same page.
This property is true to varying degrees outside of pure FP too. For example in Rust I find that expression-orientation makes programs fairly easy to compose, modulo thinking about ownership.
In a complex world with many hidden aspects, unleashing unpleasant surprises all the time, working on purely functional components makes it easier to:
1. understand
2. test
3. debug
4. extend
These are significant benefits of the functional approach (at the function/component level), in my view.
I personally do a lot of complex math, so I'm using it for programs that do a lot of math. I still haven't got my chops up enough to do a lot of procedural and external stuff with it yet (I find a higher-level language is much easier for that), but when doing math I find it much harder for mistakes to make it into production code with Haskell.
If you choose to write your core product in a functional programming language, you now have to hire an entire team of people who can program FP. If you decide to write your core product in JavaScript, Node, or TypeScript, you can hire bootcamp grads and a handful of senior engineers to keep an eye on them.
https://mostly-adequate.gitbook.io/mostly-adequate-guide/
See discussion: https://news.ycombinator.com/item?id=22654135
There's a point of view that if you haven't done it correctly, you haven't done it at all. There's another point of view that you just fix bugs as they are found.
For me, as a UI dev, there are a lot of repetitive tasks. FP allows me to know that the Lego pieces all work as they should.
I've been a professional programmer for a little more than twenty years and have been using Haskell to this degree for about the last 4 or so. A good chunk of my career was dedicated to Python and C. I've done work in C++ and JavaScript. Dabbled in Common Lisp in my spare time. Which is to say, I've run the gamut of functional, imperative, object-oriented, etc.
I got started in OCaml first by taking the Inria course. I later tried Haskell by taking the University of Glasgow course. I often thought of myself as preferring FP but I always worked in languages with escape hatches where I could retreat to familiar territory. It was easy to rationalize this as, "using the right tool for the job." However thanks to some smart people in my life I decided to dip my toes in languages that were FP-first (OCaml) and later FP-only (Haskell).
> Is there a threshold after which Haskell's purely functional paradigm shines the most?
I think you just have to jump in the pool and go for it. There's no threshold. You can write scripts, programs, whole systems in Haskell. There's no point where, "it becomes worth it."
I'm certain at this point that you can write, in Haskell, a relational database engine that is comparable in performance and features to any other on the market. You can write video games in it. You can write short helper scripts and tools. Word processors, websites, compilers, etc.
The threshold is how willing you are to leave behind what you're already familiar with and learn a new way of programming that will make you feel like a complete beginner again for a while. Are you willing to re-learn programming all over again? I don't know why this is surprising to people, but if you started with a C-like language and picked up other C-like languages along the way, picking up Haskell isn't going to be as easy: there's little in your C-like knowledge that will be useful for a long while.
I've been doing it for a while now and there are benefits to it which is why I still program in Haskell and got a job writing code in Haskell. You likely already have a good deal of the tools needed to reason about how to design programs in a pure FP language: sets and high-school level algebra. You have notions built into the type system in Haskell like equality, relations, and constraints. Once you start jogging your brain on how to think this way you can design most of your program in types. And when you do it this way, there tends to be only one or a couple of ways to implement a program that satisfies those types... it's almost a mechanical process of filling in the holes.
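A small illustration of that "the types do most of the design" feeling (my own example): for a sufficiently polymorphic signature there's essentially only one total implementation, and GHC's typed holes will walk you to it.

```haskell
-- The type alone pins down the implementation.
compose :: (b -> c) -> (a -> b) -> a -> c
compose f g x = f (g x)

-- Writing `compose f g x = _` instead makes GHC report the hole's type (c)
-- and the bindings in scope, which is the "fill in the holes" workflow.
```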
From the practical perspective of a former Python programmer, I don't have to write a good deal of the unit tests I would usually start with when building a Python program. Armed with a well-thought-out handful of types, I can let the compiler reject any program that isn't correct by construction, which allows me to focus more of my effort on things that are more important to me: is the logic correct, etc. I'm not testing, "are these things equal," "does this throw an expected error," "is this actually a valid function," etc.
Another weird thing that happens when you learn to program in Haskell is that control flow is kind of... represented in data types. Since Haskell is expression-oriented and execution is carried out by evaluation, we get neat features like whole-program analysis of all code branches. If I add a new constructor to a sum type, I get instant feedback on all the places where I now need to handle that case. This is super useful.
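For example (a made-up type, with the relevant warning enabled):

```haskell
{-# OPTIONS_GHC -Wincomplete-patterns #-}

data Payment = Cash | Card

describe :: Payment -> String
describe p = case p of
  Cash -> "pay at the counter"
  Card -> "pay by card"

-- Adding a new constructor, say `| Voucher`, makes GHC warn at every
-- `case` like the one above that doesn't yet handle it.
```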
However... it can be a challenge getting to this place where you're comfortable with it. I had a hard time when I started a small Haskell project early on to make a simple web app. I was so used to logging being a single import away that having to learn all these new concepts in Haskell like, "monad transformers," just to add logging to my app seemed like a bunch of busy work for something that was, "solved," in my mind. The key to getting through moments like that is to forget your past experiences and just open your mind to being a beginner again.
It becomes useful later on when you realize that composition is so incredibly useful. You start to miss it when you go back to procedural languages and there are no guarantees and nothing composes.
In other words, is your program a calculator or a robot?