HACKER Q&A
📣 _448

How to be fluent in functional language speak?


When I read academic articles or blog posts written by functional programmers, it feels alien to me because of the terms used in their writing. I am not just talking about monads, monoids, etc., but the general programming terminology itself is different from what I regularly hear and read as an imperative OO programmer. For example, the sentence "abstraction over type constructors": when programmers from the C++ or Java world read that, they are sure to say "eh?!?" and fall off their chairs.

So how does a programmer from the non-functional world become fluent in understanding a sentence such as "abstraction over type constructors"?


  👤 centimeter Accepted Answer ✓
"abstraction over type constructors" isn't very meaningful without more context anyway, but they're probably referring to something like the following:

non-abstracted:

    map :: (a -> b) -> List a -> List b

abstracted:

    map :: Functor f => (a -> b) -> f a -> f b

"List" is a type constructor (it takes a type and returns a type), but we can write a "map" function that works over more types than just List - for example, Maps and Options and so on (aka "Functors" in Haskell parlance). We've "abstracted (the function map) over the type constructor (of the data structure being mapped over)".

To answer your more general question, if you just learn functional programming on a language with a powerful type system (Haskell is probably the most germane example) you will learn at least some of these terms. Most of them turn out not to be very deep, just hard to explain (for reasons unclear to those who understand them), but still very useful.


👤 mywittyname
Practice programming in a functional language. They enforce a lot of restrictions that you won't be used to, and in overcoming these restrictions, you'll begin to learn and understand the reason for the jargon.

It's really hard to explain "abstraction over type constructors" in a pithy manner despite being a relatively simple statement because the obvious follow-up is "but why?" And the answer to that question isn't going to satisfy you without an understanding of the functional programming environment.

It's exactly like when novice programmers get thrown into Java land in CS101 and you have to tell them to just ignore the whole class, public, private, static, etc. jargon. They need to gain experience programming in that environment to really appreciate those concepts.


👤 axelerator
I think the first step here is to put your expectations into perspective. Functional programming also spans a range of complexity. If you're not used to functional programming and try to read articles targeted at the high end, you'll naturally struggle to understand them.

I still remember "not getting" OO in university in my first lecture. I had programmed BASIC, C, Pascal, and PHP before. And if someone had tried to explain the idea to me by means of the visitor pattern, it would surely have made me say "eh?!?" too. So allow yourself to start slowly.

I'm also still dabbling in functional programming - at least that's what it feels like reading about monoids etc - but by actually using functional programming languages in my more serious spare time projects I feel I get a better and better understanding. And "those" articles start to speak more to me.


👤 marceloabsousa
You're not alone. Even functional programmers struggle to read those papers. I found that in the functional programming world there are a lot of concepts being thrown around under different names.

I think your best option is to directly contact authors or very good educators and ask them exactly what they mean.


👤 yodsanklai
First, you probably don't need to read academic articles to learn about functional programming. There's really nothing magical or complicated about functional programming. For instance, Scheme and OCaml are routinely used as beginner programming languages in schools around the world.

You can start with the basic concepts and eventually build your way up to more abstract constructs when you realize you need them. But even there, you don't need to know the academic terminology (or if you do, it'll make more sense when you get familiar with the language).

I frequently use monads and functors, yet I don't know much about category theory (and the little I know doesn't make me a better programmer).


👤 qppo
It's not the jargon of functional programming, it's the jargon of type theory and PL design. So maybe start there?

👤 WorldMaker
As with so many things in life, a lot of it boils down to practice.

Are you just trying to read academic articles and blog posts? Have you tried working in one or more of the languages? Beyond "toy problems" to trying to solve larger problems?

The more you work in a language, the more you might start to understand the needs that underlie things like category theory (the mathematics of monoids, semigroups, monads, etc) and where other higher level abstractions start to play in how you want/need to write software.

There are also places to practice even in C++, Java, etc. A lot of functional programming is making its way "stealth" into even the most imperative OO languages. Are you handy with your language's map/filter/reduce libraries around iterables? Have you tried pushing that deeper towards other monads such as query transformation (Linq) or time (async/await, ReactiveX)? Have you thought about how those work under the hood? (Explored what an iterable generator or an async/await block decompiles to?) Have you started to think in terms of higher-level "lambdas", places where functions themselves can be reused by other objects/functions? (Beyond "callbacks", towards thinking of functions themselves as lego bricks to build with.)

There are also middle-ground languages such as F# or Clojure that you can use side by side (in some circumstances) with C# or Java (respectively), where maybe you can't work 100% in a functional language on a project, but you can get a compromise mix.

So much of understanding the "lingo" is recognizing the patterns and/or problems to which it refers, and the easiest way to pick up and recognize recurring patterns is practice.


👤 k00b
I got comfortable with Lisp using http://www.4clojure.com/

Learning through games like this is a big help because they provide: 1. clear and approachable goals 2. clear feedback on whether you've achieved the goals

Maybe Clojure isn't what you're looking for but maybe there's a similar service for a language you'd like to learn.



👤 pera
I don't think this is different from OOP literature, where you may read something like "abstract factory pattern"; you just need to get familiar with the terminology. I would start by reading Pierce's Types and Programming Languages and then any book on Haskell (you may also want to brush up on your abstract algebra :)

👤 dominotw
I never use those terms, and my main programming go-to is Clojure.

"abstraction over type constructors" might have to do with typing and not related to functional programming.


👤 quickthrower2
I found it equally tough because while there are a lot of resources to cover the basics and get you coding in Haskell, there isn't much to bridge the middle ground from there to the level the deep enthusiasts and experts are at. All I can say is keep trying and reach out for help on IRC channels or at a local functional programming meetup (and by "local" these days you could remote in to any meetup you like!). There'll be people happy to help.

It's been a while since I've dabbled with Haskell, but I believe type constructors take a type and return a type (like List takes Int and returns List Int), so an abstraction over them might be an m where, say, m is a monad and it takes an a and gives you m a. m is an abstraction because it could be anything: a List, or Either, or State, etc.
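
To make that concrete, here's a small Haskell sketch (the function name `pairUp` is just made up for illustration):

    -- one definition, abstracted over the type constructor m;
    -- it works unchanged for Maybe, lists, Either e, State s, IO, ...
    pairUp :: Monad m => m a -> m b -> m (a, b)
    pairUp ma mb = do
        a <- ma
        b <- mb
        return (a, b)

    -- pairUp (Just 1) (Just 2)  ==  Just (1,2)
    -- pairUp [1,2] [10,20]      ==  [(1,10),(1,20),(2,10),(2,20)]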


👤 moosey
Collect the definitions that you are learning over time. Put them in Anki as multi-directional cloze deletions. Then just do your Anki deck each day.

You will find that your fluency in almost any subject is improved with this practice.


👤 Ididntdothis
Question to all the functional programming experts: When OOP got introduced people started building all these elaborate object hierarchies and design patterns only to find out after a while that simpler is better and most features should be used rarely or never.

Is there a risk with FP too that people fall a little too much in love with “elegant” code, only to find out later that maybe simpler is better? Reading what some people write about FP, I feel there is some risk of this.


👤 cjbprime
Try a textbook? e.g. Types and Programming Languages by Benjamin C. Pierce.

👤 pacala
Most of the jargon denotes straightforward definitions. Pick the three most frequent terms, e.g. monad, monoid, applicative; write down the definition plus a small example on a card; lather, rinse, repeat a few times and you're golden.

Edit. For example, let's take 'catamorphism'. Read the Wikipedia entry [0] and it's a seemingly never-ending jungle of more and more abstract terms: homomorphism, initial algebra, F-algebra, endomorphism. This is easy, let's see what an F-algebra is [1] and we'll be on our way soon. "If C is a category, and F: C → C is an endofunctor of C, then an F-algebra is a tuple (A, α), where A is an object of C and α is a C-morphism F(A) → A." Oh my God, this is hopeless.

Or, find a good description, with examples [2]. A 'catamorphism' card, abbreviated from [2]:

Catamorphism

A function that “collapses” a recursive type into a new value based on its structure. Visitor pattern.

Define a sum type Gift:

    type Book = {title: string; price: decimal}
    
    type Gift =
        | Book of Book
        | Boxed of Gift

Define a 'description' function over Gift:

    let rec description gift =
        match gift with 
        | Book book -> 
            sprintf "'%s'" book.title 
        | Boxed innerGift -> 
            sprintf "%s in a box" (description innerGift) 
    
Parametrize the function, creating a 'catamorphism':

    let rec cataGift fBook fBox gift =
        match gift with 
        | Book book -> 
            fBook book
        | Boxed innerGift -> 
            let innerGiftResult = cataGift fBook fBox innerGift
            fBox innerGiftResult

Refactor your 'description' function to use a 'catamorphism':

    let descriptionUsingCata gift =
        let fBook (book:Book) = 
            sprintf "'%s'" book.title 
        let fBox innerText = 
            sprintf "%s in a box" innerText
        // call the catamorphism
        cataGift fBook fBox gift
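
(Not from the article, but here is roughly the same example transliterated into Haskell, with Book simplified to just a title:)

    data Gift = Book String        -- just a title, to keep it short
              | Boxed Gift

    -- the catamorphism: one handler per constructor
    cataGift :: (String -> r) -> (r -> r) -> Gift -> r
    cataGift fBook fBox (Book title)  = fBook title
    cataGift fBook fBox (Boxed inner) = fBox (cataGift fBook fBox inner)

    description :: Gift -> String
    description = cataGift (\t -> "'" ++ t ++ "'") (++ " in a box")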

[0] https://en.wikipedia.org/wiki/Catamorphism

[1] https://en.wikipedia.org/wiki/F-algebra

[2] https://fsharpforfunandprofit.com/posts/recursive-types-and-...


👤 danidiaz
In Java terms, a "type constructor" would be a generic class like, say, `ArrayList`.

A constructor can be viewed as a function that takes values for the fields of a datatype and gives a value of the datatype in return. Not all functions are constructors, but some are.

At the type level, if we squint a little, "Maybe" or "List" behave a bit like constructors. They take types like "Int" or "Bool" as parameters, and give types like "Maybe Int" and "List Bool" in return. Not all type-level "functions" are type constructors (for example, in Haskell type families are not type constructors) but some are.
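
Concretely, this is essentially how Maybe is defined in Haskell's Prelude:

    -- 'Maybe' is the type constructor: apply it to a type, get a type
    -- (Maybe Int, Maybe Bool, Maybe [Char], ...)
    data Maybe a = Nothing | Just a

    -- 'Just' is a value constructor: apply it to a value, get a value
    -- (Just 3 :: Maybe Int)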


👤 Durgasoft
I also dislike unnecessary use of programming jargon. A type constructor is all the possible ways to make a value of that type.

For example:

  type answer = | Yes | No | Maybe 

The type is 'answer'. The constructors are Yes, No, and Maybe (these are actually called type variants, but now we're getting into jargon). I presume an 'abstraction over type constructors' means you want to generalize that type, so you can use the constructors to produce an abstract type that works with any type, but I've never heard that specific term.

👤 whateveracct
Keep trying to understand, asking questions, trying stuff first hand and eventually it'll come! The "Hask anything" sticky in /r/haskell is a friendly & pseudonymous way to ask questions too.

One thing I've noticed working in Haskell: everyone is an expert in different things. Very rarely do I meet a Haskeller whose knowledge or interests subsume another's. That's part of the fun. Everybody doesn't know some terms!


👤 onemoresoop
I got comfortable with Lisp/Scheme using DrRacket. It is actually very helpful for beginners. I think you will benefit from learning functional programming this way; Scheme is very simple and effective.

https://racket-lang.org/


👤 fpglossary
Here's a reference you may find useful

John A De Goes - A Glossary of Functional Programming

http://degoes.net/articles/fp-glossary


👤 milani
I had the same problem and decided to begin with fundamentals. I read "Types and Programming Languages" by Benjamin C. Pierce and then things started to make sense.

👤 andrepd
I guess the answer is simply: just study functional programming. 30 minutes into any basic Haskell course (e.g. the wikibook, or Real World Haskell) and you would know what a type constructor is. I don't know, you can't expect to magically know things without studying them first? :) What you mean when you say "functional programming speak" is just common technical terms with a specific technical meaning. You wouldn't complain you don't understand the meaning of "Minkowski metric" before learning relativity :p

👤 sova
Learn Map, Filter, Reduce. Build a toy project with a functional language.
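
For example, in Haskell (where reduce goes by the name fold):

    map    (*2)  [1, 2, 3, 4]   -- [2, 4, 6, 8]
    filter even  [1, 2, 3, 4]   -- [2, 4]
    foldr  (+) 0 [1, 2, 3, 4]   -- 10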

👤 benibela
I took a Haskell class in college for that

It did not help much.


👤 diegoperini
> So how does a programmer from non-functional world become fluent in understanding sentence such as "abstraction over type constructors"?

We already have very primitive (and restrictive) imperative OO languages which are good at teaching the basics to new programmers. My observation is, FP lacks such gateway languages. Even the simplest FP-first language is intimidating.

So my answer to your question is, a good strategy to learn these can be carving out a relatively small portion of a language (e.g. Haskell with only algebraic types and pattern matching) and working only with it for a period of time. Doing that doesn't teach terminology, but it exposes what Haskell is really obsessed with. For the curious, it is expressing type relationships as flexibly as the numeric arithmetic we know from high school.

Just like in high-school arithmetic, several recurring patterns (often called formulas in math class) have special names. Almost all programming languages are good at closely matching that syntax in their numeric expressions. Unfortunately, that is not the case for types. In programming,

this functional type (pseudo code):

  A = (X + Y) | Z

tends to be expressed as:

  interface A {
  }

  class A1 implements A {
    X x;
    Y y;
  }

  class A2 implements A {
    Z z;
  }

or as:

  union A {
    struct A1 {
      X x;
      Y y;
    } a1;

    struct A2 {
      Z z;
    } a2;
  };

In both examples, it is longer to write the same "meaning" in OOP than FP, especially when there are abstract methods involved.
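
(For comparison, the same shape in Haskell; X, Y, Z are placeholder types just for illustration:)

    type X = Int      -- placeholder component types
    type Y = String
    type Z = Bool

    -- the same shape as the interface/class and union versions above
    data A = A1 X Y
           | A2 Z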

This difference in verbosity (aesthetically subjective) bugs an FP programmer more and more as they get used to their language. When that happens, people start naming things. Having an abstract type of `Collection` is no longer enough because we realize `Collection`, `Mappable`, `Traversable`, `Generator`, `Optional`, `Atomic`, `Pair` and `Tuple` all share the similarity of:

* wrapping a type,

* having a way to access the wrapped value(s),

* and the wish to support all operations the wrapped value supports without an unwrap, composable out of the box.

We give this pattern a name, the infamous "Functor" (incorrectly, but who cares), and axiomatize those assumptions formally in the target language, so that the pattern is no longer reimplemented again and again. The word "Functor" itself otherwise has no meaning; it is letter salad.

To learn those names effectively, you must be comfortable enough with the basics that you get annoyed by the repetition and start researching new names as you stumble upon new patterns.

In Haskell, "abstraction over type constructors" usually means (in OO terms) "declare a new generic interface and implement that in the abstract type so that you don't have to reimplement it non-genericly in the concrete type".


👤 Twisol
> So how does a programmer from non-functional world become fluent in understanding sentence such as "abstraction over type constructors"?

Incoming essay...

Pure functional programming is fundamentally about software components called "pure functions", or just "functions" for short. (I'm quoting the term because they're not the same as what are called functions in other languages. I'd call those other things "procedures" instead, but alas.) Other programming communities rally around components like "objects", and that's fine. The important thing is to pick something to break your system down into, and here, we're talking about "functions".

"Functions" are simpler kinds of components than objects or procedures: the ways in which they behave are much more limited, so they're easier to reason about. When a "function" is interacted with, it can't perform any side effects, so interacting with the "function" multiple times gives the same behavior every time. Procedures can manipulate global state, and objects can manipulate internal state, so reasoning about them takes more effort. You have to keep track of time: what happened before this interaction?

"Functions" have two interfaces: an input side and an output side. When a "function" receives a value on the input side, it will always emit a value on the output side. We can wire "functions" together by connecting the input of one to the output of another. The result is a system with a single free input side and a single free output side -- that is, it's also a "function". This makes it very easy to wire up lots of "functions".

Usually, a "function" has certain expectations of the input values that it receives, as well as some guarantees about the output values it emits. In a dynamically typed language, when those expectations are not met, a runtime error is emitted. This is fine -- it's just one way to handle failed expectations. But sometimes we can formalize those expectations statically, and let the compiler check up-front whether those expectations are met. This does add some complexity: you might not be able to wire up two "functions" if one can't satisfy the other's expectations. Statically-typed programmers have decided that they're willing to put up with that.

In a statically-typed world, the input and output interfaces of our "function" software components can be tagged with specifications. Functional languages are often judged by how expressive these specifications can be; a specification language like this is called a "type system". You can have type systems for components that are not "functions", but the rules of composition become more complex. (For "objects", class systems are quite popular.)

Most type systems allow you to break down an interface specification into smaller pieces, which themselves are valid specifications. That means we now have two kinds of components in our system: "functions", which exist at runtime, and types, which exist at compilation time. The rules for how types compose can be more complicated than those for "functions", but we're usually okay with that, as long as "functions" are kept simple.

Some type systems allow a value to satisfy multiple types. For example, class-based systems allow this via subtyping relations. Others might describe types as predicates (truth functions) on a pre-existing universe of values (e.g. TypeScript). Pure functional programming typically requires mutual exclusion: a value cannot be part of multiple types. We often say that the type defines its values because of this.

The low bar for a static type system is "algebraic types". This means that we have two ways to compose types, conventionally called "product" and "sum". Most type systems have products, but sums are historically rarer. Values of a product type `X * Y` look like `(x, y)`, and we can pull out either element of the pair. Values of a sum type `X + Y` look like either `Left x` or `Right y`, and we can ask which one it is and do something different depending on the result.
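
In Haskell notation, roughly (using the built-in pair and Either types):

    -- product: a pair holds an Int *and* a String
    pair :: (Int, String)
    pair = (3, "three")

    -- sum: an Either holds an Int *or* a String, and you branch on which
    describe :: Either Int String -> String
    describe (Left n)  = "a number: " ++ show n
    describe (Right s) = "a string: " ++ s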

Java and C++ do not have sum types. You can approximate their specifications to some extent using the Visitor pattern or discriminated unions, but the implementations in these languages are verbose, and unpleasant to read and write. Rust and Haskell do have sum types, so specifications of that nature are used much more readily.

We can generalize further. We can think of the product and sum of types as "type functions", or "type constructors", that take a pair of types as input and produce a type as output. Many type systems allow you to define your own "type functions". In Java, these are generic classes: "List" is a type function from some type T to the type of lists containing that type. In C++, these are templates. But in most pure functional programming, static type functions look just like runtime value functions, since they behave just like them too. It's just a matter of when they're evaluated.

Once again, when you have functions, you often want to place demands on their inputs, and guarantees on their outputs. Type functions are no different. There are many possible ways to "type types", as it were, but a popular approach is by using "traits" or "typeclasses". Just as with types and values, a typeclass places requirements upon the types that inhabit it. (But a type can also satisfy multiple typeclasses.)

For instance, "Monoid" is a typeclass on types with two associated items: a "zero" value of that type, and a "concat" function that takes two values of the type and gives another value of that type. If you know that a type is a Monoid, then you know that these associated items are also available to you, regardless of what the actual type is. (For instance, you can write a "concatAll" function that iterates over a list of elements of a monoid and concatenates them together regardless of what the monoid type actually is.)

With typeclasses in mind, instead of taking a "list of X" or a "set of X", you might abstract over all such things as "collections of X". But you don't want to say what a collection is for every possible X. We need to abstract over type constructors. A "collection" is any type such that, given a type X, you get a type with certain associated items expected of all collections. Java approximates this with subtyping.

"Functor" is, in some ways, a generalization of "Collection". A type function that is a "Functor" is a type that, if you have a value of that type over Xs, and you have a function transforming Xs into Ys, then there must be a mechanism to get a value of that type over Ys instead. We call this mechanism "map", and it can be instantiated over any collection. Got a list of X and a function from X to Y? We've got a way to get a list of Y. Got a set of X and a function from X to Y? We've got a way to get a set of Y.

Because "set" and "list" are type constructors, we say that we are abstracting over type constructors to get a single statement about all such things. "For every Functor F, if I have an F containing Xs and a function transforming Xs to Ys, then I can get an F containing Ys."


👤 rb808
Give up while you can. Immutable data classes and functional pipelines are great; everything else isn't worth it. Lots of people say FP is amazing, but very few people use it in the real world because, like you say, it's unintelligible.