Excerpt from the Haskell Programming from First Principles book cover, showing just the title. There is an overlay saying 'Chapter 5' across the top right corner.

This image really grows on you, you know?

Oh, hey, it’s you again! Always a pleasure to have you along for the ride. This time, let’s have a deeper look at one of the basic underpinnings of the Haskell language: types. Because Haskell is an implementation of a typed lambda calculus, types are a big part of what makes Haskell Haskell, and a good understanding of them is essential to mastering the language.

The chapter opens with a couple of pages asking what types are good for and describing how they can help us enforce invariants, avoid (certain types of) bugs, enable compiler optimizations, and reason about our systems.

For this summary, we’ll explore how to constrain argument types using typeclasses, take another dive into currying and partial application, and have another look at polymorphism.

Constraining your types

Let’s start off by looking at how we can constrain a function’s domain by using typeclasses and how that impacts the function.

By default, when you create a function without annotating it, the compiler will give it as wide a definition as it can, using the least specific types: constraining a type variable to a typeclass where necessary, but leaving it completely open where possible. Of course, if one or more of the parameters can only be of a concrete type, it will infer that too.

This generalization is why we can define a function for numbers and then use that function for any type that has an instance of Num.

To add a typeclass constraint to a type signature, specify the constraints before the arguments, separated from the rest of the signature by a fat arrow (=>). If you have multiple constraints, put them in parentheses and separate them with commas. The below signatures illustrate this:
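A sketch of what such signatures can look like (the function names here are illustrations of mine, not from the book):

```haskell
-- A single constraint: 'a' can be any type with a Num instance.
add :: Num a => a -> a -> a
add x y = x + y

-- Multiple constraints go in parentheses, separated by commas.
showSum :: (Num a, Show a) => a -> a -> String
showSum x y = show (x + y)
```

Everything left of the `=>` is the constraint context; everything right of it is the familiar argument-and-result chain.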

The type constraints only show up at the type level and are never visible at the term level.

As we’ll see when we talk about polymorphism a bit later, constraining our types like this enables us to create more powerful and specialized functions at the cost of decreasing general applicability.

Currying

We’ve talked about currying before, but this time it’s given an entire subchapter! If you can’t quite remember what it is, let me give you a quick summary:

Because a function in a lambda calculus can only be applied to one argument, the way to construct functions that can be applied to multiple arguments is to nest multiple lambdas. Haskell abstracts this away for us and makes it seem as if a function can be applied to an arbitrary number of arguments. The only place where this actually shines through is in function signatures, where the function type operator ((->)) separates the parameters, hinting that the function is really just a series of nested one-argument lambdas.

This is handy because it allows partial application, allowing us to pass around functions that haven’t been fully applied and add the remaining arguments when it suits us.
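A minimal sketch of partial application (addFive is a made-up name for illustration):

```haskell
-- (+) :: Num a => a -> a -> a is really a -> (a -> a), so applying
-- it to a single argument yields a new one-argument function.
addFive :: Int -> Int
addFive = (+) 5

-- The partially applied function can be passed around like any value.
bumped :: [Int]
bumped = map addFive [1, 2, 3]   -- [6, 7, 8]
```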


Haskell is curried by default, but it’s possible to uncurry functions. What does that mean? It means that if you have a function that you need to apply to two arguments, you can turn it into a function that only accepts a single argument: a tuple containing the two expected arguments. The opposite is also true: if you have a function that expects a tuple, you can curry it to turn it into a function expecting two separate arguments.

The type and possible implementation of uncurry are as follows:
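This is essentially how the Prelude defines it:

```haskell
import Prelude hiding (uncurry)

-- Turn a curried two-argument function into a function
-- that takes both arguments bundled in a single tuple.
uncurry :: (a -> b -> c) -> (a, b) -> c
uncurry f (a, b) = f a b
```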

We can play around with it a bit:
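For instance (evaluation results shown in the comments):

```haskell
three :: Int
three = uncurry (+) (1, 2)              -- 3

firstOfTwo :: Char
firstOfTwo = curry fst 'a' 'b'          -- 'a'

greeting :: String
greeting = uncurry (++) ("foo", "bar")  -- "foobar"
```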

Can you see how you can implement fst and const in terms of each other composed with either curry or uncurry? Look up the types in the REPL and play around with it a bit if not.
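If you’d rather check your answer than derive it, here is one way it works out (primed names just to avoid clashing with the Prelude):

```haskell
-- const :: a -> b -> a, uncurried, takes a pair and keeps the first half.
fst' :: (a, b) -> a
fst' = uncurry const

-- fst :: (a, b) -> a, curried, takes two arguments and keeps the first.
const' :: a -> b -> a
const' = curry fst
```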

In short:

  • Curried functions: many nested functions, each taking a single argument
  • Uncurried functions: one function, many arguments (grouped in a tuple)

Sectioning makes a little return in this chapter too. Quick reminder: sectioning is partial application of infix operators or functions, where we can leave out the term on either side and Haskell will cleverly know which slot is left to fill: (2^) 3 would give 8, for instance, while (^2) 8 would give 64. Sectioning works for any binary infix function.
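A quick sketch of sections in action:

```haskell
-- Right section: the left operand is the missing slot.
squares :: [Int]
squares = map (^2) [1, 2, 3]       -- [1, 4, 9]

-- Left section: the right operand is the missing slot.
powers :: [Int]
powers = map (2^) [1, 2, 3]        -- [2, 4, 8]

-- Ordinary named functions can be sectioned too, via backticks.
halved :: [Int]
halved = map (`div` 2) [10, 7]     -- [5, 3]
```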

Polymorphism

Polymorphism was also mentioned last time, but this time we dive a bit deeper into what it is. Our definitions of parametric and constrained polymorphism return, and we’re introduced to something called ad-hoc polymorphism, so let’s recap and explore a bit.

Parametric polymorphism is the broadest form of polymorphism, and is when the type variables are fully polymorphic, completely free of any and all constraints. This means that their actual concrete type could be anything and the function would still do what it should. The function will work on any type. An example of a parametrically polymorphic function is id:
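Its type and definition (this is exactly how the Prelude defines it):

```haskell
import Prelude hiding (id)

-- The type variable 'a' is fully polymorphic: no constraints at all.
id :: a -> a
id x = x
```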

Can you tell what it does? When a function is this polymorphic, there is only one thing it can do, and that is return whatever it receives as an argument. Hmm? Useless, you say? Well, it has its uses, but this is not the time or place to explore them.

Constrained polymorphism is when there are certain constraints placed upon the types by specifying that they must belong to a typeclass. This makes the function able to use functions defined for a given typeclass, increasing the number of things the function can do while simultaneously decreasing the number of values it can accept. Any numeric function is a good example of this:
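For example (double is a made-up function for illustration):

```haskell
-- The Num constraint gives us (+) to work with, but narrows
-- the domain from all types to only the numeric ones.
double :: Num a => a -> a
double x = x + x
```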

Constraining your types will give you more power within the function, but a smaller domain of input values. A good rule of thumb would be to always try and make your functions as general and unconstrained as possible to give the caller maximum freedom, but not to be afraid of putting constraints on the inputs when it is required by the task at hand.


Sometimes you’ll come across something known as a polymorphic constant. In fact, we’ve seen a lot of these already: any time you write a numeric literal without specifying its type, it remains polymorphic until it’s given a more specific type, for instance through function application.
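A sketch of the same literal becoming two different concrete types:

```haskell
-- The literal stays polymorphic in n's signature...
n :: Num a => a
n = 1

-- ...and is only pinned down to a concrete type at the use site.
asInt :: Int
asInt = n

asDouble :: Double
asDouble = n
```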


Haskell’s type inference is built on an extended version of the Damas-Hindley-Milner type system. It will infer the most generally applicable (polymorphic) type that is still correct for your code. But while declaring types might not be necessary, it’s still considered good practice and is recommended.

Definitions

Ad-hoc polymorphism
Polymorphism that applies one or more typeclass constraints to a polymorphic type variable.
Higher order functions
These are mentioned in the chapter, but not expanded upon. A higher order function is a function that accepts a function as an argument or returns one when evaluated. map, which takes a function from a to b as its first parameter, is probably the poster child.
Parametricity
Parametricity states that the behavior of a function will be uniform across all concrete applications of the function.
Polymorphism
In Haskell: type variables that may refer to more than one concrete type. Usually manifested as ad-hoc or parametric polymorphism.
Type inference
The ability of some programming languages to infer types from terms (your code) without explicit type annotations.
Type variable
A way to refer to an unspecified type or set of types in type signatures.
Typeclass
A collection of functions that must be implemented for any type wishing to create an instance of said typeclass. Allows for ad-hoc polymorphism and abstractions.

Next time

Next up is the chapter on typeclasses, where we’ll finally get an explanation for how all these numeric types work together and look at some basic, easily derivable typeclasses. Stay tuned!