Excerpt from the 'Haskell Programming from First Principles' book cover, showing just the title, with an overlay saying 'Chapter 10' across the top right corner.

A whole chapter dedicated to folding lists? I thought it would be quick and easy, but it turns out there’s a lot to uncover when it comes to folding. In this post we’ll be looking at what a fold is and how folding left differs from folding right.

A brief explanation of folding

Before we can talk about folding left or right, it’s important to know what we’re talking about. In short, a fold is a way to reduce a data structure (lists in our case) to some other type by working through the data structure. If that sounds unclear, a more concrete example could be a function that sums all the numbers in a list or a function which tells you whether any element of a list satisfies a predicate.

Let’s look at foldr, short for ‘fold right’. We’ll talk about foldl (fold left) in a bit. The signature for foldr, as reported by GHCi (version 8.6.5), is as follows:
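```haskell
Prelude> :t foldr
foldr :: Foldable t => (a -> b -> b) -> b -> t a -> b
```

Spelling out the quantification and the kind of t explicitly, the same signature reads:

```haskell
foldr :: forall (t :: * -> *) a b. Foldable t => (a -> b -> b) -> b -> t a -> b
```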

We haven’t talked about kinds yet (that’s in the next chapter), but the t :: * -> * part means that t is a type constructor that is parameterized by another type. In our case, we can substitute a list for it:
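```haskell
foldr :: (a -> b -> b) -> b -> [a] -> b
```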

So foldr is a higher-order function that relies on two parametrically polymorphic types, a and b. It takes a function from a and b to b, an initial b value, a list (or Foldable) of a, and returns a b.

This snippet is taken from the Haskell Wiki article on folding (which I recommend you take a look at if you’re not quite familiar with it), and shows the equations defining foldr:
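```haskell
foldr f z []     = z
foldr f z (x:xs) = f x (foldr f z xs)
```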

In the above snippet f is the (a -> b -> b), z is the b, and the [a] is either destructured into [] or (x:xs).

Folds are catamorphisms: a way of deconstructing data, breaking a structure down and reducing it to some (potentially completely different) result.

Why direction matters: foldr vs foldl

Building on the basic definition of a fold, let’s explore the differences between folding left and folding right and what impact they have on your programs.

Let’s revisit the definition of foldr, but this time put foldl just below it. Again, the snippet is taken from the Haskell Wiki article on folding, but I’ve added the function types for clarity.
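```haskell
foldr :: (a -> b -> b) -> b -> [a] -> b
foldr f z []     = z
foldr f z (x:xs) = f x (foldr f z xs)

foldl :: (b -> a -> b) -> b -> [a] -> b
foldl f z []     = z
foldl f z (x:xs) = foldl f (f z x) xs
```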

These two functions look nearly identical; apart from the folding function taking its parameters in the opposite order, the main difference is where the recursive call happens, and yet they behave quite differently. Notice how foldr applies f to x and to the result of folding the rest of the list, while foldl recurses immediately, passing the (as yet unevaluated) application of f to z and x along as the new accumulator. This may seem unimportant, but it has big ramifications in a lazy language like Haskell.

With foldr, if the folding function doesn’t always need its second argument, you can potentially short-circuit the recursion long before reaching the end of the list. With foldl, because the recursion happens unconditionally, the entire spine has to be traversed before the folding function is ever evaluated. This means that if you have an infinite list, foldl will never finish, while foldr might.

Let’s see an example. If we want to use foldr to find whether any number in an infinite list is even, we could do it like this:
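```haskell
Prelude> foldr (\x b -> even x || b) False [1..]
True
```

(Here [1..], the infinite list counting up from one, stands in for any infinite list.)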

Even if the list is infinite, the function we pass to foldr doesn’t evaluate its second parameter (b) if even x evaluates to True. In the example above, the evaluation would work a little something like this:
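```
foldr (\x b -> even x || b) False [1..]
= even 1 || foldr (\x b -> even x || b) False [2..]  -- even 1 is False, so (||) needs its right operand
= even 2 || foldr (\x b -> even x || b) False [3..]  -- even 2 is True, so (||) ignores its right operand
= True
```

(This is an informal sketch of the reduction rather than GHC’s exact evaluation steps, but the short-circuiting is the point.)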

In this example, the function can exit after only two steps through the list because our function (\x b -> even x || b) doesn’t evaluate b unless even x is False. If we look at the same example with foldl, however, the result is quite different:
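```haskell
Prelude> foldl (\b x -> even x || b) False [1..]
-- hangs forever; you'll have to interrupt it with Ctrl-C
```

(Note that the folding function’s arguments flip for foldl: the accumulator b comes first.)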

Because foldl recurses unconditionally (foldl f (f z x) xs), the evaluation gets going, but it has to reach the end of the list before it can start evaluating the accumulated applications of the folding function, and because the list is infinite, it never gets there: the fold never terminates.

foldr, on the other hand, doesn’t recurse immediately, and flips back and forth between evaluating f and taking another step into the list. This means that, because of lazy evaluation, foldr can stop recursing through the list after any number of iterations, depending on the function.

Let’s relate this back to what we talked about last time and talk about spines. Folding happens in two stages: traversal and folding. Traversal is the stage in which the fold recurses over the spine. Folding is the evaluation of the folding function applied to the values. All folds recurse over the spine in the same direction; the difference lies in how the applications of the folding function associate, and in which direction the folding proceeds. This sounds tricky, but we can demonstrate it by parenthesizing:
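```haskell
foldr (+) 0 [1, 2, 3]
-- parenthesized, this associates as:
-- 1 + (2 + (3 + 0))

foldl (+) 0 [1, 2, 3]
-- parenthesized, this associates as:
-- ((0 + 1) + 2) + 3
```

(The list [1, 2, 3] is just a small example.)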

Notice how the order in which the expressions are evaluated is reversed, as is the end at which the zero element is applied. If we use a non-commutative operation instead of summing, we’ll see the last point more clearly:
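One way to do that is to fold with a function that builds up a string showing how the expression nests (this helper is similar to an example from the book):

```haskell
Prelude> foldr (\x y -> concat ["(", x, "+", y, ")"]) "0" (map show [1..3])
"(1+(2+(3+0)))"

Prelude> foldl (\x y -> concat ["(", x, "+", y, ")"]) "0" (map show [1..3])
"(((0+1)+2)+3)"
```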

Notice how the 0 ends up at opposite ends of the result because of the evaluation order.

Folding summary

In short, the book recommends always preferring foldr over foldl, and provides a helpful summary:

foldr
  1. The recursive invocation of foldr is an argument to the folding function. It doesn’t directly self-call as a tail call the way foldl does. In a way, you can think of foldr as alternating between applications of the folding function f and foldr, where the next invocation of foldr is dependent on f asking for it.
  2. Is right-associative.
  3. Works with infinite lists.
  4. Is a good default choice for transforming data structures.
foldl
  1. Self-calls (makes a tail call) through the list and only starts to produce values after reaching the very end.
  2. Is left-associative.
  3. Cannot be used with infinite lists.
  4. Is ‘nearly useless’ and should almost always be replaced with foldl'. We haven’t talked about foldl', but it works the same as foldl except it is strict, meaning it evaluates the accumulated value at each step as it traverses the list, rather than building up a pile of unevaluated expressions. We’ll talk more about this when we get to the chapter on efficient code, but there’s a small sketch of the difference just below.
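To make the difference concrete, here’s a minimal sketch (sumLazy and sumStrict are names made up for this example):

```haskell
import Data.List (foldl')

-- Both compute the same sum, but foldl builds up a long chain of
-- unevaluated (+) thunks and only collapses it at the very end, while
-- foldl' forces the accumulator at every step and runs in constant space.
sumLazy :: [Int] -> Int
sumLazy = foldl (+) 0

sumStrict :: [Int] -> Int
sumStrict = foldl' (+) 0
```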

Definitions

Fold
A higher-order function that takes an accumulating function and a recursive data structure and returns the value that results from applying the function to the elements in the structure.
Catamorphism
A generalization of folds to arbitrary data types. A morphism is a transformation, and cata means ‘down’ or ‘against’, so a catamorphism is a transformation to a ‘lower’ form.
Tail call

A tail call is the last call a function makes; its result becomes the result of the enclosing function. In the example below (assuming g and h are functions), the call to h is the tail call.
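```haskell
f x = h (g x)  -- the call to h is in tail position: whatever h returns is what f returns
```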

Tail recursion

A function whose tail calls are recursive invocations of itself. Looking at our definitions of foldr and foldl, we see that foldl is tail recursive, while foldr is not, as the latter alternates between calling f and foldr.

Summary

In addition to going deep on folding, this chapter also touches briefly on the scan functions scanl and scanr. In the interest of time I won’t cover them here (unless we need them later), but if you’re interested you can go check out Hackage for more information.

Next chapter is on Algebraic Data Types, and it’s a big one, so stay tuned.


Excerpt from the 'Haskell Programming from First Principles' book cover, showing just the title, with an overlay saying 'Chapter 9' across the top right corner.

Ah, another data type deep dive! This time we’re looking at lists and how they are created and evaluated. Not much more of an introduction needed here; let’s just dive right in!

List creation

There are a number of ways to construct lists in Haskell. This chapter covers using ranges and list comprehensions, both of which are quite neat and something I get deep personal satisfaction out of (hey, I’m not weird!).

We covered ranges when looking at the Enum typeclass in Chapter 6, so let’s just do a quick refresher: In Haskell you can create lists out of anything that implements the Enum typeclass (such as Int and Char and Bool) by using the syntactic range sugar:
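```haskell
Prelude> [1 .. 5]
[1,2,3,4,5]
Prelude> ['a' .. 'e']
"abcde"
Prelude> [False ..]
[False,True]
Prelude> [2, 4 .. 10]
[2,4,6,8,10]
```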

Neat, huh? Just keep in mind that with the default step, the range must be in increasing order: if you try something like [5 .. 1], you’ll get an empty list. (You’d need an explicit step, like [5, 4 .. 1], to count downwards.)

In addition to ranges, there’s also this thing called list comprehension. List comprehensions are something I normally associate with Python, but I think Haskell’s version is much cooler. The syntax looks a lot like set comprehension does in mathematics (which is also where the concept comes from). A list comprehension requires at least one list (called a generator) that provides an input for the comprehension. Here’s a simple comprehension:
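```haskell
Prelude> [x^2 | x <- [1 .. 5]]
[1,4,9,16,25]
```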

The first bit before the pipe, x^2, is the expression that goes into every cell. The variable x is assigned from the generator [1 .. 5]. Indeed, what this list comprehension does is simply to square each number of the generator. The same result could be achieved by a simple map: map (^2) [1 .. 5].

Where list comprehensions really shine, however, is when you start to add more variables and optional filters. When adding more variables, the comprehension will create a list with every possible combination of those values. For instance: say you want to create a list of all the positions in a 3x3 grid around an origin ((-1, -1) to (1, 1)). Using list comprehensions, that’s trivial:
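```haskell
Prelude> [(x, y) | x <- [-1 .. 1], y <- [-1 .. 1]]
[(-1,-1),(-1,0),(-1,1),(0,-1),(0,0),(0,1),(1,-1),(1,0),(1,1)]
```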

In addition to variables, you can also add filters after the variables. Say you want the same list as above, but you don’t want to include the origin ((0, 0)):
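```haskell
Prelude> [(x, y) | x <- [-1 .. 1], y <- [-1 .. 1], (x, y) /= (0, 0)]
[(-1,-1),(-1,0),(-1,1),(0,-1),(0,1),(1,-1),(1,0),(1,1)]
```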

And not just one filter; you can add multiple! What if we want a bigger list, but only want the tiles where the Manhattan distance to the origin is divisible by 7 (and excluding the origin itself)?
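For instance, on a 21 by 21 grid (the grid size here is an arbitrary choice):

```haskell
Prelude> take 5 [(x, y) | x <- [-10 .. 10], y <- [-10 .. 10], (abs x + abs y) `mod` 7 == 0, (x, y) /= (0, 0)]
[(-10,-4),(-10,4),(-9,-5),(-9,5),(-8,-6)]
```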

It may be a farfetched example, but I think it adequately shows just how cool list comprehensions can be in Haskell.

Spines and non-strict evaluation

To talk about list evaluation, we need to talk about how lists are represented. To talk about how lists are represented, we need to talk about spines. But let’s take a step back. It’s not all that scary.

The list data type is defined as such:
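```haskell
data [] a = [] | a : [a]
-- (this particular definition is built into the language, so you can't
-- compile it yourself, but it is how the list type is defined)
```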

It’s a recursive data structure. It’s either an empty list, or an element prepended to another list. Syntactically, when we write lists or print them, they’re usually presented like this: [1, 2, 3], but can also be thought of as 1 : 2 : 3 : []—that is 1 before 2 before 3 before the empty list.

The : operator is known as cons. The list construction in the last paragraph can be thought of as a series of cons cells (a value consed onto a list). It could also be written like this, which may make the relation to the list data constructor clearer: 1 : (2 : (3 : [])).

One problem with this representation, though, is that it makes it seem like 1 exists ‘outside’ of the list, when it is actually contained by it (and by its cons cell). Because of how lists are evaluated in Haskell, this containment means we can evaluate cons cells without evaluating their contents.

Let’s talk about spines. You see, the list above can also be represented—conceptually—like this:
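```
  :
 / \
1   :
   / \
  2   :
     / \
    3  []
```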

In this representation, the cons operators line up to form what is known as the spine, which is a fancy word for the connective structure that makes up a data structure such as a list or a tree. In the case of lists, it’s just a series of cons operators.

Crucially, spines are evaluated independently of values. What does this mean? Well, functions that only require evaluating the spine and not the values, such as length, can walk down a list without evaluating the contents. So, returning to our list from before, if we were to apply the length function to it (and the values hadn’t already been evaluated), you could think of it like this, where the underscores represent values that haven’t been evaluated.
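```
  :
 / \
_   :
   / \
  _   :
     / \
    _  []
```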

Another way to get this point across is to look at what happens when we add bottom values to a list. Run this in the REPL to try it out.
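A session demonstrating it could look something like this:

```haskell
Prelude> let xs = [1, undefined, 3]
Prelude> length xs
3
Prelude> take 1 xs
[1]
Prelude> take 2 xs
[1,*** Exception: Prelude.undefined
```

(GHCi also prints a call stack after the exception; it’s trimmed here.)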

Taking the length of the list works just fine because we don’t need to evaluate the items in the list. Taking the first element is no problem because it’s a regular value and the rest of the list has not been evaluated. When we take two values, however, this forces the evaluation of the second item in the list, undefined, which causes the program to crash.


There is loads more to be said about list evaluation and the impacts it has on what we can and can’t do in code, but these are the most important things covered in this chapter.

Chapter definitions

Product type

A product type is a type that contains several other types at the same time. The canonical example is a tuple: you can have a tuple of a Bool and an Int, and every value of that type will contain both a Bool and an Int. The reason it’s called a product type is that the number of possible values it has is the product of the number of possible values of each component type.

Say you have a data type: data Direction = Up | Down | Left | Right. If you have a (Bool, Direction) tuple, you now have 8 (4 ⋅ 2) different possible values of that tuple.

Sum type

A sum type is a type whose values come from exactly one of a fixed set of alternatives at any given time. Relating this back to the example above with Bool and Int: if you have a data type that is either a Bool or an Int (such as Either Bool Int), then that’s a sum type. The number of possible values is the sum of the number of possible values of each alternative.

Using the Direction type from above, Either Bool Direction has 6 (4 + 2) possible values.
Cons
The : operator. Used to prepend a value onto a list. Can also be used as a verb (e.g. consing a value onto a list).
Cons cell
The data constructor (:) applied to a value of type a and a list of type [a]; it is a product of those two types, as seen in the list data type definition.
Spine

A structure that strings a collection of values together, the scaffolding of a data structure. For the list datatype, it’s formed by the recursive nesting of cons cells.

Other topics

In addition to what we have covered here, the chapter also contains more information about pattern matching on lists, and some common list operations: map, filter, and zip. However, to keep things nice and brief, I’ve decided to leave them out for now. For more information on those topics, please consult your search engine of choice or nearest message board.


Goodbye 2019, Hello 2020

Wrap-up and new goals

One year gone, another one about to begin. Bring it on.

It’s the end of the year and there’s round-ups and predictions everywhere, so I think it’s an appropriate time to take a step back and look at what I’ve achieved this year and what I have in mind for next year.

2019

Let’s move in a chronological fashion. In 2019, my list of goals was just a slightly-longer-than-usual Slack message to my friends. They were pretty vague, but still hold some value as a set of overarching goals.

Goals

Containers

Before 2019, I’d hardly ever touched a container. I’d heard lots about them on various podcasts and was really intrigued. I wanted to ‘really familiarize myself with the whole thing’ (how’s that for a vague goal?).

A year later, do I think I’ve succeeded? Yes. The goal is stated in a very unclear fashion and can be taken to mean a lot of things, but since then I’ve been working closely with containers and Kubernetes pretty much the entire time. Yes, there are still some things that are unclear to me, but overall, I think I understand containers well enough to have cleared that goal.

NixOS

I installed NixOS on my personal computer pretty much exactly a year ago. It was (and still is) my first real foray into the Linux world. I’d used Ubuntu for some university assignments previously, but that’s it. It took me a good few weeks to get my external monitors up and working, but these days, most everything is running pretty smoothly (except for that NVidia GPU I’ve got …).

I said that I’d ‘just got it set up over Christmas, but there is still a lot left to be done and a lot of things to discover.’ A year later, I’m fairly comfortable in NixOS, I use Nix for a lot of work related stuff, and I love it. I’d say I’ve finished the NixOS setup and discovered much, though there is still a lot left to be done. What I have achieved is: I have a stable operating system that works as I expect and with a window manager / desktop environment that I enjoy. I’m happy with the OS state of things for now. Achieved.

Give a talk

Said at the time: ‘This is something I’ve been wanting to do for a while, and I think this year is the right time, once I’m out of uni and have a bit more time (and experience) on my hands.’

This is probably the one goal I set for myself that is the easiest to measure. I spoke internally at my company about Algebraic Data Types, I held a workshop (still counts) on re-implementing some basic data types and creating opaque types at Oslo Elm Meetup, and I held a talk about the new async/.await features in Rust at Rust Oslo. Done.

Non-goals

In addition to the above goals, I also achieved a few other things that I’d like to highlight in this post. These are items that I didn’t put on my to-do list, but that I consider significant enough to mention.

Kubernetes/OpenShift
I started working in a team where OpenShift is an important part of the workflow, and interacting with it became a natural part of my day-to-day work.
Started blogging
Took me a while, but I finally got going. It’s been a lot of work, but very rewarding.
Became a co-organizer of the Rust Oslo Meetup group
I’m excited to see what we can do to further engage the Rust community in the vicinity and what we’ll be doing in the coming year.

2020

Looking forward to 2020, what do I want to achieve? What skills do I want to develop? After some thought, I’ve come to the conclusion that my ‘theme’ for 2020’s goals is mastery. I’ve found tools that I really like and that align with my way of thinking. However, I don’t feel like I’ve fully mastered any of them just yet. Most of my goals will revolve around improving my skill with tools that I already know and love.

Goals

Here’s a loose definition of my goals for the coming year.

General goals

These are the general goals that I’m setting for myself. These are the ones I will actively be pursuing and working towards.

More infrastructure work
Building on the ‘containers’ goal of this past year, I want to keep working on infrastructure: Kubernetes, CI/CD pipelines, containers, the whole deal. I went through a Red Hat training course this year, working towards a certification as a ‘Specialist in OpenShift Application Development’. I failed the exam, but learned a lot in the process. Let’s see if I can use that to pass! Apart from that one specific goal, this point is more about making myself familiar with the tools and getting really comfortable with them. You know, mastering them.
More Nix
It’s been a year of NixOS, and it’s been great. Now I want to get really comfortable with the difference between build.nix, default.nix, shell.nix, how nix-build works, etc. Specifically, I’d like to be comfortable with setting up a Haskell dev env with Nix and Cabal.
More Emacs
Emacs has been my go-to editor for a year and a half now, and I still get stuck in weird states sometimes. This coming year, I want to get more comfortable with it on a deeper level. Step one: Read the built-in Emacs manual.
More community engagement
I gave some talks and workshops, both internally and at local meetups this past year. I want to do more of that in 2020. Also: apply for more conference talks. I applied to a few this past year, but didn’t get any of them.
Programming Language Theory / Compilers
I’ve had an interest in type systems and PLT for a while. While writing this post, I realized I should do something about it and ordered Types and Programming Languages by Benjamin C. Pierce. I don’t know enough about it to set any clear goals just yet, however.

Languages

While I’m not going into this coming year with any clear language goals in mind, I’d like this to serve as a recording of what languages have me the most curious at the moment. I’m not making any clear commitments to learning any of these, but I think it’s important to keep an open mind and keep exploring new languages and new concepts. As Scott Wlaschin talks about in his talk Four Languages from Forty Years Ago, you should look for languages that expand your mind and help you think about problems in a new way. He awards languages that achieve this a ‘Galaxy Brain Seal of Approval’. There is an Alan Perlis quote in the slides saying that ‘A language that doesn’t affect the way you think about programming, is not worth knowing’.

Haskell
Yeah, I’ve been doing a bit of this, but not enough. I’ve got the basics down, but I need to put it into practice. I want to build something; a simple Web API or command line interface should be enough.
Lisp
Doesn’t matter whether it’s common lisp, scheme, (typed) racket, or clojure: I want to get comfortable with lisp. If nothing else, at least it’ll help me configure Emacs.
Elixir
This is the wildcard on the list. I have not really looked much at it at all, but I hear great things. Should offer some nice Galaxy Brain moments.

SMART goals

The goals listed above are generally quite vague, and it’d be hard to say for sure whether I’ve achieved them or not. To make them more actionable, let’s create a set of SMART goals that I can tick. What are SMART goals? Of course, Wikipedia has an article on it, but the short version is that SMART is an acronym that stands for ‘Specific, Measurable, Achievable, Relevant, Time-bound’. Organized by topic, this is my list:

Programming Language Theory
I don’t have enough insight into this to make any clear goals for it, but I’ve had a brief look at the Types and Programming Languages book. It’s about 650 pages long. I’d say I should get through it by July at the latest. In short: read Types and Programming Languages by July.
Community engagement
  • Give at least three talks/workshops at community events.
  • Apply as a speaker to at least three conferences.
Emacs
By February 1st, I want to have gotten through the Emacs manual. At this point, I may decide to try and (at least start to) configure Emacs from the ground up (rather than relying on Spacemacs), but that is not a requirement.
Nix
Write about what the various .nix files are and what their intended use is. Should discuss at least build.nix, default.nix, and shell.nix.
OpenShift
Pass the Red Hat Certified Specialist in OpenShift Application Development exam.
Containers / Infrastructure / Haskell / Nix

This is a set of goals that applies to a broad range of topics. I want to manage a Kubernetes cluster somewhere and create a Haskell API that serves requests. The Haskell project should use Nix for as much as practically possible. What sort of data the API serves doesn’t matter.

Deadlines:

April 1st
Have Kubernetes cluster up and running.
June 1st
Have the Haskell API up and running.

And that’s what I’m planning to look into in the coming year. As always: plans change, life happens, and things inevitably don’t go as expected, but it’s nice to have a little something to look back on, at least. What do you think? Do you have any clear goals?
