This post is a consolidation of my thoughts following discussions with Neil Sculthorpe and Paolo Capriotti on free monads. To understand this post, you will need some understanding of what functors, monoids, applicative functors and monads are in the context of Haskell.

Recently I’ve been thinking a bit about free constructions in computer science, and associativity laws. A free construction is essentially a generic way of creating a certain kind of structure. For example, a free monoid over a given set X is the monoid created by giving X the minimal amount of structure so that it is a monoid; this turns out to be lists of elements of X.

The free monoid can be defined in at least two ways in Haskell. Firstly, we have the usual Haskell definition of lists:

`data [a] = [] | a : [a]`

Secondly, we have the alternative definition of so-called *snoc-lists*, which differ from lists by being defined by appending rather than prepending elements:

`data ListL a = Nil | Snoc (ListL a) a`

We can define monoid instances for both of these types:

```
instance Monoid [a] where
  mempty = []
  mappend []     ys = ys
  mappend (x:xs) ys = x : mappend xs ys
```

```
instance Monoid (ListL a) where
  mempty = Nil
  mappend xs Nil         = xs
  mappend xs (Snoc ys y) = Snoc (mappend xs ys) y
```

Essentially the trick with both definitions is as follows: a monoid requires two things, an identity and a multiplication operation. The free monoid adds the identity, and then provides a multiplication operation that takes one element of the original type and one of the free monoid type. We can then extend this operation to the general case where both operands are from the free monoid in a fairly natural way. The only difference between these definitions is which operand of the multiplication is of the original type. (I don’t know how to make this idea rigorous, but I think it applies to a large number of monoidal structures.)

It’s fairly straightforward to show that these definitions are isomorphic [1] (in the sense that there is an isomorphism respecting the monoid operations), which I leave as an exercise for the particularly bored reader. However, while these definitions are isomorphic, they are not operationally equivalent. It turns out that the first of these is more efficient for right-associated *mappend*s, while the second is more efficient for *left*-associated *mappend*s. To see why this is the case, consider the associativity rule:
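For the particularly bored reader, here is one way the isomorphism might go; `toSnoc` and `fromSnoc` are names of my own choosing, and `Eq` is derived only so the round trips can be checked:

```
data ListL a = Nil | Snoc (ListL a) a
  deriving (Eq, Show)

-- Convert a cons-list to a snoc-list; foldl matches the
-- left-nested structure of ListL.
toSnoc :: [a] -> ListL a
toSnoc = foldl Snoc Nil

-- Convert back, accumulating elements from the right end.
fromSnoc :: ListL a -> [a]
fromSnoc = go []
  where
    go acc Nil         = acc
    go acc (Snoc xs x) = go (x : acc) xs
```

Showing that these maps also respect *mempty* and *mappend* is the remaining part of the exercise.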

`mappend (mappend xs ys) zs == mappend xs (mappend ys zs)`

If the lists here are the usual Haskell lists, then the left-hand side requires traversing *xs* twice, while the right-hand side traverses it only once. This is because *mappend* is defined by recursion on its first argument. In the case of snoc-lists, the left-hand side is more efficient.

There is another way to look at this. Consider the structure of an example Haskell list:

`xs = x1 : (x2 : (x3 : (x4 : [])))`

Compare this to the structure of the equivalent snoc-list:

`xs = Snoc (Snoc (Snoc (Snoc Nil x1) x2) x3) x4`

The first is a structure that associates to the right, while the second associates to the left. When *mappend*s follow the natural structure of the monoid, they will be more efficient.
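The same point can be seen with ordinary cons-lists by nesting appends in the two directions; `rightNested` and `leftNested` are names of my own choosing:

```
-- Right-nested appends: xs1 ++ (xs2 ++ (xs3 ++ [])); each list is
-- traversed once, so the total cost is linear.
rightNested :: [[a]] -> [a]
rightNested = foldr (++) []

-- Left-nested appends: ((([] ++ xs1) ++ xs2) ++ xs3); earlier results
-- are re-traversed by every later append, so the cost is quadratic.
leftNested :: [[a]] -> [a]
leftNested = foldl (++) []
```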

This pattern of having two possible definitions is not limited to free monoids. Paolo Capriotti and Ambrus Kaposi of my lab have a paper on free applicative functors [2] that gives two definitions of the free applicative functor: one naturally associates to the right, the other to the left. Here is the right-associating one:

```
data FreeA f a where
  Pure :: a -> FreeA f a
  Ap   :: f (a -> b) -> FreeA f a -> FreeA f b
```

Once again, we see that the monoid-like operation has one operand using the original functor and one using the free applicative functor. The paper also shows an isomorphic left-associated version, and once again, there are some efficiency implications.
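For reference, here is a sketch of the instances for the right-associating definition, along the lines of the paper; `retractA` is a helper of my own, included only to check the instances against an existing applicative:

```
{-# LANGUAGE GADTs #-}

data FreeA f a where
  Pure :: a -> FreeA f a
  Ap   :: f (a -> b) -> FreeA f a -> FreeA f b

instance Functor f => Functor (FreeA f) where
  fmap g (Pure a) = Pure (g a)
  fmap g (Ap h x) = Ap (fmap (g .) h) x

instance Functor f => Applicative (FreeA f) where
  pure = Pure
  Pure g <*> x = fmap g x
  -- Pair up the arguments and uncurry the effectful function;
  -- the recursion is on the structurally smaller y.
  Ap h y <*> x = Ap (fmap uncurry h) ((,) <$> y <*> x)

-- Interpret back into any applicative (my own checking helper).
retractA :: Applicative f => FreeA f a -> f a
retractA (Pure a) = pure a
retractA (Ap h x) = h <*> retractA x
```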

What I would like to do is to extend this idea to monads. We can do this rightwards as follows:

```
data FreeMR f a where
  ReturnR :: a -> FreeMR f a
  BindR   :: f a -> (a -> FreeMR f b) -> FreeMR f b
```
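A sketch of the monad instance for this definition; `retractR`, which interprets into an existing monad, is a checking helper of my own. Note how `bindR` pushes the new continuation under the existing one: a right-nested chain of binds is consumed in a single pass, while a left-nested chain rebuilds the continuation at every step:

```
{-# LANGUAGE GADTs #-}

data FreeMR f a where
  ReturnR :: a -> FreeMR f a
  BindR   :: f a -> (a -> FreeMR f b) -> FreeMR f b

bindR :: FreeMR f a -> (a -> FreeMR f b) -> FreeMR f b
bindR (ReturnR a) k = k a
bindR (BindR m h) k = BindR m (\x -> h x `bindR` k)

instance Functor (FreeMR f) where
  fmap g m = m `bindR` (ReturnR . g)

instance Applicative (FreeMR f) where
  pure = ReturnR
  mf <*> mx = mf `bindR` \g -> fmap g mx

instance Monad (FreeMR f) where
  (>>=) = bindR

-- Interpret into an existing monad (my own checking helper).
retractR :: Monad m => FreeMR m a -> m a
retractR (ReturnR a) = return a
retractR (BindR m k) = m >>= retractR . k
```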

Using the Yoneda lemma, this definition turns out to be equivalent to the usual definition of the free monad, provided *f* is a functor:

`data FreeMR f a = ReturnR a | RollR (f (FreeMR f a))`
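Here is a sketch of the two directions of that equivalence; I rename the second type to `Free` to avoid the clash, and `retractF` is a checking helper of my own:

```
{-# LANGUAGE GADTs #-}

data FreeMR f a where
  ReturnR :: a -> FreeMR f a
  BindR   :: f a -> (a -> FreeMR f b) -> FreeMR f b

-- The usual free monad, called Free here to avoid a name clash.
data Free f a = Return a | Roll (f (Free f a))

-- This direction needs Functor f: the continuation is absorbed
-- into the layer of f, as in the Yoneda lemma.
toFree :: Functor f => FreeMR f a -> Free f a
toFree (ReturnR a) = Return a
toFree (BindR m k) = Roll (fmap (toFree . k) m)

-- The other direction re-introduces a trivial continuation.
fromFree :: Free f a -> FreeMR f a
fromFree (Return a) = ReturnR a
fromFree (Roll m)   = BindR m fromFree

-- Interpret into an existing monad (my own checking helper).
retractF :: Monad m => Free m a -> m a
retractF (Return a) = return a
retractF (Roll m)   = m >>= retractF
```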

The result is a structure that is more efficient for right-associated binds than left-associated ones.

However, when we attempt to do the same thing leftwards, we hit a problem. The obvious thing to try is:

```
data FreeML f a where
  ReturnL :: a -> FreeML f a
  BindL   :: FreeML f a -> (a -> f b) -> FreeML f b
```

Yoneda-isomorphically:

`data FreeML f a = ReturnL a | RollL (FreeML f (f a))`

But it turns out this is not isomorphic to the right-associating version. For example, consider the functor

`data Dup a = Dup a a`

With *FreeMR*, we obtain leaf-labelled binary trees of *a*. With *FreeML*, however, we obtain so-called *perfect* trees — binary trees in which all leaves are at the same depth — which is clearly not equivalent. Thus, it seems that there is no obvious way to make a free monad for which left-associated binds are more efficient than right-associated binds. However, if I were to find such a thing, it would be very useful for my research. Hence the title of this post:

*What We Need is a Free Monad of the Left!*
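To make the *Dup* example concrete, here is a small sketch; `lopsided`, `perfect` and the `leaves` functions are names of my own choosing:

```
{-# LANGUAGE GADTs #-}

data Dup a = Dup a a

data FreeMR f a where
  ReturnR :: a -> FreeMR f a
  BindR   :: f a -> (a -> FreeMR f b) -> FreeMR f b

data FreeML f a = ReturnL a | RollL (FreeML f (f a))

-- In FreeMR Dup, the continuation may build a different subtree
-- for each branch, so lopsided trees are expressible.
lopsided :: FreeMR Dup Int
lopsided = BindR (Dup 0 1) k
  where
    k 0 = ReturnR 0                -- a leaf at depth 1
    k _ = BindR (Dup 2 3) ReturnR  -- two leaves at depth 2

leavesR :: FreeMR Dup a -> [a]
leavesR (ReturnR a)         = [a]
leavesR (BindR (Dup x y) k) = leavesR (k x) ++ leavesR (k y)

-- In FreeML Dup, each RollL duplicates every leaf at once, so all
-- leaves sit at the same depth: only perfect trees are expressible.
perfect :: FreeML Dup Int
perfect = RollL (RollL (ReturnL (Dup (Dup 0 1) (Dup 2 3))))

leavesL :: FreeML Dup a -> [a]
leavesL (ReturnL a) = [a]
leavesL (RollL t)   = concatMap (\(Dup x y) -> [x, y]) (leavesL t)
```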

[1] When I make statements like this, I mean in some hypothetical total fragment of Haskell.

[2] http://paolocapriotti.com/blog/2013/04/03/free-applicative-functors/

I think the left-associated form is more restricted because you have to commit to the structure of the computation in advance. e.g. the number of BindL constructors is predetermined. Whereas in the right-associated form, the structure of the computation (e.g. the number of BindR constructors) depends on the value of the “a” argument.
