Computers blindly follow orders, and at some fundamental level, programming is about giving computers orders to follow. The expectation is that when a computer carries them out, it will achieve a particular goal a programmer had in mind. Coming up with these orders, however, is not easy. While computers are thorough and efficient, they are also rather limited in the vocabulary of instructions they can understand. They mostly know how to count and store data, which means our orders, as complex as they may be, can only be conveyed in those terms. Even deceptively simple programs can involve thousands or millions of these instructions, each of which needs to be correct and executed at the right time. And if we consider that computers won’t judge whether any of these instructions, some of which could have important consequences on our lives, are right or wrong, it should be easy to appreciate why taking appropriate measures to prevent undesirable outcomes is the logical thing to do. In order to accomplish this, however, we need a different perspective.
Perhaps surprisingly, these computer instructions are a bad tool for reasoning about computer programs. Yet, it is only through reasoning that we can be confident about the correctness of our solutions, and more importantly, about the correctness of our problems. At the other end of the programming spectrum, far away from computers, we have our imagination. Programming is the conversation that happens while these two ends struggle to understand each other. We will explore this process and how to improve it.
This book doesn’t assume any previous programming knowledge, yet both newcomers and experienced programmers are welcome. Those approaching software development for the first time will discover a fascinating field while acquiring a good understanding of the principles and tools involved, whereas the experienced shall have their conceptions challenged by a different type of programming. We will use the Haskell programming language as our main vehicle, and while this book is about programming rather than solely about Haskell, it will be very thorough in its treatment of the language.
But please, be patient. We prioritize the kind of understanding that stays with us for a very long time, the one that teaches questions. This means the topics in this book are presented in a rather unorthodox and entertaining fashion where immediate practicality never seems to be a goal. Don’t worry, we will get there. And when we do, we won’t need to look back.
We can’t really talk about programming without defining what a program is, which we can’t do unless we understand exactly what it is that we are trying to solve. For this, we first need to sample the world for suitable problems to tackle, a seemingly intractable endeavour unless we understand the limits and purpose of the field, for which we most certainly need to be able to talk about it. This is the very nature of new ideas, and why appreciating them is often so hard. We need to break this loop somehow. We need to acknowledge that there might be a problem to be tackled even if we can’t readily recognize it at first, and see how the proposed idea tries to approach it.
Breaking this loop is not a particularly hard thing to do in our imagination. A little dare, that’s all we need. There, boundaries and paradoxes are gone. We can fantasize allegedly impossible things, dwell on problems that they say can’t be solved. We can build, we can explore every future. To indulge in thought that challenges our understanding and that of others is not only interesting, it is a requirement. A curious and challenging mind is what it takes to make this journey.
But why would we do that? Why would we care? Well, look around. What do you see? There is a computer in our phone, there is another one in our car. There is one for talking to our families and another one for knowing how much money we have. There are computers that spy, there are computers that judge. There are computers that help businesses grow, and others that take jobs away. We even have computers saying whether we matter at all. Computers are our civilization, we have put them up there quite prominently. We care about programming because it is our voice, and only those who know how to speak stand a chance. It’s our power, freedom and responsibility to decide where to go next. We care because civilization is ours to build, ours to change, ours to care for.
Let’s picture ourselves in a not so distant past looking at our landline phone, looking at the elevator in our building, looking at our clock. Besides their usefulness, these machines have in common a single purpose. Not a shared single purpose, but a single, different one for each of them. The phone phones, the elevator elevates, and that is all they will ever do. But these machines, however disparate, at some fundamental level are essentially just interacting with their environment. Somehow they get inputs from it, and somehow they observably react according to pre-established logical decisions. We press a key and a phone rings. Another key and we are on a different floor. Yet, we rarely see these machines and their complex and expensive electronic circuitry repurposed to solve a different problem when necessary.
What happens is that the wires and electronics in these machines are connected in such a way that their accomplishments are mostly irrelevant to the wider electronic community. Like those steampunk contraptions, they each represent a solution to a finite understanding of one particular problem, and not more. But, is sending electricity to the engine that lifts the elevator really that different from sending it to the one that moves the hand of the clock? No, it is not. The logical decisions however, of when to do what and how, are. We say that the purpose of these machines is hardwired into their very existence, and it cannot be altered without physically modifying the machine itself. And yes, perhaps it is possible to adjust the time in a clock, but subtle variations like that one are considered and allowed upfront by the grand creators of the machine, who leave enough cranks, switches and wires inside that we can alter this behaviour somehow within a known limited set of possibilities.
But what about the unknown? If interacting with the environment is the essence of these machines, and moreover, if they are built with many of the same electronic components, isn’t there some underlying foundation that would allow them to do more than what their upbringing dictates? Luckily for us, and for the steampunk fantasies that can now characterize themselves as nostalgic, there is. And this is what computers, otherwise known as general purpose machines, are all about.
What differentiates computers from other machines is that their purpose, instead of being forever imprinted in their circuitry by the manufacturer, is programmed into them afterwards using some programming language such as that Haskell thing we mentioned before. The general purpose machine is a canvas and a basic set of instructions that we can combine in order to achieve a purpose of our own choosing while interacting with the outside environment through peripherals such as keyboards, microphones or display screens. A computer program, in other words, is something that takes into account some inputs from the environment where it runs, and after doing some calculations, provides some kind of output in return. We can introduce some notation to represent this idea.
program :: input -> output
Prose is an excellent medium for conveying ideas to other human beings. However, prose can be ambiguous, it wastes words talking about non-essential things, and it allows for the opportunity to express the same idea in a myriad of different ways. These features, while appealing to the literary explorer, just get in the programmer’s way. Computers can’t tell whether we ordered them to do the right thing, so we can’t afford to have ambiguities in our programs. We need to be precise lest we accomplish the wrong things. Irrelevant stuff may entertain us as great small talk material, but computers just don’t care, so we avoid it altogether. This is why a precise notation like the one above becomes necessary.
Perhaps surprisingly, our notation, also known as syntax, is part of a valid program written in the Haskell programming language, a language that we use to program computers, to tell them what to do. This is why, among many other fascinating reasons we’ll come to cherish, this book uses Haskell as its main vehicle. The distance between our thoughts and our Haskell programs is just too small to ignore. We can read program :: input → output out loud as “a program from inputs into outputs”, and as far as Haskell is concerned, it would be alright.

Computers themselves don’t understand this Haskell notation directly, though. We use this notation for writing our programs because it brings us clarity, but computers don’t care about any of that. For this, we have compilers. Compilers are programs that take as input the description of our program, called source code, here specified using the Haskell programming language, and convert it to an executable sequence of instructions that computers can actually understand and obey. That is, we could say a compiler is itself a program :: source code → executable, if we wanted to reuse the same notation we used before.
Compiling is a complex, time-consuming and error-prone operation, so it’s usually done just once, and the resulting executable code is reused as many times as necessary. When we install a new program on our computer, for example, chances are we are installing its executable version, not its source code. Quite likely the author already compiled the program for us, so that we can start using our program right away rather than spend our time figuring out how to compile it ourselves first. This is no different from what happens when we buy a new car, say. The car has already been assembled for us beforehand, and from then on we can drive it whenever we want without having to re-assemble it.
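To make this tangible, here is a minimal sketch, assuming the GHC Haskell compiler is installed. The file name and the greeting are invented for illustration, and the details of main will only make sense much later in the book.

-- Main.hs, the source code of a tiny program.
main :: IO ()
main = putStrLn "hello"

-- Compiling it once produces a reusable executable:
--   ghc Main.hs -o hello
-- From then on, running ./hello requires no further compilation.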
What are our input and output, though, concretely? We don’t yet know exactly, but it turns out it doesn’t really matter for our very first program, the simplest possible one. Can we imagine what such a program could possibly do? A program that, given some input, any input, will produce some output in return? If we think carefully about this for a minute, we will realize that there’s not much it could do. If we were told that the input was an apple, the output of the program could be apple slices or apple juice. If we knew that the input was a number, then the output could be that number times three. Or times five perhaps. But what if we were told nothing about the input? What could we ever do with it? Essentially, we wouldn’t be able to do anything. We just wouldn’t know what type of input we are dealing with, so the only sensible thing left for us to do would be nothing. We give up and return the input to whomever gave it to us. That is, our input becomes our output as well.
program :: input -> input
Our program, the simplest possible one, can now be described as simply returning the input that is provided to it. But of course, we can also look at this from the opposite perspective, and say that the output of our program is also its input.
program :: output -> output
Celebrate if you can’t tell the difference between these two descriptions of our program, because they are effectively the same. When we said that our input becomes our output, we really meant it. What is important to understand here is that whether we say input or output doesn’t matter at all. What matters is that both of these words appearing around the arrow symbol →, respectively describing the type of input and the type of output, are the same. This perfectly conveys the idea that anything we provide to this program as input will be returned to us. We give it an orange, we get an orange back. We give it a horse, we get a horse back. In fact, we can push this to the extreme and stop using the words input or output altogether, seeing how the notation we use already conveys the idea that the thing to the left of the arrow symbol is the input type, and the thing to its right the output type. That is, we know where the input is, we know where the output is, we know they are the same type of thing, and we don’t care about anything else. So we’ll just name it x, for mystery.
program :: x -> x
This seemingly useless program is one of the very few important enough to deserve a name of its own. This program is called identity, and it is a fundamental building block of both programming and mathematics. It might be easier to appreciate this name from a metaphysical point of view. Who am I? It is me. Indeed, philosophers rejoice, our program addresses the existential question about the identity of beings at last, even if in a tautological manner. In Haskell we call this program id, short for identity.
id :: x -> x
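The type alone nearly writes the program for us. Here is a minimal sketch of its definition, purely illustrative since Haskell’s Prelude already ships an id of its own:

id :: x -> x
id = \x -> x

Whatever x we receive, that same x is what we give back.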
Actually, we could have named the identity program anything else. Nothing mandates the name id. As we saw before when we arbitrarily named our mystery type x, names don’t really matter to the computer. However, they do matter to us humans, so we choose them according to their purpose.

There is one last thing we should know about naming. Usually, we don’t really call programs like these programs, we call them functions. The term program does exist, but generally we use it to refer to functions complex enough that they can be seen as final consumer products, or to things that do something yet don’t exactly fit the shape of a function. Text editors, web browsers or music players are examples of what we would often call a program. In other words, we can say we just learned about id, the identity function. Generally speaking, though, we use these terms more or less interchangeably, as we have been doing so far.
We also have useful programs. They are less interesting, much more complicated, but deserve some attention too. We were able to discover the identity function because we didn’t know much about the types of things we were dealing with, and this ignorance gave us the freedom to unboundedly reason about our problem and derive truth and understanding from it. What would happen if we knew something about our types, though? Let’s pick a relatively simple and familiar problem to explore: The addition of natural numbers.
As a quick reminder, natural numbers are the zero and all the integer numbers greater than it. For example, 12, 0 and 894298731 are all natural numbers, but 4.68, π or -58 are not.
We said that our identity function, when given something as input, will return that same thing as output. This is supposed to be true for all possible types of inputs and outputs, which of course include the natural numbers. In Haskell, we can refer to this type of numbers as Natural. That is, if for example we wanted to use the identity function with a natural number like 3, then our id function would take the following type:
id :: Natural -> Natural
That is, this function takes a Natural number as input, and returns a Natural number as output. In other words, the type of this function is Natural → Natural. Yes, just like numbers, functions have their own types as well, and they are easy to recognize because of the → that appears in them, with their input type appearing to the left of this arrow, and their output type to its right, as we’ve seen multiple times.
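If we have GHCi, the interactive environment that ships with the GHC Haskell compiler, at hand, we can ask for the type of any function with the :type command. A quick check, noting that the Prelude spells our mystery type a rather than x:

ghci> :type id
id :: a -> a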
Presumably, if we ask for the id of the Natural number 3, we will get 3 as our answer. Indeed, this would be the behaviour of the identity function. However, we also said that names can be arbitrarily chosen. Usually we choose them in alignment with their purpose, but what if we didn’t? What would happen if we named our id function for Natural numbers something else instead? Let’s try.
add_one :: Natural -> Natural
Interesting. At first glance, if we didn’t know that we are dealing with our funnily named identity function, we would expect a completely different behaviour from this function. The name add_one and the type of this function suggest that given a Natural number, we will add one to that number. That is, given 3 we would get 4. Very interesting indeed.
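And yet nothing stops us from committing exactly this fraud. A minimal sketch, assuming Natural is in scope from base’s Numeric.Natural module; the body is still the identity, the name notwithstanding:

add_one :: Natural -> Natural
add_one = \n -> n

This definition type-checks happily, and given 3 it returns 3, never 4.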
And what if we were told that one of the following functions is our identity function, renamed once again, and the other one adds as many numbers as it states? Look carefully. Can we tell which is which?
add_two :: Natural -> Natural
add_three :: Natural -> Natural
Despair. Sorrow. We can’t tell. We can’t tell without knowing the answer beforehand. The more we know about the things we work with, the less we know about the work itself. When we knew nothing about x, we knew that all we could ever do was nothing. That is, a function x → x, for all possible types x, could only ever have one behaviour: returning x unchanged. But now we know about natural numbers. We know we can count them, add them, multiply them and more, so learning that a function has a type Natural → Natural is not particularly helpful anymore. A function of this type could process the Natural number given as input in any way it knows how, return the resulting Natural number as output, and it would be technically correct, even if possibly a fraud.

Is this the end? Have we lost? I am afraid that in a sense we have. This is part of the struggle we talked about. Suddenly we can’t reason anymore, and are instead left at the mercy of optional meaningful naming. If we name something add_one, then we better mean add one, because we have no way to tell what’s what.
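To see the predicament in code, here is one possible pair of definitions, a sketch in which, say, add_two is honest and add_three is our renamed identity. Their type signatures are identical, and only the bodies, which we normally don’t get to see, give the game away.

add_two :: Natural -> Natural
add_two = \n -> n + 2

add_three :: Natural -> Natural
add_three = \n -> n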
Those were the first few words of A Type of Programming, a book emphasizing sound reasoning, a playful mastery of our skills and a thirst for meaning. We start from the very beginning and, barely noticing it, as we learn Haskell, functional programming and types, we become well-versed in every topic we touch. This book, ultimately, aims to improve the ways and quality of the software we produce and demand, as individuals and as a civilization. If you think that’s a worthy goal, if you are here to excel, then this book is for you.
When we say Functor f, we are saying that f is a covariant functor, which essentially means that for each function a → b, there is a corresponding function f a → f b that is the result of lifting a → b into f. The name “covariant”, with the co prefix this time meaning with or jointly, evokes the idea that as the a in f a varies through a → b, the entirety of f a varies to f b with it, in the same direction.
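As a reminder, this is the typeclass that captures covariant functors, written here with the kind of f spelled out, just as the Contravariant class below will do; the Functor in base leaves the kind implicit:

class Functor (f :: Type -> Type) where
  fmap :: (a -> b) -> (f a -> f b)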
But as with many precious things in life, it’s hard to appreciate why this is important, or at all interesting, until we’ve lost it. So let’s lose it. Let’s look at this matter from the other side, from its dual, a dual we find by just flipping arrows. So let’s try that and see what happens.
contramap :: (a -> b)
          -> (g a <- g b)
A covariant functor f, we said, was one that could lift a function a → b into a function f a → f b. So, presumably, the dual of said covariant functor f, let’s call it “g the contravariant”, is one that lifts a function a → b into g a ← g b instead, arrow flipped. Conceptually, this is perfect. In practice, in Haskell, function arrows always go from left to right →, never from right to left ←, so we need to take care of that detail. Let’s write the function arrow in the expected direction, flipping the position of the arguments instead.
contramap :: (a -> b)
          -> (g b -> g a)
In Haskell, contramap exists as the sole method of the typeclass called Contravariant, which is like Functor but makes our minds bend.
class Contravariant (g :: Type -> Type) where
  contramap :: (a -> b) -> (g b -> g a)
Bend how? Why, pick a g and see. Maybe perhaps? Sure, that’s simple enough and worked for us before as a Functor.
contramap :: (a -> b) -> (Maybe b -> Maybe a)
Here, contramap is saying that given a way to convert as into bs, we will be granted a tool for converting Maybe bs into Maybe as. That is, a lifted function with the positions of its arguments flipped, exactly what we wanted. Let’s try to implement this by just following the types, as we’ve done many times before. Let’s write the Contravariant instance for Maybe.
instance Contravariant Maybe where
  contramap = \f yb ->
    case yb of
      Nothing -> Nothing
      Just b  -> ‽
What a pickle. We know that f is of type a → b, we know that yb is a Maybe b, and we know that we must return a value of type Maybe a somehow. Furthermore, we know that a and b could be anything, for they’ve been universally quantified. There is an implicit ∀ in there, always remember that. When yb is Nothing, we just return a new Nothing of the expected type Maybe a. And when yb is Just b? Why, we die of course, for we need a way to turn that b into an a that we can put in a new Just, and we have none.
But couldn’t we just return Nothing? It is a perfectly valid expression of type Maybe a, isn’t it? Well, kind of. It type-checks, sure, but a vestigial tingling sensation tells us it’s wrong. Or, well, maybe it doesn’t, but at least we have laws that should help us judge. So let’s use these laws to understand why this behaviour would be described as evil, lest we hurt ourselves later on. Here is the simplified version of the broken instance we want to check, the one that always returns Nothing.
instance Contravariant Maybe where
  contramap = \_ _ -> Nothing
Like the identity law for fmap, the identity law for contramap says that contramapping id over some value x should result in that same x.
contramap id x == x
A broken contramap for Maybe that always returns Nothing would blatantly violate this law. Applying contramap id (Just 5), for example, would result in Nothing rather than Just 5. This should be enough proof that ours would be a broken Contravariant instance, but for completeness, let’s take a look at the second contramap law as well.
Just like we have a composition law for fmap, we have one for contramap as well. It says that contramapping the composition of two functions f and g over some value x should achieve the same as applying, to that x, the composition of the contramapping of g with the contramapping of f.
contramap (compose f g)
== compose (contramap g) (contramap f)
Isn’t this the same as the composition law for fmap? Nice try, but take a closer look.
fmap (compose f g)
== compose (fmap f) (fmap g)
contramap (compose f g)
== compose (contramap g) (contramap f)
Whereas in the case of fmap, the order in which f and g appear at opposite sides of this equality is the same, it is not the same in the case of contramap. This shouldn’t come as a big surprise, seeing how we already knew that contramap gives us a lifted function with its arguments in the opposite position. Intuitively, dealing with Contravariant values means we are going to be composing things backwards. Or, should we say, forwards? After all, compose initially shocked us with its daring, so-called backwards sense of direction, and now we are mostly just backpedaling on it.
But the important question is whether our broken Contravariant instance for Maybe violates this law. And the answer is, unsurprisingly I hope, not at all. Both sides of this equality always result in Nothing, so they are indeed equal. Many times they will be equally wrong, but that doesn’t make them any less equal. And this is, technically, sufficient for our broken instance to satisfy this law. Thankfully we had that other law we could break.
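Indeed, we can trace both sides of the composition law by hand with the broken instance; a sketch, where f, g and the sample value are arbitrary:

contramap (compose f g) (Just 5)              -- Nothing
compose (contramap g) (contramap f) (Just 5)  -- Nothing

Every road leads to Nothing, so the equality, however hollow, holds.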
So how do we contramap our Maybes? Well, we don’t. As handy as it would be to have a magical way of going from b to a when all we know is going from a to b, we just can’t do that. Think how absurd it would be. If your imagination is lacking, just picture a being a tree and b being fire.
So, to sum up, Maybes are covariant functors, as witnessed by the existence of their Functor instance, but they are not contravariant functors at all, as witnessed by the impossibility of coming up with a lawful Contravariant instance for them.
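For contrast, a type that genuinely is a contravariant functor: a predicate, something that consumes its input rather than containing it. A minimal sketch, noting that base offers a Predicate much like this one in its Data.Functor.Contravariant module:

newtype Predicate a = Predicate (a -> Bool)

instance Contravariant Predicate where
  contramap = \f (Predicate p) -> Predicate (\a -> p (f a))

Given a way of going from a to b and a predicate on bs, we get a predicate on as: convert first, then ask. The arguments flip, lawfully this time.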
So once again, like in our previous Bifunctor conundrum, we find ourselves wanting a function b → a when all we have is a function a → b. This time, however, we are prepared. We know that we keep ending up here because we are getting the position of our arguments, the variance of our functors, wrong. And we will fix that.
Wait. Covariant functors? Contravariant functors? That’s right, both these things are functors. All that talk we had growing up about how functors are this or that? A lie. Well, not a lie, but rather, we hid the fact that we were talking only about covariant functors. Now we know that there are other types of functors too. Unfortunately, the Haskell nomenclature doesn’t help here. For example, one could argue that the Functor typeclass should have been called Covariant instead, or perhaps the Contravariant typeclass should be called Cofunctor and contramap should be renamed to cofmap, etc. Anyway, not important. As we continue learning more about functors we’ll see that even these names fall short of the true nature of functors. There’s more, yes, and it’s beautiful. But despite their names, both Contravariant and Functor are typeclasses that describe, indeed, functors. And it shouldn’t surprise us, considering how we saw in detail that except for their opposite argument positions, the types of fmap and contramap, as well as their laws, are exactly the same.