program :: input -> output
Computers blindly follow orders, and at some fundamental level, programming is about giving computers orders to follow. Ultimately, the expectation is that when a computer carries them out, it will achieve a particular goal a programmer had in mind. Coming up with these orders, however, is not easy. While computers are thorough and efficient, they are also rather limited in the vocabulary of instructions they can understand. They mostly know how to do arithmetic and store data, which means our orders, as complex as they may be, can only be conveyed in those terms. Even deceptively simple programs can involve thousands or millions of these instructions, each of which needs to be correct and executed at the right time. And if we consider that computers won’t judge whether any of these instructions are right or wrong, even those with important consequences on our lives, it should be easy to appreciate why taking appropriate measures to prevent undesirable outcomes is the logical thing to do. In order to accomplish this, however, we need a different perspective.
Surprisingly, these computer instructions are a bad tool for reasoning about computer programs. Yet, it is only through reasoning that we can be confident about the correctness of our solutions, and more importantly, about the correctness of our problems. At the other end of the programming spectrum, far away from computers, we have our imagination. Programming is the conversation that happens while these two ends struggle to understand each other. We will explore this process and how to improve it.
This book doesn’t assume any previous programming knowledge, yet both newcomers and experienced programmers are welcome. Those approaching software development for the first time will discover a fascinating field while acquiring a good understanding of the principles and tools involved, whereas the experienced shall have their conceptions challenged by a different type of programming. We will use the Haskell programming language as our main vehicle, and while this book is not solely about Haskell but about programming, it will be very thorough about it.
But please, be patient. We prioritize the kind of deep and holistic understanding that stays with us for a very long time, the one that teaches questions. This means the topics in this book are presented in a rather unorthodox and entertaining fashion where immediate practicality never seems to be a goal. Don’t worry, we will get there. And when we do, we won’t need to look back.
We can’t really talk about programming without defining what a program is, which we can’t do unless we understand exactly what it is that we are trying to solve. For this, we first need to sample the world for suitable problems to tackle, a seemingly intractable endeavour unless we understand the limits and purpose of the field, for which we most certainly need to be able to talk about it. This is the very nature of new ideas, and why appreciating them is often so hard. We need to break this loop somehow. We need to acknowledge that there might be a problem to be tackled even if we can’t readily recognize it at first, and see how the proposed idea tries to approach it.
Breaking this loop, actually, is not a particularly hard thing to do in our imagination. A little dare, that’s all we need. There, boundaries and paradoxes are gone. We can fantasize allegedly impossible things and dwell on problems they say can’t be solved. We can build, we can explore every future. To indulge in thought that challenges our understanding and that of others is not only interesting, it is a requirement. A curious and challenging mind is what it takes to make this journey.
But why would we do that? Why would we care? Well, look around. What do you see? There is a computer in our phone, there is another one in our car. There is one for talking to our families and another one for knowing how much money we have. There are computers that spy, there are computers that judge. There are computers that help our business grow, and some, they chant, that take our jobs away. We even have computers saying whether we matter at all. Computers are our civilization, we have put them up there quite prominently. We care about programming because it is our voice, and only those who know how to speak stand a chance. It’s our power, freedom and responsibility to decide where to go next. We care because civilization is ours to build, ours to change, ours to care for.
Let’s picture ourselves in a not so distant past looking at our landline phone, looking at the elevator in our building, looking at our clock. Besides their usefulness, these machines have in common a single purpose. Not a shared single purpose, unfortunately, but a different single purpose for each of them. The phone phones, the elevator elevates, and that is all they will ever do. But these machines, however disparate, at some fundamental level are essentially just interacting with their environment. Somehow they get inputs from it, and somehow they observably react according to pre-established logical decisions. We press a key and a phone rings. Another key and we are on a different floor. Yet, we rarely see these machines and their complex and expensive electronic circuitry repurposed to solve a different problem if necessary.
What happens is that the wires and electronics in these machines are connected in such a way that their accomplishments are mostly irrelevant to the wider electronic community. Like those steampunk contraptions, they each represent a solution to a finite understanding of one particular problem, and not more. But, is sending electricity to the engine that lifts the elevator really that different from sending it to the one that moves the hand of the clock? No, it is not. The logical decisions however, of when to do what and how, are. We say that the purpose of these machines is hardwired into their very existence, and it cannot be altered without physically modifying the machine itself. And yes, perhaps it is possible to adjust the time in a clock, but subtle variations like that one are considered and allowed upfront by the grand creators of the machine, who left enough cranks, switches and wires inside that we can alter this behaviour somehow within a known limited set of possibilities.
But if interacting with the environment is the essence of these machines, and moreover, if they are built with many of the same electronic components: Isn’t there some underlying foundation that would allow them to do more than what their upbringing dictates? Luckily for us, and for the steampunk fantasies that can now characterize themselves as nostalgic, there is. And this is what computers, otherwise known as general purpose machines, are all about.
What differentiates computers from other machines is that their purpose, instead of being forever imprinted in their circuitry by the manufacturer, is programmed into them afterwards using some programming language such as that Haskell thing we mentioned before. The general purpose machine is a canvas and a basic set of instructions that we can combine in order to achieve a purpose of our choosing while interacting with the outside environment through peripherals such as keyboards, microphones or display screens. A computer program, in other words, is something that takes into account some inputs from the environment where it runs, and after doing some calculations, provides some kind of output in return. We can introduce some notation to represent this idea.
program :: input -> output
Prose is an excellent medium for conveying ideas to other human beings. However, prose can be ambiguous, it wastes words talking about non-essential things, and it allows for the opportunity to express a same idea in a myriad of different ways. These features, while appealing to the literary explorer, just hinder the way of the programmer. Computers won’t judge whether we ordered them to do the right thing, so we can’t afford to have ambiguities in our programs. We need to be precise lest we accomplish the wrong things. Irrelevant stuff may entertain us as great small talk material, but computers just don’t care, so we avoid that altogether. This is why a precise notation like the one above becomes necessary.
Perhaps surprisingly, our notation, also known as syntax, is part of a valid program written in the Haskell programming language, a language that we use to program computers, to tell them what to do. This is why, among many other fascinating reasons we’ll come to cherish, this book uses Haskell as its main vehicle. The distance between our thoughts and our Haskell programs is just too small to ignore. We can read program :: input -> output out loud as “a program from inputs into outputs”, and as far as Haskell is concerned, it would be alright.
Computers themselves don’t understand this Haskell notation directly, though. We use this notation for writing our programs because it brings us clarity, but computers don’t care about any of that. For this, we have compilers. Compilers are programs that take as input the description of our program, called source code, here specified using the Haskell programming language, and convert it to an executable sequence of instructions that computers can actually understand and obey. That is, we could say a compiler is itself a program :: source code -> executable, if we wanted to reuse the same notation we used before.
Compiling is actually a complex, time-consuming and error-prone operation, so it’s usually done just once and the resulting executable code is reused as many times as necessary. When we install a new program on our computer, for example, chances are we are installing its executable version, not its source code. Quite likely the author already compiled the program for us, so that we can start using our program right away rather than spend our time figuring out how to compile it ourselves first. This is no different from what happens when we buy a new car, say. The car has already been assembled for us beforehand, and from then on we can drive it whenever we want without having to re-assemble it.
What are our input and output, though, concretely? We don’t yet know exactly, but it turns out it doesn’t really matter for our very first program, the simplest possible one. Can we imagine what such a program could possibly do? A program that, given some input, any input, will produce some output in return? If we think carefully about this for a minute, we will realize that there’s not much it could do. If we were told that the input was an apple, the output of the program could be apple slices or apple juice. If we knew that the input was a number, then the output could be that number times three. Or times five perhaps. But what if we were told nothing about the input? What could we ever do with it? Essentially, we wouldn’t be able to do anything. We just wouldn’t know what type of input we are dealing with, so the only sensible thing left for us to do would be nothing. We give up and return the input to whomever gave it to us. That is, our input becomes our output.
program :: input -> input
Our program, the simplest possible one, can now be described as simply returning the input that is provided to it. But of course, we can also look at this from the opposite perspective, and say that the output of our program is also its input.
program :: output -> output
Celebrate if you can’t tell the difference between these two descriptions of our program, because they are effectively the same. When we said that our input becomes our output, we really meant it. What is important to understand here is that whether we say input or output doesn’t matter at all. What matters is that both of these words appearing around the arrow symbol ->, describing the type of input and the type of output, are the same. This perfectly conveys the idea that anything we provide to this program as input will be returned to us. We give it an orange, we get an orange back. We give it a horse, we get a horse back. In fact, we can push this to the extreme and stop using the words input and output altogether, seeing how the notation we use already conveys the idea that the thing to the left of the arrow symbol is the input type, and the thing to its right the output type. That is, we know where the input is, we know where the output is, we know they are the same type of thing, and we don’t care about anything else. So, let’s just name this thing x.
program :: x -> x
This seemingly useless program, actually, is one of the very few important enough to deserve a name of its own. This program is called identity, and it is a fundamental building block of both programming and mathematics. It might be easier to appreciate this name from a metaphysical point of view. Who is the horse? The horse. Who am I? It is me. Indeed, philosophers rejoice, our program addresses the existential question about the identity of beings at last, even if in a tautological manner. In Haskell we call this program id, short for identity.
id :: x -> x
Actually, we could have named the identity program anything else. Nothing mandates the name id. As we saw before when we arbitrarily named our mystery type x, names don’t really matter to the computer. However, they do matter to us humans, so we choose them according to their purpose.
There is one last thing we should know about naming. Usually, we don’t really call programs like these programs, we call them functions. The term program does exist, but generally we use it to refer to functions complex enough that they can be seen as final consumer products, or to things that do something yet don’t exactly fit the shape of a function. Text editors, web browsers or music players are examples of what we would often call a program. In other words, we can say we just learned about id, the identity function. Generally speaking, though, we use these terms more or less interchangeably, as we have been doing so far.
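To make this concrete, here is a minimal sketch of the identity function written out in full. The name identity below is our own choice; GHC’s Prelude already ships this very function under the name id.

```haskell
-- Our own identity function: whatever comes in goes out unchanged.
identity :: x -> x
identity x = x

main :: IO ()
main = do
  print (identity (3 :: Int))    -- prints 3
  putStrLn (identity "a horse")  -- prints a horse
```

Note that the definition has nothing left to decide: knowing nothing about x, returning the input is the only thing it can do.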
We also have useful programs. They are less interesting, much more complicated, but deserve some attention too. We were able to discover the identity function because we didn’t know much about the types of things we were dealing with, and this ignorance gave us the freedom to unboundedly reason about our problem and derive truth and understanding from it. What would happen if we knew something about our types, though? Let’s pick a relatively simple and familiar problem to explore: The addition of natural numbers.
As a quick reminder, natural numbers are the zero and all the integer numbers greater than it. For example, 12, 0 and 894298731 are all natural numbers, but 4.68, π or -58 are not.
We said that our identity function, when given something as input, will return that same thing as output. This is supposed to be true for all possible types of inputs and outputs, which of course include the natural numbers. In Haskell, we can refer to this type of numbers as Natural. That is, if for example we wanted to use the identity function with a natural number like 3, then our id function could take the following type:
id :: Natural -> Natural
That is, this function takes a Natural number as input, and returns a Natural number as output. In other words, the type of this function is Natural -> Natural. Yes, just like numbers, functions have their own types as well, and they are easy to recognize because of the arrow -> that appears in them, with their input type appearing to the left of this arrow, and their output type to its right.
Presumably, if we ask for the id of the number 3, we will get 3 as our answer. Indeed, this would be the behaviour of the identity function. However, we also said that names can be arbitrarily chosen. Usually we choose them in alignment with their purpose, but what if we didn’t? What would happen if we named our id function for Natural numbers something else instead? Let’s try.
add_one :: Natural -> Natural
Interesting. At first glance, if we didn’t know that we are dealing with our funnily named identity function, we would expect a completely different behaviour from this function. The name add_one and the type of this function suggest that given a Natural number, we will add one to that number. That is, given 3 we would get 4.
And what if we were told that one of the following functions is our identity function, renamed once again, and the other one adds as many numbers as it states? Look carefully. Can we tell which is which?
add_two :: Natural -> Natural
add_three :: Natural -> Natural
Despair. Sorrow. We can’t tell. We can’t tell without knowing the answer beforehand. The more we know about the things we work with, the less we know about the work itself. When we knew nothing about x, we knew that all we could ever do was nothing. That is, a function x -> x, for all possible types x, could only ever have one behaviour: returning x unchanged. But now we know about natural numbers. We know we can count them, add them, multiply them and more, so learning that a function has a type Natural -> Natural is not particularly helpful anymore. A function of this type could process the Natural number given as input in any way it knows how, return the resulting number as output, and it would be technically correct, even if possibly undesired.
Is this the end? Have we lost? I am afraid that in a sense we have. This is part of the struggle we talked about. Suddenly we can’t reason anymore, and are instead left at the mercy of optional meaningful naming. If we say add_one, then we better mean add one, because we have no way to tell what’s what.
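To see this struggle in code, here is a sketch (the bodies below are our own) of two functions sharing the very same type Natural -> Natural. Nothing in their types, only in their definitions, reveals that one of them is lying about its name:

```haskell
import Numeric.Natural (Natural)

-- This one means what it says.
add_one :: Natural -> Natural
add_one x = x + 1

-- This one is our identity function in disguise: the type checker accepts it anyway.
add_two :: Natural -> Natural
add_two x = x

main :: IO ()
main = do
  print (add_one 3)  -- prints 4
  print (add_two 3)  -- prints 3, despite the name
```

The compiler checks types, not intentions, so both definitions are equally acceptable to it.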
When we say Functor f, we are saying that f is a covariant functor, which essentially means that for each function a -> b, there is a corresponding function f a -> f b that is the result of lifting a -> b into f. The name “covariant”, with the co prefix this time meaning with or jointly, evokes the idea that as the a in f a varies through a -> b, the f a varies to f b with it, in the same direction.
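For a concrete taste of this lifting, we can pick f to be Maybe. The function half below is our own example name; fmap lifts it from Int -> Double into Maybe Int -> Maybe Double, in the same direction as the arrow:

```haskell
-- A plain function from Int to Double...
half :: Int -> Double
half n = fromIntegral n / 2

-- ...lifted by fmap into Maybe: Maybe Int -> Maybe Double.
main :: IO ()
main = do
  print (fmap half (Just 3))                -- prints Just 1.5
  print (fmap half (Nothing :: Maybe Int))  -- prints Nothing
```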
But as with many precious things in life, it’s hard to appreciate why this is important, or at all interesting, until we’ve lost it. So let’s lose it. Let’s look at this matter from the other side, from its dual, a dual we find by just flipping arrows. So let’s try that and see what happens.
contramap :: (a -> b) -> (g a <- g b)
A covariant functor f, we said, was one that could lift a function a -> b into a function f a -> f b. So, presumably, the dual of said f, let’s call it “g the contravariant”, is one that lifts a function a -> b into g a <- g b instead, arrow flipped. Conceptually, this is perfect. In practice, in Haskell, function arrows always go from left to right, ->, never from right to left, <-, so we need to take care of that detail. Let’s write the function arrow in the direction it’s expected, flipping the position of the arguments instead.
contramap :: (a -> b) -> (g b -> g a)
contramap exists as the sole method of the typeclass Contravariant, which is like Functor, but one that makes our minds bend.
class Contravariant (g :: Type -> Type) where
  contramap :: (a -> b) -> (g b -> g a)
Bend how? Why, pick a g and see. Maybe perhaps? Sure, that’s simple enough and worked for us before as a Functor.
contramap :: (a -> b) -> (Maybe b -> Maybe a)
This contramap is saying that given a way to convert as into bs, we will be granted a tool for converting Maybe bs into Maybe as. That is, a lifted function with the positions of its arguments flipped, exactly what we wanted. Let’s try to implement this by just following the types, as we’ve done many times before. Let’s write a Contravariant instance for Maybe.
instance Contravariant Maybe where
  contramap = \f yb -> case yb of
    Nothing -> Nothing
    Just b -> ☠
A pickle. We know that f is of type a -> b, we know that yb is a Maybe b, and we know that we must return a value of type Maybe a somehow. Furthermore, we know that a and b could be anything, for they’ve been universally quantified. There is an implicit ∀ in there, always remember that. When yb is Nothing, we just return a new Nothing of the expected type Maybe a. And when it is Just b? Why, we die of course, for we need a way to turn that b into an a we can put in a new Just, and we have none.
But couldn’t we just return Nothing? It is a perfectly valid expression of type Maybe a, isn’t it? Well, kind of. It type-checks, sure, but a vestigial tingling sensation tells us it’s wrong. Or, well, maybe it doesn’t, but at least we have laws that should help us judge. So let’s use these laws to understand why this behavior would be characterized as evil, lest we hurt ourselves later on. Here is the simplified version of the broken instance we want to check, the one that always returns Nothing.
instance Contravariant Maybe where
  contramap = \_ _ -> Nothing
Like the Identity Law for fmap, the Identity Law for contramap says that contramapping id over some value x should result in that same x.
contramap id x == x
A Contravariant instance for Maybe that always returns Nothing would blatantly violate this law. Applying contramap id (Just 5), for example, would result in Nothing rather than Just 5. This should be enough proof that ours would be a broken Contravariant instance, but for completeness, let’s take a look at the second contramap law as well.
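Before we do, a runnable sketch makes the Identity Law failure tangible. We declare the broken orphan instance ourselves here, purely for demonstration; base provides the Contravariant class but, of course, no such instance for Maybe:

```haskell
import Data.Functor.Contravariant (Contravariant (contramap))

-- The broken instance: it ignores both of its arguments.
instance Contravariant Maybe where
  contramap _ _ = Nothing

main :: IO ()
main =
  -- The Identity Law demands contramap id x == x, yet:
  print (contramap id (Just (5 :: Int)))  -- prints Nothing, not Just 5
```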
Just like we have a Composition Law for fmap, we have one for contramap as well. It says that contramapping the composition of f with g over some value x should achieve the same as applying, to that x, the composition of the contramap of g with the contramap of f.
contramap (compose f g) x == compose (contramap g) (contramap f) x
Isn’t this the same as the Composition Law for fmap? Nice try, but take a closer look.
fmap (compose f g) x == compose (fmap f) (fmap g) x
contramap (compose f g) x == compose (contramap g) (contramap f) x
Whereas in the case of fmap, the order in which f and g appear at opposite sides of this equality is the same, it is not the same in the case of contramap. This shouldn’t come as a big surprise, seeing how we already knew that contramap gives us a lifted function with its arguments in the opposite position. Intuitively, dealing with Contravariant values means we are going to be composing things backwards. Or, should we say, forwards? After all, contramap shocked us with its daring, so-called backwards sense of direction, and now we are mostly just backpedaling on it.
But the important question is whether our broken Contravariant instance for Maybe violates this law. And the answer is, unsurprisingly I hope, not at all. Both sides of this equality always result in Nothing, so they are indeed equal. Many times they will be equally wrong, but that doesn’t make them any less equal. And this is, technically, sufficient for our broken instance to satisfy this law. Thankfully we had that other law we could break.
So how do we contramap Maybes? Well, we don’t. As handy as it would be to have a magical way of going from b to a when all we know is going from a to b, we just can’t do that. Think how absurd it would be. If you need some help with creativity, just imagine a being a tree and b being fire.
So, to sum up, Maybes are covariant functors, as witnessed by the existence of their Functor instance, but they are not contravariant functors at all, as witnessed by the impossibility of coming up with a Contravariant instance for them.
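For contrast, here is a sketch of a type that genuinely is a contravariant functor. The Predicate type below is our own definition (base happens to ship an equivalent one in Data.Functor.Contravariant): since a predicate consumes its a rather than producing it, contramapping makes perfect sense.

```haskell
import Data.Functor.Contravariant (Contravariant (contramap))

-- A predicate consumes an a; that's what makes it contravariant.
newtype Predicate a = Predicate { runPredicate :: a -> Bool }

instance Contravariant Predicate where
  contramap f (Predicate p) = Predicate (p . f)

-- A predicate on numbers...
isPositive :: Predicate Int
isPositive = Predicate (> 0)

-- ...contramapped through length into a predicate on strings.
nonEmpty :: Predicate String
nonEmpty = contramap length isPositive

main :: IO ()
main = do
  print (runPredicate nonEmpty "hello")  -- prints True
  print (runPredicate nonEmpty "")       -- prints False
```

Notice the direction: a function String -> Int lifted into Predicate Int -> Predicate String, arrow flipped, exactly as the Contravariant laws demand.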
So once again, like in our previous Bifunctor conundrum, we find ourselves wanting for a function b -> a when all we have is a function a -> b. This time, however, we are prepared. We know that we keep ending up here because we are getting the position of our arguments, the variance of our functors, wrong. And we will fix that.
Wait. Covariant functors? Contravariant functors? That’s right, both these things are functors. All that talk we had growing up about how functors are this or that? A lie. Well, not a lie, but rather, we hid the fact we were talking only about covariant functors. Now we know that there are other types of functors too. Unfortunately, the Haskell nomenclature doesn’t help here. For example, one could argue the Functor typeclass should have been called Covariant instead, or perhaps the Contravariant typeclass should be called Cofunctor and contramap should be renamed to cofmap, etc. Anyway, this is not important. As we continue learning more about functors we’ll see that even these names fall short of the true nature of functors. There’s more, yes, and it’s beautiful. But despite their names, both Contravariant and Functor are typeclasses that describe, indeed, functors. And it shouldn’t surprise us, considering how we saw in detail that except for their opposite argument positions, the types of fmap and contramap, as well as their laws, are exactly the same.