Note: Peano Arithmetic
Today, we're going to jump into the deep end of the pool, Haskell-wise. Don't be put off if you don't understand everything. Today's lecture is mostly literature, i.e., your task as a student is to read and expand your mind. I'll expect a bit of mimicry at first; real comprehension will come in time.
Our subject for today is the Peano-Dedekind axioms for arithmetic. Yes, we're going to redo third-grade math, but as it might be done by a graduate student studying mathematical logic. The underlying structure is the natural numbers, 0, 1, 2, ….
The Peano-Dedekind axioms posit the existence of a constant $0$ and function $s$ (the successor function) with the following properties:
- $0$ is a natural number,
- $s$ is a one-one function from the natural numbers to the natural numbers, i.e., for all natural numbers $a$, $s(a)$ is also a natural number, moreover, for all natural numbers $a$ and $b$, if $s(a) = s(b)$ then $a = b$,
- $0$ is not a successor, i.e., for all $a$, $s(a) \not= 0$,
- every natural number is either $0$ or a successor, i.e., for all $a$, $a = 0$ or there exists $b$ such that $a = s(b)$.
We'll start with a data definition:
data NaturalNumber = Zero | S NaturalNumber
  deriving (Show)
This introduces a new type. Haskell's type system is a central part of the language, and our ability to introduce new types that exploit built-in facilities of the language is both useful and powerful. The type system partitions the universe of values and expressions into equivalence classes, and limits how the elements of these equivalence classes can be combined. Types are denoted by type definitions (as above) and type expressions (of which more later).
The definition above expresses the intent that natural numbers come in two distinct forms (it is helpful here to read `|` as “or”): `Zero`, and `S x` where `x` is also a `NaturalNumber`. Thus, `Zero`, `S Zero`, `S (S Zero)` are values (and expressions) of type `NaturalNumber`, which we ordinarily know by the names `zero`, `one`, and `two` respectively. Note here that `S` is not just a function, it is a constructor, which can be thought of as a kind of labeled box that holds another value of type `NaturalNumber`. Haskell uses a simple, mandatory notational convention here: variables have names that begin with lower-case letters (including `_`), while constructors begin with an upper-case letter. The `deriving (Show)` instructs Haskell to print natural number values via constructor-based expressions.
Let's define a few names for common natural numbers:
-- common names for small natural numbers
zero = Zero
one = S zero
two = S one
three = S two
four = S three
five = S four
six = S five
seven = S six
eight = S seven
nine = S eight
ten = S nine
There are several things to notice here. The first is a simple comment, which is introduced by a double hyphen, `--`. This, and any text that follows on the same line, is ignored by Haskell; the text is there to support our understanding. The next is the sequence of definitions, in which variables are bound to values denoted by expressions. Note that in Haskell, like many programming languages, we use `=` to indicate binding. This essentially creates equality by fiat, but it is distinct from the equality predicate, which we'll see next.
$ ghci NaturalNumbers.hs
> three
S (S (S Zero))
Equality and Ordering
First, we'll write code that makes the `NaturalNumber` type an instance of the `Eq` typeclass.
instance Eq NaturalNumber where
  Zero == Zero = True
  Zero == S y = False
  S x == Zero = False
  S x == S y = x == y
We can make a tiny code improvement here: note that the `x` and `y` on the left-hand side of the bindings in the 2nd and 3rd lines of the definition are not used on the right-hand side. This means that we don't actually need to name a variable:
instance Eq NaturalNumber where
  Zero == Zero = True
  Zero == S _ = False
  S _ == Zero = False
  S x == S y = x == y
In Haskell, types are organized into typeclasses. This is a very different meaning of “class” than you might have encountered in a language like Java. It's actually closer to Java's notion of an `interface`, but more powerful. The significance of a typeclass is that all types that belong to a given typeclass define certain common names. In the case of the `Eq` typeclass, instances must implement one or both of `==` (the equality predicate) or `/=` (the inequality predicate), and having done so, will have both defined for them. In this case, we're defining the equality predicate `==`, based on the Peano-Dedekind axioms: `Zero == Zero` is a special case of reflexivity, `Zero` does not equal any successor, and two successors are equal if and only if their predecessors are equal (this follows from $s$ being one-to-one).
There's no essential difference between defining an ordinary function (one that is applied in prefix position to zero or more arguments) and defining an infix binary function like `==`: we bind patterns to expressions in both cases. Indeed, from Haskell's point of view, infix binary functions are just syntactic sugar for ordinary (curried) binary functions. Having said that, the definition of equality is interesting, because it's recursive, i.e., we are defining `==` in terms of itself when we write `S x == S y = x == y`. Being able to define a function by recursion is powerful, but it's not free. Such a definition is going to drive a sequence of substitutions (as we saw in Lecture 1), a sequence that might or might not terminate. In this case, `==` will terminate on grounded elements of the natural numbers, because we defined it without recursion on base cases (`Zero`), and the recursive definition in constructed cases involves recursive calls of the function being defined on proper substructures of the arguments. We will call recursions like this natural, or structural, recursions, and the proof of termination follows from ordinary mathematical induction.
> ten == ten
True
> ten == nine
False
> ten /= nine
True
Let's understand how this works on a small example:
two == one
==> S one == S zero
==> one == zero
==> S zero == Zero
==> False
How many of you noticed the weasel-word “grounded” up above?! This is actually important, and it gets to one of the ways that Haskell is not like other languages that you might have experienced. Let's talk about this particular case. `Zero` is grounded. This is a specific case of a general principle: all of the nullary constructors (i.e., constants) of an algebraic type are grounded. Also, if `n` is grounded, then `S n` is also grounded. Again, this is a specific case of a general principle: applying a k-ary constructor to k grounded values results in a grounded value. At this point, you're probably asking yourself, “Are there any ungrounded values?!” Yes. Consider
infinity = S infinity
What is `infinity`? Note that `infinity` is defined recursively, but not naturally, because the right-hand side of the recursion mentions the left-hand side (rather than a proper substructure thereof). Therefore this is a potentially dangerous definition. Note that there's nothing special about Haskell in allowing you to make such a definition -- you can do something analogous in any general-purpose programming language. The difference is that in Haskell, such definitions can be useful, which is to say, they can be used in terminating computations. Of course, you wouldn't want to simply evaluate `infinity`:
> infinity
(S (S (S (S (S (S (S (S (S (S (S (S (S (S (S (S (S (S ...
^C
Interrupted.
>
But this works:
> infinity == one
False
>
Why?! Let's trace it...
infinity == one
==> S infinity == S zero
==> infinity == zero
==> S infinity == Zero
==> False
You're all invited to try this in Java. Get back to me when it finishes :-).
Whether the inclusion of objects like `infinity` in our `NaturalNumber` type seems like a good idea or not, it is an inevitability, and this is one of the ways that computational systems can fail to correspond exactly to mathematical systems. Objects like `infinity` are explicitly excluded from Peano-Dedekind Arithmetic by the induction axiom, which essentially says that all (Peano-Dedekind) natural numbers are grounded.
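For the record, the induction axiom can be stated like this: for any property $\varphi$ of natural numbers, if $\varphi(0)$ holds, and $\varphi(s(a))$ holds whenever $\varphi(a)$ does, then $\varphi(a)$ holds for every natural number $a$. In symbols, $(\varphi(0) \land \forall a\,(\varphi(a) \to \varphi(s(a)))) \to \forall a\,\varphi(a)$. An ungrounded value like `infinity` escapes this axiom: no finite chain of applications of $s$ to $0$ ever reaches it.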
Adding `infinity` has other costs, e.g., computed equality is no longer reflexive, and this can be a trap in naïve reasoning about code.
infinity == infinity
==> S infinity == S infinity
==> infinity == infinity
...
This leads to a high-speed twiddling of the CPU's thumbs, rather than termination with the result `True`. A general rule of thumb is that natural recursions are safe, i.e., they terminate, because all recursive calls are made to simpler arguments, and therefore a termination proof can be obtained by induction over the definition of the set of grounded values. The problem is that the grounded values of the domain aren't exhaustive.
Exercise 1 Haskell contains an `Ord` typeclass, consisting of types with a natural trichotomous ordering. Provide an `instance` declaration that adds `NaturalNumber` to the `Ord` typeclass.
The documentation for the `Ord` typeclass can be found here, as well as on your system (assuming you've properly installed the Haskell Platform). The most satisfactory way to do this is by implementing the `compare` function, which takes two arguments, and returns `LT`, `EQ`, or `GT`, according to whether the first argument is less than, equal to, or greater than the second argument, respectively.
To do this, complete the following definition:
instance Ord NaturalNumber where
  compare Zero Zero = EQ
  ...
Once you've done this, you should be able to do simple comparisons, e.g.,
> ten >= nine
True
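If you get stuck, here is one possible shape for the completed instance (a sketch -- try the exercise yourself before peeking; note that it recurses only on proper substructures, so it is a natural recursion):
instance Ord NaturalNumber where
  compare Zero  Zero  = EQ
  compare Zero  (S _) = LT
  compare (S _) Zero  = GT
  compare (S x) (S y) = compare x y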
Nonterminating computations are inevitable in any general-purpose programming language. The problem is not just `infinity`, which is in some sense a natural “limit point” in the natural numbers, but also definitions that twiddle their thumbs without creating new constructors, e.g.,
loop :: NaturalNumber
loop = loop
Here, we provide a type ascription to `loop` because there's not enough information otherwise for the compiler to figure it out. This may seem like a silly example, but it's easy to achieve the same effect accidentally, and often difficult to sort out when you do. Oddly enough, `loop` can be used to define new natural numbers, and we can even compute with them:
> S loop == Zero
False
but
> loop == loop
...
never terminates. It might seem to be a defect of Haskell that we can't weed out “unintentional” values of type `NaturalNumber` like the values of `loop` and `infinity`, but if so, it's a defect that all current and future programming languages must share, a consequence of the undecidability of the halting problem, one of the most fundamental theorems of computability theory, in essence a restatement of Gödel's celebrated incompleteness theorem for Peano Arithmetic, which oddly enough is the subject of this lecture.
Addition and Multiplication
The principal operations of Peano-Dedekind arithmetic are addition and multiplication, which are recursively defined. This is most naturally accomplished by adding `NaturalNumber` to Haskell's `Num` typeclass. Doing this correctly requires implementing a number of functions: `+`, `*`, `-` (or `negate`), `abs`, `signum`, and `fromInteger`. We'll get the definitions of `+` and `*` directly from the Peano-Dedekind axioms.
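Written out in the notation of the axioms, the defining equations we're about to transcribe are:
- $a + 0 = a$, and $a + s(b) = s(a + b)$,
- $a \cdot 0 = 0$, and $a \cdot s(b) = a + a \cdot b$.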
instance Num NaturalNumber where
  x + Zero = x
  x + S y = S (x + y)
  x * Zero = Zero
  x * S y = x + x * y
Haskell will permit us to provide partial implementations of typeclasses, although not without complaint. Even so, this is a useful way to proceed, because it allows us to test our code incrementally.
> two * three + one
S (S (S (S (S (S (S Zero))))))
We'll next add an implementation of `fromInteger`, along with a simple helper function `nat`:
instance Num NaturalNumber where
  ...
  fromInteger n
    | n > 0 = S (fromInteger (n-1))
    | n == 0 = Zero

nat :: NaturalNumber -> NaturalNumber
nat = id
The definition of `fromInteger` involves a new programming construct, guarded equations. The idea here is that we'll define a function by cases through a sequence of predicates (tests) and equations. The reduction of a term will evaluate each of the predicates in turn, until a predicate evaluates to `True`; once this happens, the corresponding equation is used. The point of this definition is that it consciously avoids defining `fromInteger` on negative inputs, as this is meaningless in the context of the natural numbers.
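For example, here's how the guards drive a small reduction, in the trace style used above (eliding the guard tests themselves):
fromInteger 2
==> S (fromInteger 1)
==> S (S (fromInteger 0))
==> S (S Zero)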
The `nat` function is just the identity, but with a restricted type. We can use `nat` to force the interpretation of a literal like `1234` as a `NaturalNumber` via the `fromInteger` function.
The significance of these definitions is that they enable us to take advantage of Haskell's input processing. Consider a simple expression like:
2
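We can ask ghci directly what it makes of this expression (a quick check; newer versions of GHC may name the type variable `p` rather than `a`):
> :type 2
2 :: Num a => a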
What is the type of `2`? The answer is that it can belong to any type that belongs to the `Num` typeclass, although it defaults to `Integer` (an “infinite-precision” integral type). By adding `fromInteger`, we make expressions like this meaningful:
two + 27
In effect, the `two`, which can only be a `NaturalNumber`, forces both `+` and `27` to be interpreted as a binary operator on `NaturalNumber` and a `NaturalNumber` respectively. The function `nat` does the same thing. Here we see both a type declaration (of which much more later), and a definition. Using `nat` enables us to force the interpretation of a numeric literal to be a `NaturalNumber`. Thus, we can write
> nat (2 * 3 + 4)
...
As we'll see later, this forces the expression `2 * 3 + 4` to have the type `NaturalNumber`, and this in turn will ultimately force `2`, `3`, and `4` to have type `NaturalNumber` too.
...
S (S (S (S (S (S (S (S (S (S Zero)))))))))
We can write this without the parentheses via
> nat $ 2 * 3 + 4
S (S (S (S (S (S (S (S (S (S Zero)))))))))
Here, `$` is a low-precedence, right-associative infix operator for function application. It is a friend, a good friend, which can save us from the “lots of irritating single parentheses” that is sometimes used in a pejorative etymology of “Lisp.”
Exercise 2 Complete the definition of `instance Num NaturalNumber` by implementing `-`, `abs`, and `signum` in as meaningful a manner as possible. Note that for natural numbers, subtraction is truncated at `Zero`, i.e., evaluating `one - ten` should return `Zero`.
Exercise 3 It's easy to use pattern matching to write simple number-theoretic functions like this:
even Zero = True
even (S Zero) = False
even (S (S n)) = even n
but it's much more convenient to take advantage of the machinery we've built up, and write it like this:
even n
  | n == 0 = True
  | n == 1 = False
  | otherwise = even (n-2)
and indeed, I tend to think of the first form as a translation of the second into the particular representation we used for `NaturalNumber`, whereas the latter is “more generic,” and because it doesn't make assumptions about representations, works perfectly well for other representations.
Write such representation-independent implementations of `odd` and `remainder`. Note that because `even` and `odd` are defined in the Haskell Prelude, it's necessary to begin the Haskell source file that includes your definitions with an import statement that explicitly hides them:
import Prelude hiding (even,odd)
Laziness
Thus far, we've been modeling Haskell's evaluation mechanism via term rewriting. This is not exactly correct, but it will do for now.
Now, my friend Robby, an exquisite Scheme programmer, would instinctively rewrite the definition of `+` as follows:
x + Zero = x
x + S y = S x + y
Remember here that application binds more tightly than infix operations, so this change is from the original `S (x + y)` to `(S x) + y` in Robby's code.
If you ask Robby why, he'd talk about “tail recursion,” and about why the second form is tail recursive (and so runs in constant stack space), whereas the first form is not (and so involves putting a bunch of pending “apply `S` to the result, and return” frames on the stack). And he'd be perfectly right. In Scheme. But Haskell is not Scheme, and Robby's definition involves a “loss of laziness” if carried over unreflectively from Scheme to Haskell. Let's see how that works.
Suppose we want to evaluate `ten + ten == one`. If we use the original definition, we'd have
ten + ten == one
==> ten + S nine == S zero
==> S (ten + nine) == S zero
==> ten + nine == zero
==> ten + S eight == Zero
==> S (ten + eight) == Zero
==> False
In effect, we only needed to apply the successor case of the addition rule twice to prove that `ten + ten /= one`. Whereas, with Robby's implementation, we'd have to do something like this:
ten + ten == one
==> ten + S nine == S zero
==> S ten + nine == S zero
==> S ten + S eight == S zero
==> S (S ten) + eight == S zero
==> S (S ten) + S seven == S zero
==> S (S (S ten)) + seven == S zero
... -- 13 steps elided
==> S (S (S (S (S (S (S (S (S (S ten))))))))) + Zero == S zero
==> S (S (S (S (S (S (S (S (S (S ten))))))))) == S zero
==> S (S (S (S (S (S (S (S (S ten)))))))) == zero
==> S (S (S (S (S (S (S (S (S ten)))))))) == Zero
==> False
Note that we got the same answer both times, but that Robby's implementation required that we completely reduce the addition, whereas the original implementation did not. This point can be driven even further. With the first definition, we can evaluate `nine + infinity > ten`, and promptly obtain the result `True`. With Robby's definition, this would run forever without returning a result.
Exercise 4 The axiom for the successor case of multiplication in the Wikipedia article is not what I remembered from graduate school. The crux of the matter is whether you consider `S x = x + 1` or `S x = 1 + x`, and the Wikipedia article assumes the second. If we assume the first, as I remember it, the natural definition for the successor case of multiplication is subtly different:
x * S y = x * y + x
The form that I remember is better Haskell, in the sense that it's lazier. Illustrate this by a simple example, as above.
A complete version of today's program (and a reasonable starting point for today's exercises) is here: NaturalNumber.hs.