Lecture 23: Propositional Logic—Mathematical Preliminaries

Propositional Logic is a simple, formal means for capturing certain kinds of rigorous argument. Facility in propositional (and later, quantificational) reasoning is a foundational skill for programming, as well as a nice domain area for us to play in as programmers.

We assume the existence of an infinite (usually countable) set of atomic propositional variables $V$. Propositional formulae are expressions built out of propositional variables, and the boolean constants true and false, using the boolean operators of negation/not ($\lnot$), conjunction/and ($\land$), disjunction/or ($\lor$), and implication/implies ($\limplies$), e.g.

$$a \land (\lnot b \lor c) \limplies d$$
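Since we will eventually want to compute with formulae in Haskell, it may help to fix a concrete representation now. Here is a minimal sketch; the constructor names and the use of String for variable names are our own illustrative choices:

```haskell
-- A sketch of a type for propositional formulae.
data Formula
  = Var String            -- atomic propositional variable
  | Const Bool            -- the constants true and false
  | Not Formula           -- negation
  | And Formula Formula   -- conjunction
  | Or  Formula Formula   -- disjunction
  | Imp Formula Formula   -- implication
  deriving (Eq, Show)

-- The example above: a ∧ (¬b ∨ c) ⟹ d
example :: Formula
example = Imp (And (Var "a") (Or (Not (Var "b")) (Var "c"))) (Var "d")
```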

The parsing priority for these connectives is (from highest to lowest): $\lnot$, $\land$, $\lor$, and $\limplies$. It might be helpful to think of $\land$ as analogous to multiplication, and $\lor$ as analogous to addition. This analogy runs deeper than parsing: it also gives you some hints as to how to interpret boolean expressions.

By convention, $\land$ and $\lor$ associate to the left (just like multiplication and addition), but since they're associative operators, it doesn't matter. Implies $(\limplies)$ is not associative. By convention it associates to the right, as in Haskell.
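For example, under these conventions, $a \land \lnot b \lor c \limplies d \limplies e$ parses as $((a \land \lnot b) \lor c) \limplies (d \limplies e)$.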

A valuation is a function from the set of propositional variables to the set of boolean values True ($\ltrue$) and False ($\lfalse$). We will often use the Greek letter $\nu$ (nu) to denote a valuation. We lift valuations to propositional formulae by describing truth-tables for the various propositional operators:

\[
\begin{array}{cc|cccc}
a & b & \lnot a & a \lor b & a \land b & a \limplies b \\
\hline
\ltrue & \ltrue & \lfalse & \ltrue & \ltrue & \ltrue \\
\ltrue & \lfalse & \lfalse & \ltrue & \lfalse & \lfalse \\
\lfalse & \ltrue & \ltrue & \ltrue & \lfalse & \ltrue \\
\lfalse & \lfalse & \ltrue & \lfalse & \lfalse & \ltrue \\
\end{array}
\]
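Lifting a valuation to formulae is a direct transcription of these truth tables. Here is a sketch in Haskell, assuming the Formula type above; representing a valuation as a finite map, and defaulting unmapped variables to False, are simplifications of the official definition (which makes valuations total functions on $V$):

```haskell
import qualified Data.Map as Map

-- A valuation, represented as a finite map from variable names to booleans.
type Valuation = Map.Map String Bool

-- Lift a valuation to formulae, following the truth tables above.
eval :: Valuation -> Formula -> Bool
eval nu (Var x)   = Map.findWithDefault False x nu  -- unmapped variables default to False
eval _  (Const b) = b
eval nu (Not p)   = not (eval nu p)
eval nu (And p q) = eval nu p && eval nu q
eval nu (Or  p q) = eval nu p || eval nu q
eval nu (Imp p q) = not (eval nu p) || eval nu q    -- false only when p is true and q is false
```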

Thus, e.g., if $\nu(a) = \ltrue$, and $\nu(b) = \lfalse$, we can compute $\nu(a \lor b \limplies a \land b) = \lfalse$. This is often done via a table, where we put a $\ltrue$ or an $\lfalse$ under each connective, as appropriate, e.g.,

\[
\begin{array}{cc|ccccccc}
a & b & a & \lor & b & \limplies & a & \land & b \\
\hline
\ltrue & \lfalse & \ltrue & & \lfalse & & \ltrue & & \lfalse \\
& & & \ltrue & & & & \lfalse & \\
& & & & & \lfalse & & & \\
\end{array}
\]

or, in a more compressed form,

\[
\begin{array}{cc|ccccccc}
a & b & a & \lor & b & \limplies & a & \land & b \\
\hline
\ltrue & \lfalse & \ltrue & \ltrue & \lfalse & \lfalse & \ltrue & \lfalse & \lfalse \\
\end{array}
\]
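For instance, assuming the eval sketch above, the same computation in GHCi:

```haskell
ghci> let nu = Map.fromList [("a", True), ("b", False)]
ghci> eval nu (Imp (Or (Var "a") (Var "b")) (And (Var "a") (Var "b")))
False
```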

A propositional formula $\varphi$ (phi) is a tautology if for all valuations $\nu$, $\nu(\varphi) = \ltrue$. A formula $\varphi$ is satisfiable if there exists a valuation $\nu$ such that $\nu(\varphi) = \ltrue$. Note that $\varphi$ is a tautology if and only if $\lnot\varphi$ is not satisfiable. Determining whether a given propositional formula is satisfiable is the satisfiability problem (SAT), the canonical NP-complete problem; the question of how difficult it is, in general, lies at the heart of the P vs. NP problem, one of the grand open problems of Computer Science.
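Both definitions can be executed by brute force: enumerate every valuation of the variables occurring in $\varphi$ and evaluate. A sketch, assuming the Formula type and eval from above:

```haskell
import Data.List (nub)

-- Collect the variables occurring in a formula.
vars :: Formula -> [String]
vars (Var x)   = [x]
vars (Const _) = []
vars (Not p)   = vars p
vars (And p q) = vars p ++ vars q
vars (Or  p q) = vars p ++ vars q
vars (Imp p q) = vars p ++ vars q

-- All 2^n valuations over a given list of n distinct variables.
valuations :: [String] -> [Valuation]
valuations []       = [Map.empty]
valuations (x : xs) = [Map.insert x b nu | b <- [True, False], nu <- valuations xs]

-- Brute-force checks: exponential in the number of distinct variables.
satisfiable, tautology :: Formula -> Bool
satisfiable phi = or  [eval nu phi | nu <- valuations (nub (vars phi))]
tautology   phi = and [eval nu phi | nu <- valuations (nub (vars phi))]
```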

Logicians care about the notion of a sound argument. In particular, let's suppose $H$ is a set of hypotheses, and $\varphi$ is a purported conclusion. Logicians want to define a notion of proof, which is a formal analog to the notion of a sound argument from rhetoric. They would write $H \proves \varphi$ to mean that $\varphi$ is a provable consequence of $H$.

The theory of provability for classical propositional logic has been completely worked out, although there are some very deep computational questions (in the plural) that lurk here.

Definition of Proof: Let $H$ denote a set of propositional formulae, and $\varphi$ denote a single propositional formula. An $H$-proof of $\varphi$ is a sequence $\varphi_1$, $\varphi_2$, ..., $\varphi_n$ of propositional formulae such that

  1. for each $\varphi_i$, either
    a. $\varphi_i$ is an element of $H$, in which case we say that $\varphi_i$ is a hypothesis or assumption,
    b. $\varphi_i$ is a tautology, or
    c. there exist $j,k \lt i$ such that $\varphi_j \equiv \varphi_k \limplies \varphi_i$, in which case we say that $\varphi_i$ follows by modus ponens;
  2. $\varphi_n = \varphi$.

Note that in many presentations of the definition of a proof, item 1.b is further restricted to being a substitution instance of one of a finite set of tautological schemas.
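This definition is directly checkable. Here is a minimal sketch, assuming the Formula type and the brute-force tautology function from above (checkProof is our own, hypothetical, name); we implement clause 1.b with the brute-force tautology test:

```haskell
-- Check an H-proof: each line must be a hypothesis, a tautology, or
-- follow from two earlier lines by modus ponens; the last line must be
-- the purported conclusion.
checkProof :: [Formula] -> Formula -> [Formula] -> Bool
checkProof hyps goal steps =
    not (null steps) && last steps == goal && all ok (zip [0 ..] steps)
  where
    ok (i, phi) = phi `elem` hyps                                 -- hypothesis
               || tautology phi                                   -- tautology
               || or [pj == Imp pk phi | pj <- prev, pk <- prev]  -- modus ponens
      where prev = take i steps
```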

Note that if $\proves \varphi$, i.e., we can prove $\varphi$ without hypotheses, then $\varphi$ is a tautology. This follows from a simple induction on proof length.

Also note that what's happening with modus ponens looks a lot like a type calculation. This is no accident, although working this connection out in detail involves a digression into the intuitionistic propositional calculus, which we will forgo, but hint at repeatedly.

One of the great virtues of functional programming languages is that they can deal with systems like this fairly easily, so it is no big deal to write a propositional proof-checker in a functional language like Haskell. It's a bit more of a challenge to write a proof-generator, though, and automated theorem proving is a classic application area for languages like Haskell. But before we get into coding, let's work on developing some skill in proving theorems.

The Deduction Theorem is a tremendously useful meta-principle: if $H,\varphi \proves \psi$, then $H \proves \varphi \limplies \psi$. The Deduction Theorem is proven by induction on proof length, and is routinely covered in introductory logic classes. It often feels like cheating: not only do we end up with a simpler statement to prove, we have more hypotheses with which to prove it.

The converse of the Deduction Theorem, i.e., if $H \proves \varphi \limplies \psi$ then $H,\varphi \proves \psi$, follows immediately from modus ponens and from the monotonicity of $\proves$ with respect to the hypothesis set.

Short example: $\proves \alpha \limplies \beta \limplies \alpha$.

This is a tautology, but it's easier to prove than to verify:

  1. $\alpha,\beta \proves \alpha$, hypothesis
  2. $\alpha \proves \beta \limplies \alpha$, deduction theorem
  3. $\proves \alpha \limplies \beta \limplies \alpha$, deduction theorem

This style of reasoning, using just hypotheses, the deduction theorem, and modus ponens, is called natural deduction, and it is very powerful.

Now, it is not entirely irrelevant that const :: a -> b -> a in Haskell: the type of this (pure) function has essentially the form of the tautology we've just proven. Indeed, $\alpha \limplies \beta \limplies \alpha$ is an intuitionistic/constructive tautology, and all intuitionistic tautologies are classical tautologies (but not conversely!).
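In fact, the Prelude's definition of const can be read as a constructive proof of this tautology: given evidence x for a, it produces a function that ignores its b argument and returns x.

```haskell
-- The Prelude's definition: a term whose type is (a proof of) a -> b -> a.
const :: a -> b -> a
const x _ = x
```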

*Exercise 23.1. Give a natural deduction proof of $(\alpha \limplies \beta) \limplies (\beta \limplies \gamma) \limplies (\alpha \limplies \gamma)$. What standard Haskell function has a similar type?

A key problem in the definition of a proof given above is “how do we know that a formula $\varphi$ is a tautology?” Basically, we have two choices at this point:

  1. We can evaluate it under all valuations (note that we need only consider partial valuations, defined on the variables that occur within $\varphi$; if $n$ distinct variables occur in $\varphi$, there are $2^n$ such valuations), as in the brute-force tautology checker sketched above.
  2. We can prove it without hypotheses.

Quine's method is often a useful third alternative. In Quine's method, we choose a variable $x$, and we create two new formulae $\varphi[x:=\ltrue]$ and $\varphi[x:=\lfalse]$, substituting $\ltrue$ and $\lfalse$ respectively for $x$ in $\varphi$. If both of these formulae are tautologies (facts that we can use Quine's method to verify), then the original formula is a tautology too. If either of these formulae is not a tautology, then the original formula isn't either.

What makes Quine's method so useful is that we can often greatly simplify the formulae $\varphi[x:=\ltrue]$ and $\varphi[x:=\lfalse]$, using a few simple rules based on the truth tables above:

  1. $\lnot\ltrue \equiv \lfalse$, and $\lnot\lfalse \equiv \ltrue$;
  2. $\ltrue \land \alpha \equiv \alpha \land \ltrue \equiv \alpha$, and $\lfalse \land \alpha \equiv \alpha \land \lfalse \equiv \lfalse$;
  3. $\ltrue \lor \alpha \equiv \alpha \lor \ltrue \equiv \ltrue$, and $\lfalse \lor \alpha \equiv \alpha \lor \lfalse \equiv \alpha$;
  4. $\ltrue \limplies \alpha \equiv \alpha$, $\lfalse \limplies \alpha \equiv \ltrue$, and $\alpha \limplies \ltrue \equiv \ltrue$.

We'll often add the rule

$$\alpha \limplies \lfalse \equiv \lnot\alpha$$

as well.
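Quine's method is also easy to mechanize. A sketch, assuming the Formula type and the vars function from the earlier sketches (subst, simplify, and quine are our own names):

```haskell
-- Substitute a boolean constant for a variable throughout a formula.
subst :: String -> Bool -> Formula -> Formula
subst x b (Var y)
  | x == y    = Const b
  | otherwise = Var y
subst _ _ (Const c) = Const c
subst x b (Not p)   = Not (subst x b p)
subst x b (And p q) = And (subst x b p) (subst x b q)
subst x b (Or  p q) = Or  (subst x b p) (subst x b q)
subst x b (Imp p q) = Imp (subst x b p) (subst x b q)

-- Constant simplification, using the rules above.
simplify :: Formula -> Formula
simplify (Not p) = case simplify p of
  Const b -> Const (not b)
  p'      -> Not p'
simplify (And p q) = case (simplify p, simplify q) of
  (Const True,  q') -> q'
  (Const False, _)  -> Const False
  (p', Const True)  -> p'
  (_,  Const False) -> Const False
  (p', q')          -> And p' q'
simplify (Or p q) = case (simplify p, simplify q) of
  (Const True,  _)  -> Const True
  (Const False, q') -> q'
  (_,  Const True)  -> Const True
  (p', Const False) -> p'
  (p', q')          -> Or p' q'
simplify (Imp p q) = case (simplify p, simplify q) of
  (Const True,  q') -> q'
  (Const False, _)  -> Const True
  (_,  Const True)  -> Const True
  (p', Const False) -> Not p'    -- the extra rule: α ⟹ ⊥ ≡ ¬α
  (p', q')          -> Imp p' q'
simplify f = f                   -- variables and constants

-- Quine's method: simplify, then split on a remaining variable.
quine :: Formula -> Bool
quine f = case vars f' of
    []      -> f' == Const True
    (x : _) -> quine (subst x True f') && quine (subst x False f')
  where
    f' = simplify f
```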

We begin with a simple example.

Example: Show $\varphi \equiv \alpha \limplies \beta \limplies \alpha$ is a tautology by Quine's method.

We begin by splitting on $\alpha$. This gives us two formulae: $\varphi[\alpha:=\ltrue] \equiv \ltrue \limplies \beta \limplies \ltrue$, and $\varphi[\alpha:=\lfalse] \equiv \lfalse \limplies \beta \limplies \lfalse$. The first of these reduces to $\ltrue \limplies \ltrue$, and thence to $\ltrue$; the second reduces directly to $\ltrue$. Both are tautologies, and therefore $\alpha \limplies \beta \limplies \alpha$ is also a tautology.

Our next example is a bit more complicated.

Example: Show $\varphi \equiv (\alpha \lor \beta) \limplies (\alpha \limplies \gamma) \limplies (\beta \limplies \gamma) \limplies \gamma$ is a tautology by Quine's method.

We begin by splitting on $\alpha$.

  1. $\varphi[\alpha:=\ltrue]$. We have \[ \begin{array}{lcl} \varphi[\alpha:=\ltrue] & \equiv & (\ltrue \lor \beta) \limplies (\ltrue \limplies \gamma) \limplies (\beta \limplies \gamma) \limplies \gamma\\ & \equiv &\ltrue \limplies \gamma \limplies (\beta \limplies \gamma) \limplies \gamma\\ & \equiv &\gamma \limplies (\beta \limplies \gamma) \limplies \gamma\\ \end{array} \] This is a substitution instance of the known tautology $\alpha \limplies \beta \limplies \alpha$, and so must also be a tautology.
  2. $\varphi[\alpha:=\lfalse]$. We have \[ \begin{array}{lcl} \varphi[\alpha:=\lfalse] & \equiv & (\lfalse \lor \beta) \limplies (\lfalse \limplies \gamma) \limplies (\beta \limplies \gamma) \limplies \gamma\\ & \equiv & \beta \limplies \ltrue \limplies (\beta \limplies \gamma) \limplies \gamma\\ & \equiv & \beta \limplies (\beta \limplies \gamma) \limplies \gamma\\ \end{array} \] This is also a tautology; it's basically modus ponens expressed as a formula. But for the sake of completeness, let's go ahead and show that it's a tautology by splitting on $\beta$. Let $\psi = \beta \limplies (\beta \limplies \gamma) \limplies \gamma$.
    1. $\psi[\beta:=\ltrue]$. \[ \begin{array}{lcl} \psi[\beta:=\ltrue] & \equiv & \ltrue \limplies (\ltrue \limplies \gamma) \limplies \gamma\\ &\equiv& (\ltrue \limplies \gamma) \limplies \gamma\\ &\equiv& \gamma \limplies \gamma\\ \end{array} \] and $\gamma\limplies\gamma$ is a familiar tautology.
    2. $\psi[\beta:=\lfalse]$. \[ \begin{array}{lcl} \psi[\beta:=\lfalse] & \equiv & \lfalse \limplies (\lfalse \limplies \gamma) \limplies \gamma\\ &\equiv&\ltrue\\ \end{array} \] and $\ltrue$ is the ultimate tautology.
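Assuming the quine sketch from above, we can confirm this mechanically in GHCi:

```haskell
ghci> let [a, b, g] = map Var ["alpha", "beta", "gamma"]
ghci> quine (Imp (Or a b) (Imp (Imp a g) (Imp (Imp b g) g)))
True
```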

*Exercise 23.2. The formula we just proved, $\varphi \equiv (\alpha \lor \beta) \limplies (\alpha \limplies \gamma) \limplies (\beta \limplies \gamma) \limplies \gamma$, is also similar to the type of a Haskell Prelude function. What function is it? Hint: What standard type is most analogous to $\lor$?

There are a number of standard tautologies which are worth committing to memory, e.g.:

  1. The law of the excluded middle: $\alpha \lor \lnot\alpha$,
  2. The law of non-contradiction: $\lnot(\alpha \land \lnot\alpha)$,
  3. Modus ponens, internalized: $\alpha \limplies (\alpha \limplies \beta) \limplies \beta$, and
  4. The Sherlock Holmes syllogism: $(\alpha \lor \beta) \limplies \lnot\alpha \limplies \beta$, i.e., "when you have eliminated the impossible, whatever remains, however improbable, must be the truth."

*Exercise 23.3. Prove the Sherlock Holmes syllogism by Quine's method.

There is a crucial connection between provability and truth. We write $\Gamma \models \alpha$ if for all valuations $\nu$, if $\nu(\gamma) = \ltrue$ for all $\gamma \in \Gamma $, then $\nu(\alpha) = \ltrue$ also, i.e., $\Gamma \models \alpha$ means that whenever the hypotheses are all true, then the conclusion is also true.

There are two crucial theorems, which together show that provability $\proves$ and entailment $\models$ are equivalent:

  1. Soundness: if $\Gamma \proves \alpha$, then $\Gamma \models \alpha$, and
  2. Completeness: if $\Gamma \models \alpha$, then $\Gamma \proves \alpha$.

Proving these theorems is beyond the scope of this course (in the sense of subject matter, not preparation).

We write $\alpha \equiv \beta$, if $\alpha$ and $\beta$ are semantically equivalent, i.e., if $\alpha \models \beta$ and $\beta \models \alpha$. By the equivalence of $\models$ and $\proves$, this also means that $\alpha$ and $\beta$ are provably equivalent, and so we'll often refer to this relation as simply “equivalent” henceforth.

Theorem (Referential Transparency): If $\alpha_i \equiv \beta_i$, for $i \in \{1,2,\ldots,k\}$, then

  1. $\gamma[x_1:=\alpha_1,x_2:=\alpha_2,...,x_k:=\alpha_k] \equiv \gamma[x_1:=\beta_1,x_2:=\beta_2,...,x_k:=\beta_k]$, and
  2. If $\Gamma \proves \gamma$, then $\Gamma[x_1:=\alpha_1,x_2:=\alpha_2,...,x_k:=\alpha_k] \proves \gamma[x_1:=\alpha_1,x_2:=\alpha_2,...,x_k:=\alpha_k]$.
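Simultaneous substitution of formulae for variables, as used in this theorem, is another straightforward structural recursion. A sketch, assuming the Formula type above (substAll is our own name):

```haskell
-- Simultaneously substitute formulae for variables; variables not in
-- the map are left alone.
substAll :: Map.Map String Formula -> Formula -> Formula
substAll s f@(Var x)   = Map.findWithDefault f x s
substAll _ f@(Const _) = f
substAll s (Not p)     = Not (substAll s p)
substAll s (And p q)   = And (substAll s p) (substAll s q)
substAll s (Or  p q)   = Or  (substAll s p) (substAll s q)
substAll s (Imp p q)   = Imp (substAll s p) (substAll s q)
```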

This is often used in the case where $\Gamma$ is empty, i.e., every substitution instance of a tautology is also a tautology. We saw this kind of reasoning before when we claimed $\gamma \limplies (\beta \limplies \gamma) \limplies \gamma$ is a substitution instance of $\alpha \limplies \beta \limplies \alpha$ (which we can now make precise via the substitution $[\alpha:=\gamma,\beta:=\beta \limplies \gamma]$).

This makes our life a lot easier, because it often enables us to work with complicated formulae by replacing various subformulae with equivalent subformulae.

There are a few equivalences that are crucially important to know, and often get used in applications of referential transparency.

  1. Double negation elimination: $\alpha \equiv \lnot\lnot\alpha$,
  2. de Morgan's laws:
    1. $\alpha \lor \beta \equiv \lnot(\lnot \alpha \land \lnot\beta)$, and
    2. $\alpha \land \beta \equiv \lnot(\lnot \alpha \lor \lnot\beta)$;
  3. Contraposition: $\alpha \limplies \beta \equiv \lnot\beta \limplies \lnot\alpha$, and
  4. The definition of implication: $\alpha \limplies \beta \equiv \lnot\alpha \lor \beta$.
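These equivalences also drive useful normal forms. As a final sketch (assuming the Formula type above), repeatedly applying the definition of implication, de Morgan's laws, and double negation elimination pushes every negation down to a variable, yielding negation normal form:

```haskell
-- Negation normal form: no implications, and negations applied only to
-- variables. Each clause applies one of the equivalences above.
nnf :: Formula -> Formula
nnf (Imp p q)       = nnf (Or (Not p) q)               -- definition of implication
nnf (Not (Not p))   = nnf p                            -- double negation elimination
nnf (Not (And p q)) = Or  (nnf (Not p)) (nnf (Not q))  -- de Morgan
nnf (Not (Or  p q)) = And (nnf (Not p)) (nnf (Not q))  -- de Morgan
nnf (Not (Imp p q)) = nnf (Not (Or (Not p) q))         -- definition of implication
nnf (Not (Const b)) = Const (not b)
nnf (And p q)       = And (nnf p) (nnf q)
nnf (Or  p q)       = Or  (nnf p) (nnf q)
nnf f               = f                                -- variables, constants, negated variables
```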