
First Order Logic

COMP1600 / COMP6260

Australian National University

Semester 2, 2021

First Order Natural Deduction: Example

1  ∀x(elephant(x) → happy(x))      premise
2  elephant(Appu)                  premise
3  elephant(Appu) → happy(Appu)    ∀-E, 1
4  happy(Appu)                     →-E, 2, 3

2/50

Natural deduction in first-order logic

Proof rules for propositional natural deduction + quantifier rules:

- ∀-E universal elimination
- ∀-I universal introduction
- ∃-E existential elimination
- ∃-I existential introduction

Proof in first-order logic is usually based on these rules, together with the rules for propositional logic.

3/50

Elimination

∀-E (universal elimination)

    ∀x. P(x)
    ────────
      P(a)

n    ∀x. Fish(x) → HasFins(x)
     ⋮
m    Fish(a) → HasFins(a)    ∀-E, n

If a predicate is true for all members of a domain, then it is also true for a specific one (a must be a member of the domain).

4/50

Introduction

∀-I (universal introduction)

      P(a)
    ────────    (a arbitrary, a variable)
    ∀x. P(x)

n    a │ ⋮
m      │ Cat(a) → EatsFish(a)
m+1    ∀x. Cat(x) → EatsFish(x)    ∀-I, m

The a on the left of the bar is a guard which reminds us that:

- this variable is local to the inner derivation, and
- it cannot be free in an assumption.

It is like an “assumption” that a is an arbitrary member of the domain. That is, the proof from lines n to m must work for anything in place of a.

5/50

Free and bound variables

Bound: Every occurrence of variable x in ∀ x. p(x) and in ∃ x. p(x) is bound.

Free: Every occurrence of a variable that is not bound is free.

Example.

    (∀x. x+y=y+x) ∧ (∀x. ∃y. y>x+z)

Q. Which occurrences of variables are free and which are bound?

A. All occurrences of x are bound; none of z are; and just the last two occurrences of y are bound.

Hence the occurrence of z is free, as are the first two occurrences of y.

6/50
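The free/bound distinction can be made concrete in Haskell, the language used later in these slides. The following is an illustrative sketch only, not course code; the datatype and all names (`Term`, `Formula`, `free`, `example`) are made up for this example:

```haskell
import qualified Data.Set as Set

-- A tiny first-order syntax, just enough for the example above.
data Term    = Var String | Fun String [Term]
data Formula = Pred String [Term]
             | And Formula Formula
             | Forall String Formula
             | Exists String Formula

varsT :: Term -> Set.Set String
varsT (Var v)    = Set.singleton v
varsT (Fun _ ts) = Set.unions (map varsT ts)

-- Free variables: occurrences not captured by an enclosing quantifier.
free :: Formula -> Set.Set String
free (Pred _ ts)  = Set.unions (map varsT ts)
free (And p q)    = free p `Set.union` free q
free (Forall x p) = Set.delete x (free p)
free (Exists x p) = Set.delete x (free p)

-- (∀x. x+y = y+x) ∧ (∀x. ∃y. y > x+z)
example :: Formula
example = And
  (Forall "x" (Pred "=" [Fun "+" [Var "x", Var "y"], Fun "+" [Var "y", Var "x"]]))
  (Forall "x" (Exists "y" (Pred ">" [Var "y", Fun "+" [Var "x", Var "z"]])))

main :: IO ()
main = print (Set.toList (free example))  -- only y and z occur free
```

Running it agrees with the answer above: the free variables of the example are exactly y and z.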

Breaching the arbitrariness requirement

When we generalise for a variable a, the same proof steps must be possible for all members of the domain.

1  (Cat(kitty) → HasFur(kitty)) ∧ Cat(kitty)    assumption
2  Cat(kitty) → HasFur(kitty)                   ∧-E, 1
3  Cat(kitty)                                   ∧-E, 1
4  HasFur(kitty)                                →-E, 2, 3
5  ∀x. HasFur(x)                                ∀-I, 4    WRONG

WRONG because kitty appears in an assumption (step 1), and step 4 is still in the scope of that assumption.

7/50

Example

(∀x∀yP(x,y)) ↔ (∀y∀xP(x,y))

1          ∀x∀yP(x,y)      premise
2  b │ a │ ∀yP(a,y)        ∀-E, 1
3    │   │ P(a,b)          ∀-E, 2
4    │     ∀xP(x,b)        ∀-I, 3
5          ∀y∀xP(x,y)      ∀-I, 4

Exercise: also show the reverse to get equivalence, ↔

8/50


Introduction

∃-I (existential introduction)

      P(a)
    ────────
    ∃x P(x)

n    Dog(fido)
     ⋮
m    ∃x Dog(x)    ∃-I, n

9/50

An invalid argument

This argument is invalid if the domain is empty.

1  ∀xP(x)    premise
2  P(a)      ∀-E, 1
3  ∃xP(x)    ∃-I, 2

Which step is invalid?

10/50

Elimination

∃-E (existential elimination)

                a │ [P(a)]
                  │   ⋮
    ∃x P(x)       │   q
    ──────────────────────    (a arbitrary, a variable; a not free in q)
             q

Rationale: if P(x) holds for some individual x,

- let that individual be called a (so P(a) holds)
- prove that q follows
- as q doesn’t involve our choice of a, q holds regardless of which individual has P true.

The proof of q from P(a) must work for any individual in place of a.

11/50

Example

Prove ∃x Huge(x) from ∃x Elephant(x) and ∀x(Elephant(x) → Huge(x)).

1        ∃x Elephant(x)               premise
2        ∀x(Elephant(x) → Huge(x))    premise
3  a │   Elephant(a)                  assumption
4    │   Elephant(a) → Huge(a)        ∀-E, 2
5    │   Huge(a)                      →-E, 3, 4
6    │   ∃x Huge(x)                   ∃-I, 5
7        ∃x Huge(x)                   ∃-E, 1, 3–6

The notation reflects an assumption: since there is some individual x such that Elephant(x), assume that individual is a.

12/50


Swapping the order of existential quantifiers

(∃x∃yP(x,y)) ↔ (∃y∃xP(x,y))

1          ∃x∃yP(x,y)      premise
2  a │     ∃yP(a,y)        assumption
3    │ b │ P(a,b)          assumption
4    │   │ ∃xP(x,b)        ∃-I, 3
5    │   │ ∃y∃xP(x,y)      ∃-I, 4
6    │     ∃y∃xP(x,y)      ∃-E, 2, 3–5
7          ∃y∃xP(x,y)      ∃-E, 1, 2–6

Exercise: also show the converse to get equivalence, ↔

13/50


Swapping the order of existential and universal quantifiers

Proof of ∀y∃xP(x, y) from ∃x∀yP(x, y).

1          ∃x∀yP(x,y)      premise
2  b │ a │ ∀yP(a,y)        assumption
3    │   │ P(a,b)          ∀-E, 2
4    │   │ ∃xP(x,b)        ∃-I, 3
5    │     ∃xP(x,b)        ∃-E, 1, 2–4
6          ∀y∃xP(x,y)      ∀-I, 5

14/50


Swapping the order of existential and universal quantifiers

Another proof of ∀y∃xP(x, y) from ∃x∀yP(x, y).

We got the previous proof by first looking at the goal, (∀y. …), so using ∀-I. Here we first look at what we have, (∃x. …), and so use ∃-E.

1          ∃x∀yP(x,y)      premise
2  a │     ∀yP(a,y)        assumption
3    │ b │ P(a,b)          ∀-E, 2
4    │   │ ∃xP(x,b)        ∃-I, 3
5    │     ∀y∃xP(x,y)      ∀-I, 4
6          ∀y∃xP(x,y)      ∃-E, 1, 2–5

15/50


Can quantifiers always be swapped?

∃x∀yEats(x, y): there is an animal that can eat all foods.

∀y∃xEats(x, y): all foods can be eaten by some animal.

We have just proved the first entails the second. What about the other direction?

    ∀y∃xEats(x, y)    (all foods can be eaten by some animal)
    ──────────────
    ∃x∀yEats(x, y)    (there is an animal that can eat all foods)

Is this second version true? Try to prove it. What happens?

16/50

The “quantifier negation” equivalence

Prove ∀x. ¬P(x) from ¬(∃x. P(x)), one direction of the equivalence (∀x. ¬P(x)) ↔ ¬(∃x. P(x)).

1        ¬(∃x. P(x))    premise
2  a │   P(a)           assumption
3    │   ∃x. P(x)       ∃-I, 2
4    │   F              ¬E, 1, 3
5        ¬P(a)          ¬I, 2–4
6        ∀x. ¬P(x)      ∀-I, 5

17/50


The “quantifier negation” equivalence

Proof of the converse:

1          ∀x. ¬P(x)      premise
2  │       ∃x. P(x)       assumption
3  │ a │   P(a)           assumption
4  │   │   ¬P(a)          ∀-E, 1
5  │   │   F              ¬E, 3, 4
6  │       F              ∃-E, 2, 3–5
7          ¬(∃x. P(x))    ¬I, 2–6

18/50


Again: Two sides of (the same?) Coin

Validity. A formula is valid if it is true in all structures.

Provability. A formula is provable if it has a natural deduction proof.

Recall propositional logic: a formula is provable in natural deduction iff it is true for all truth-value assignments.

Soundness. All provable formulae are valid. (Application of the proof rules maintains validity of formulae.)

Completeness. All valid formulae are provable. (Difficult proof, via so-called “Henkin models”.)

Soundness and completeness are the glue between valid and provable.

19/50

Metalogic of first order logic

First order natural deduction is sound and complete, so you can find a proof of any valid statement.

But there is no finite analogue of truth tables here: you can't actually prove or disprove a statement by truth tables.

If there is a proof, you can find it by mindlessly trying all sequences of rules of length 1, length 2, …

But if you don't find a proof, you haven't established anything.

Small gain.

- Checking validity directly would mean checking all models, including infinite ones.
- Trying all proofs may at least yield a proof.

First order logic is semi-decidable (later in the course).

20/50

Structural Induction

So far.

- the “mechanics” of reasoning
- fully generic, applies to all sets

Now. Induction principles

- allow us to prove properties about “special” sets
- can be thought of as additional axioms to natural deduction, but we will leave natural deduction implicit from now on.

In more detail.

- Induction on the natural numbers: review
- Structural induction over lists
- Structural induction over trees
- The principle that the structural induction rule for a particular data type follows from its definition

21/50

Natural Number Induction

This is the induction you already know.

To prove a property P for all natural numbers:

- Prove it for 0.
- Prove that, if it is true for n, it is true for n+1.

The principle is usually expressed as a rule of inference:

    P(0)    ∀n. P(n) → P(n+1)
    ─────────────────────────
           ∀n. P(n)

It is an additional principle that allows us to prove facts.

22/50

Why does it Work?

The natural numbers are an inductively defined set:

1. 0 is a natural number;
2. if n is a natural number, so is n+1.

No object is a natural number unless justified by these clauses.

From the assumptions P(0) and ∀n. P(n) → P(n+1) we get a sequence of deductions:

    P(0), P(1), P(2), P(3), …

which justifies the conclusion for any n you choose:

- P(0) is given
- obtain P(0) → P(1) by (∀-E), and then get P(1) using (→-E)
- obtain P(2), P(3), … in the same way.

23/50

Example of Mathematical Induction

Let’s prove this property of natural numbers:

    Σ_{i=0}^{n} i = n × (n+1) / 2

(The slide illustrates the identity with a triangle diagram of ◦ and • dots.)

First the base case P(0):

    Σ_{i=0}^{0} i = 0 × (0+1) / 2

This is obviously true because both sides equal 0.

24/50

The Step Case

The step case is of the form ∀n. P(n) → P(n+1).

Q. How do we prove a formula of this form?

A1. How would we do it in natural deduction?

1  a │ P(a)               assumption
2    │ horrendous proof
3    │ P(a+1)
4    P(a) → P(a+1)        →-I, 1–3
5    ∀n. P(n) → P(n+1)    ∀-I, 4

25/50


The Step Case, Again

The step case is of the form ∀n. P(n) → P(n+1).

Q. How do we prove a formula of this form?

A2. What the natural deduction proof really means:

- pick an arbitrary variable a
- assume that P(a) and prove P(a+1)
- this amounts to P(a) → P(a+1)
- as a was arbitrary, this amounts to ∀n. P(n) → P(n+1)

26/50

How the Step Case Plays Out

Recall. Want to prove ∀n. P(n), where

    P(n)  =  Σ_{i=0}^{n} i = n × (n+1) / 2

Step case.

    ∀n. ( Σ_{i=0}^{n} i = n × (n+1) / 2 )  →  ( Σ_{i=0}^{n+1} i = (n+1) × (n+1+1) / 2 )

Let a be arbitrary and assume P(a), i.e.

    (IH)  Σ_{i=0}^{a} i = a × (a+1) / 2

The assumption (IH) is called the induction hypothesis. Need to use it to prove P(a+1).

27/50

Step Case – Detailed Proof

Assume P(a), that is

    Σ_{i=0}^{a} i = a × (a+1) / 2

Prove P(a+1), that is

    Σ_{i=0}^{a+1} i = (a+1) × ((a+1)+1) / 2

Calculation:

    Σ_{i=0}^{a+1} i = Σ_{i=0}^{a} i + (a+1)
                    = a × (a+1) / 2 + (a+1)            (by IH)
                    = a × (a+1) / 2 + 2 × (a+1) / 2
                    = (a+2) × (a+1) / 2
                    = (a+1) × (a+2) / 2

28/50

Wrapping up the proof

Recall. Proof rule for induction over natural numbers:

    P(0)    ∀n. P(n) → P(n+1)
    ─────────────────────────
           ∀n. P(n)

We have proved both premises of the induction rule, P(0) and ∀n. P(n) → P(n+1), so that applying the rule gives ∀n. P(n).

We have demonstrated this for a particular P(n), i.e.

    P(n)  =  Σ_{i=0}^{n} i = n × (n+1) / 2

in both the base case and the induction step.

29/50

Back to Programs

Q. How would we implement summation, e.g. in Haskell?

A. For example, like so:

sfz :: Int -> Int

sfz 0 = 0

sfz n = n + sfz (n-1)

Similarity to induction proofs:

- prove that ∀n. P(n)
- prove that P(0)
- prove that P(a) → P(a+1)

Slogan. Recursive definitions ≈ inductive proofs.

30/50
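Before proving anything, the closed form can at least be tested empirically. A minimal sketch (repeating the `sfz` definition above so the snippet is self-contained):

```haskell
sfz :: Int -> Int
sfz 0 = 0
sfz n = n + sfz (n-1)

-- Compare sfz against the closed form n*(n+1)/2 on a range of inputs.
main :: IO ()
main = print (all (\n -> sfz n == n * (n + 1) `div` 2) [0..100])  -- True
```

A finite check is no proof, of course; that is exactly what the induction argument on the next slides supplies.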

Example: Proofs about a Program

Given. The definition of the program, in our case:

sfz :: Int -> Int
sfz 0 = 0              -- SFZ0
sfz n = n + sfz (n-1)  -- SFZ1

Goal. To prove that ∀n. sfz n = n × (n+1) / 2.

Strategy. Induction on n.

Base Case.

    sfz 0 = 0 = 0 × (0+1) / 2    (by SFZ0)

31/50

Example: Proofs about a Program

Given. The definition of the program, in our case:

sfz :: Int -> Int
sfz 0 = 0              -- SFZ0
sfz n = n + sfz (n-1)  -- SFZ1

Step Case. Pick an arbitrary a and assume

    (IH)  sfz a = a × (a+1) / 2

Goal. Show that sfz (a+1) = (a+1) × (a+1+1) / 2.

    sfz (a+1) = (a+1) + sfz (a+1−1)        (by SFZ1)
              = (a+1) + sfz a              (by arithmetic)
              = (a+1) + a × (a+1) / 2      (by IH)
              = (a+1) × (a+1+1) / 2        (arithmetic, see before)

32/50

Basic Anatomy of an Induction Proof

Base Case (n = 0).

- usually trivial
- uses the base case of the recursive definition

Step Case (a → a+1).

- assume the property for an arbitrary a: the induction hypothesis (IH)
- massage the goal (the property for a+1) so that (IH) applies; this usually uses the recursive step in the definition
- apply (IH) to prove the step case, i.e. the property for a+1

Justification.

- simple facts (e.g. arithmetic) can be justified by saying just that
- applied equations need to be justified explicitly.

33/50

Why do we care?

Program Correctness.

- we have a formal proof that a function computes what it should
- the function is operational whereas the property is descriptive
- two different angles on the same function.

Optimisation.

- given: a slow implementation of a function, say slow
- hypothesis: a faster implementation, say fast, does the same job
- a proof of ∀n. slow(n) = fast(n) allows us to swap slow for fast.

34/50

Another Example

Given.

sumodd :: Int -> Int

sumodd 0 = 0

sumodd n = (2 * n - 1) + sumodd (n-1)

Q. What does this function do?

Answer. It computes the square of n, for n ≥ 0.

35/50
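Again, the claim is easy to check empirically before proving it. A minimal sketch (repeating the definition above so it runs standalone):

```haskell
sumodd :: Int -> Int
sumodd 0 = 0
sumodd n = (2 * n - 1) + sumodd (n - 1)

-- sumodd n sums the first n odd numbers; the claim is that this equals n².
main :: IO ()
main = print (all (\n -> sumodd n == n * n) [0..100])  -- True
```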

Inductive Proof of sumodd

Given.

sumodd :: Int -> Int
sumodd 0 = 0                           -- SO1
sumodd n = (2 * n - 1) + sumodd (n-1)  -- SO2

Goal. ∀n. sumodd n = n².

Base Case. Show that sumodd 0 = 0².

    sumodd 0 = 0 = 0²    (by SO1 and arithmetic)

36/50

Inductive Proof of sumodd

Given.

sumodd :: Int -> Int
sumodd 0 = 0                           -- SO1
sumodd n = (2 * n - 1) + sumodd (n-1)  -- SO2

Step Case.

Assume (IH): sumodd a = a².

Prove that sumodd (a+1) = (a+1)².

    sumodd (a+1) = 2 ∗ (a+1) − 1 + sumodd (a+1−1)    (by SO2)
                 = 2a + 1 + sumodd a                 (arithmetic)
                 = 2a + 1 + a²                       (by IH)
                 = (a+1)²                            (arithmetic)

37/50

Optimisation Example: Towers of Hanoi

Rules.

- three poles with disks of varying sizes
- larger disks may never be on top of smaller ones
- you may only move one disk at a time.

Q. How many moves to get n discs from pole 1 to pole 3?

A. Here’s a program!

t :: Int -> Int
t 0 = 0
t n = t (n-1) + 1 + t (n-1)

38/50
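The slide does not state a closed form, but the recurrence t n = 2 · t(n−1) + 1 has the well-known solution 2ⁿ − 1, which we can check empirically. A sketch (definition repeated for self-containedness):

```haskell
t :: Int -> Int
t 0 = 0
t n = t (n-1) + 1 + t (n-1)

-- The recurrence t n = 2 * t (n-1) + 1, t 0 = 0, has closed form 2^n - 1.
main :: IO ()
main = print (all (\n -> t n == 2 ^ n - 1) [0..15])  -- True
```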

Critique 1: This is super inefficient

Compare the two programs:

t :: Int -> Int
t 0 = 0
t n = t (n-1) + 1 + t (n-1)

tb :: Int -> Int
tb 0 = 0
tb n = 2*tb (n-1) + 1

Clearly t is bogged down by two identical recursive calls! Show that ∀a. t a = tb a by induction:

Base Case. t 0 = 0 = tb 0

Step Case. If t a = tb a, then t (a+1) = tb (a+1):

    t (a+1) = t a + 1 + t a    (def’n of t)
            = 2 ∗ t a + 1      (arith)
            = 2 ∗ tb a + 1     (IH)
            = tb (a+1)         (def’n of tb)

39/50
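The induction proof above can be cross-checked by simply running both programs. A sketch (definitions repeated so it runs standalone):

```haskell
t, tb :: Int -> Int
t 0 = 0
t n = t (n-1) + 1 + t (n-1)

tb 0 = 0
tb n = 2 * tb (n-1) + 1

-- t and tb compute the same function; tb avoids the duplicated call.
main :: IO ()
main = print (all (\n -> t n == tb n) [0..15])  -- True
```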

Critique 2: Even tb is not tail recursive

Compare the two programs:

tb :: Int -> Int
tb 0 = 0
tb n = 2 * tb (n-1) + 1

ta :: Int -> Int -> Int
ta 0 a = a
ta n a = ta (n-1) (2 * a + 1)

tt :: Int -> Int
tt n = ta n 0

Observation. tt is even better (faster) than tb. Let’s see if we can prove it …

Goal. ∀n. tb n = tt n.

40/50

Health Warning

The following slides contain lots of failed proof attempts.

- don’t be discouraged: the example is difficult by design
- it’s intended to demonstrate how things can be fixed

41/50

Proof Take 1: Let’s just do it!

tb :: Int -> Int

tb 0 = 0

tb n = 2 * tb(n-1) + 1

ta :: Int -> Int -> Int

ta 0 a = a

ta n a = ta (n-1) (2 * a + 1)

tt :: Int -> Int

tt n = ta n 0

Base Case.

    tb 0 = 0         (def’n of tb)
         = ta 0 0    (def’n of ta)
         = tt 0      (def’n of tt)

Step Case. Assume that tb n = tt n; prove tb (n+1) = tt (n+1):

    tb (n+1) = 2 ∗ tb n + 1      (def’n of tb)
             = 2 ∗ tt n + 1      (IH)
             = 2 ∗ ta n 0 + 1    (def’n of tt)
             = ???               (we’re stuck!)
             = ta (n+1) 0
             = tt (n+1)          (def’n of tt)

42/50

Analysis of Failure

tb :: Int -> Int

tb 0 = 0

tb n = 2 * tb(n-1) + 1

ta :: Int -> Int -> Int

ta 0 a = a

ta n a = ta (n-1) (2 * a + 1)

tt :: Int -> Int

tt n = ta n 0

Step Case. Assume that tb n = tt n; prove tb (n+1) = tt (n+1).

Failure. We couldn’t go

- from 2 ∗ ta n 0 + 1 (which we have obtained by applying IH)
- to ta (n+1) 0 (which is equal to tt (n+1))

Analysis.

- the recursion really happens in ta
- so maybe we need a statement that relates tb and ta?

43/50

Proof Take 2: Relate ta and tb

tb :: Int -> Int

tb 0 = 0

tb n = 2 * tb(n-1) + 1

ta :: Int -> Int -> Int

ta 0 a = a

ta n a = ta (n-1) (2 * a + 1)

tt :: Int -> Int

tt n = ta n 0

Show. ∀n. tb n = ta n 0.

Base Case. Left as exercise.

Step Case. Assume tb n = ta n 0; prove tb (n+1) = ta (n+1) 0:

    tb (n+1) = 2 ∗ tb n + 1        (def’n of tb)
             = 2 ∗ ta n 0 + 1      (IH)
             = ???                 (stuck again!)
             = ta n (2 ∗ 0 + 1)
             = ta (n+1) 0          (def’n of ta)

44/50

Analysis of Failure, Again . . .

tb :: Int -> Int

tb 0 = 0

tb n = 2 * tb(n-1) + 1

ta :: Int -> Int -> Int

ta 0 a = a

ta n a = ta (n-1) (2 * a + 1)

tt :: Int -> Int

tt n = ta n 0

We wanted. 2 ∗ ta n 0 + 1 = ta n (2 ∗ 0 + 1).

Problem. The second argument of ta changes!

Solution. Find a property that involves the second argument of ta.

45/50

Experiments

tb 3 = 7     ta 3 0 = 7     ta 3 1 = 15    ta 3 2 = 23    ta 3 3 = 31
tb 4 = 15    ta 4 0 = 15    ta 4 1 = 31    ta 4 2 = 47    ta 4 3 = 63

Wild Guess. How about ta n a = (tb n) + a ∗ (tb n + 1)?

This would give

    tb n = (tb n) + 0 ∗ (tb n + 1) = ta n 0 = tt n

so it would solve our problem.

46/50
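The wild guess can be machine-checked on many inputs before we invest in a proof. A sketch (definitions repeated so it runs standalone):

```haskell
tb :: Int -> Int
tb 0 = 0
tb n = 2 * tb (n-1) + 1

ta :: Int -> Int -> Int
ta 0 a = a
ta n a = ta (n-1) (2 * a + 1)

-- Conjecture: ta n a == tb n + a * (tb n + 1), for all n >= 0 and a.
main :: IO ()
main = print (and [ ta n a == tb n + a * (tb n + 1)
                  | n <- [0..12], a <- [0..12] ])  -- True
```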

Proof Take 3: Stronger Property

tb :: Int -> Int

tb 0 = 0

tb n = 2 * tb (n-1) + 1

ta :: Int -> Int -> Int

ta 0 a = a

ta n a = ta (n-1) (2 * a + 1)

tt :: Int -> Int

tt n = ta n 0

Show. ∀n. ∀a. ta n a = tb n + a ∗ (tb n + 1).

Base Case.

    ta 0 a = a                            (def’n of ta)
           = 0 + a ∗ (0 + 1)              (arith)
           = (tb 0) + a ∗ ((tb 0) + 1)    (def’n of tb)

so the base case still works.

47/50

Proof Take 3: Stronger Property

tb :: Int -> Int

tb 0 = 0

tb n = 2 * tb (n-1) + 1

ta :: Int -> Int -> Int

ta 0 a = a

ta n a = ta (n-1) (2 * a + 1)

tt :: Int -> Int
tt n = ta n 0

Step Case.

Assume (IH): ta n a = tb n + a ∗ (tb n + 1).

Show that ta (n+1) a = tb (n+1) + a ∗ (tb (n+1) + 1):

    ta (n+1) a = ta n (2 ∗ a + 1)                     (def’n of ta)
               = tb n + (2 ∗ a + 1) ∗ (tb n + 1)      (IH)
               = 2 ∗ tb n + 1 + 2 ∗ a ∗ (tb n + 1)    (lots of arith)
               = tb (n+1) + a ∗ (tb (n+1) + 1)        (def’n of tb)

so the step case also works!

48/50

Finally: Wrapping Up!

tb :: Int -> Int

tb 0 = 0

tb n = 2 * tb(n-1) + 1

ta :: Int -> Int -> Int

ta 0 a = a

ta n a = ta (n-1) (2 * a + 1)

tt :: Int -> Int
tt n = ta n 0

Show. ∀n. tb n = tt n.

    tt n = ta n 0                     (def’n of tt)
         = (tb n) + 0 ∗ (tb n + 1)    (by the property we have now!)
         = tb n                       (arith)

49/50
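The final theorem can be checked end to end as well. A sketch (definitions repeated so it runs standalone):

```haskell
tb :: Int -> Int
tb 0 = 0
tb n = 2 * tb (n-1) + 1

ta :: Int -> Int -> Int
ta 0 a = a
ta n a = ta (n-1) (2 * a + 1)

tt :: Int -> Int
tt n = ta n 0

-- tb and the tail-recursive tt agree on all non-negative inputs.
main :: IO ()
main = print (all (\n -> tb n == tt n) [0..20])  -- True
```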

Conceptual Digression

ta :: Int -> Int -> Int

ta 0 a = a

ta n a = ta (n-1) (2 * a + 1)

Changing Arguments.

- ta is a two-place (binary) function
- recursion is on the first argument (n)
- but the second argument (a) is not constant!

Solution.

- find a stronger property that involves the second argument
- usually: universally quantified

Example.

    P(n) = ∀a. ta n a = tb n + a ∗ (tb n + 1)

As a is universally quantified, the property holds for all a, even if a changes in the recursive call!

50/50