# Higher Order Functions in Haskell

Higher Order Functions in Haskell

Today:

Lazy Evaluation

Evaluation Strategy

A programming language’s policy on when to compute an expression that is being passed as an argument

Before being passed? Or after?

Definitely going to evaluate?

Or perhaps not?

Eager/Strict/Applicative Order Evaluation

All arguments are evaluated, and they are evaluated before being passed to the subroutine.

In the slide's example (not reproduced here), circle is called once: the argument is evaluated once, up front.

Evaluation order?

Eager / Strict / Applicative

(λb.b (λx.λy.x λa.a))

(λb.b λy.λa.a)

λy.λa.a

Lazy / Non-strict / Normal

(λb.b (λx.λy.x λa.a))

(λx.λy.x λa.a)

λy.λa.a

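The same terms can be written directly in Haskell syntax. This is a minimal sketch of my own; the extra arguments `undefined` and `42` are additions so the resulting function has something to consume. Because the λy argument is never used, even `undefined` is safe under lazy evaluation:

```haskell
-- λb.b applied to (λx.λy.x λa.a); both strategies reduce it to λy.λa.a.
result :: Int
result = ((\b -> b) ((\x -> \y -> x) (\a -> a))) undefined 42

main :: IO ()
main = print result  -- the λy argument is ignored, so undefined is never forced
```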

Normal Order Evaluation

A representation of the unevaluated argument is passed to the subroutine, and then each is evaluated at the last possible moment as many times as it appears:

(sum-of-squares (+ 5 1) (* 5 2))

(+ (square (+ 5 1)) (square (* 5 2)))

(+ (* (+ 5 1) (+ 5 1)) (* (* 5 2) (* 5 2)))

(+ (* 6 6) (* 10 10))

(+ 36 100)

136

Lazy Evaluation

A representation of the unevaluated argument is passed to the subroutine, and then evaluated at the last possible moment.

When an argument is finally evaluated in a lazy language, the result of the evaluation is saved (memoized). Further uses of the argument in the function use the saved value.

Thus the original argument is evaluated at most one time, and as we’ll see, possibly not at all. Lazy is a sub-category of normal order, but is as efficient as applicative order evaluation.

(sum-of-squares (+ 5 1) (* 5 2))

(+ (square (+ 5 1)) (square (* 5 2)))

(+ (* (+ 5 1) (+ 5 1)) (* (* 5 2) (* 5 2)))

(+ (* 6 6) (* 10 10))

(+ 36 100)

136

The steps look the same as in normal order, but here each argument, e.g. (+ 5 1), is computed only once and the result is shared between its two uses.
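We can watch the "at most once" behavior in GHC with Debug.Trace. This is a sketch; sumOfSquares is my transcription of the Scheme example above. Each trace message fires only when its thunk is actually forced, and fires once, even though square uses its argument twice:

```haskell
import Debug.Trace (trace)

square :: Int -> Int
square n = n * n          -- n appears twice, but its thunk is evaluated once

sumOfSquares :: Int -> Int -> Int
sumOfSquares x y = square x + square y

main :: IO ()
main = print (sumOfSquares (trace "eval (+ 5 1)" (5 + 1))
                           (trace "eval (* 5 2)" (5 * 2)))
-- each trace message appears exactly once, followed by 136
```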

Haskell laziness

> head []

*** Exception: Prelude.head: empty list

> (\x -> x + 1) 3

4

> (\x -> 3) (head [])


First results are shown on student handouts.

Haskell laziness

> head []

*** Exception: Prelude.head: empty list

> (\x -> x + 1) 3

4

> (\x -> 3) (head [])

3


Why be lazy? Part One

Like people, a language can sometimes delay computational work to the point of never having to do it at all.

So laziness is in large part about efficiency: not having to do more work than necessary.

For Example

Here’s a code snippet in Scala, a multi-paradigm language that evaluates eagerly by default but can evaluate lazily if you explicitly ask it to. (The snippet itself appears on the slide: it binds temp to compute(5), where compute prints “called…”, then tests if (x > 5 && temp > 10) with x = 4, printing “no result” when the test fails.) Here, it’s evaluating eagerly.

What is the output?

called…

no result

For Example

SHOULD the output have been “called…no result”?

Arguably not: compute(5) did not actually need to be evaluated.

In the Boolean expression if (x > 5 && temp > 10), x is 4, so the short-circuiting && never needs the value of temp, and therefore never needs the result of compute(5).

For Example

This is the same code, except now I have annotated the temp binding with Scala’s lazy keyword (lazy val temp = compute(5)). Here, we are taking advantage of Scala’s ability to evaluate lazily when explicitly asked to do so. The variable temp will not be evaluated until it is actually needed.

So what will the output be?

“no result”

In other words, compute was never evaluated. It didn’t have to be!

The moral of the previous Scala example: if you can skip a computation entirely, you save work, and skipping computation is only possible with a lazy (or explicitly delayed) evaluation strategy.

The efficiency of lazy evaluation is particularly clear in this example, using Haskell:

f x y = if unlikelyCondition
        then mod y 2
        else x + 2

(where unlikelyCondition stands for some rarely-true test)

As we know, under the hood, this Haskell function is really:

f = \x -> \y -> if unlikelyCondition
                then mod y 2
                else x + 2

Now, what if we called:

f 5 (2^935792) ? Holy moly, that exponent is a lot of work.

If Haskell followed applicative order evaluation, we would always have to evaluate the argument y, even though y is rarely used. But because Haskell is lazy, 2^935792 is passed unevaluated and rarely ends up getting computed at all.
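A runnable sketch of this point. The condition x == 0 is a stand-in I chose for the slide's "unlikely condition", and I read the slide's garbled (2935792) as 2 ^ 935792:

```haskell
f :: Integer -> Integer -> Integer
f x y = if x == 0            -- stand-in for the slide's unlikely condition
        then y `mod` 2
        else x + 2

main :: IO ()
main = print (f 5 (2 ^ 935792))
-- the huge power is passed as an unevaluated thunk; since x /= 0,
-- it is never forced, and the answer appears instantly
```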

Why be lazy? Part Two

Laziness in Haskell allows for certain kinds of computation on infinite lists! Since laziness only evaluates as much as is actually needed, infinite data is only generated as it is consumed, and not – impossibly – all at once.

Here’s how you write infinite lists:

[1..]

[1,3..]

No upper bound.


> ones = 1 : ones

> ones

[1,1,1,1,1,1,… (prints an unending stream of 1s; interrupt with Ctrl-C)
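A minimal sketch of consuming these infinite definitions safely with take (introduced just below); only the demanded prefix is ever built:

```haskell
ones :: [Int]
ones = 1 : ones              -- self-referential, infinite list of 1s

main :: IO ()
main = do
  print (take 5 [1 ..])      -- [1,2,3,4,5]
  print (take 5 [1, 3 ..])   -- [1,3,5,7,9]
  print (take 5 ones)        -- [1,1,1,1,1]
```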

sum [1..]

Just doesn’t work: sum needs the entire list before it can return, so this loops forever.


Infinite lists

Really, infinite lists are most practical when you separate control from data, where your data is infinite and it’s the control that limits it:

take 3 [1..]

[1,2,3]

Why couldn’t you do this with an eager language?

Because [1..] would have to be fully evaluated before being passed into take, which is impossible.

Let’s see why this works in Haskell.

Why does take 3 [1..] work, lazily?

First, let’s implement take:

Participation Quiz


myTake :: Int -> [a] -> [a]
myTake 0 _      = []
myTake _ []     = []
myTake n (x:xs) = x : myTake (n - 1) xs

Next, let’s see how take 3 [1..] traces (on board).
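Here is one way the trace can be sketched (the in-class board version may differ). Each recursive step demands only one more cons cell of [1..], so the infinite tail is never built:

```haskell
myTake :: Int -> [a] -> [a]
myTake 0 _      = []
myTake _ []     = []
myTake n (x:xs) = x : myTake (n - 1) xs

-- myTake 3 [1..]
--   = 1 : myTake 2 [2..]          -- [1..] forced one cell: 1 : [2..]
--   = 1 : 2 : myTake 1 [3..]
--   = 1 : 2 : 3 : myTake 0 [4..]
--   = 1 : 2 : 3 : []              -- first equation matches; [4..] never forced
--   = [1,2,3]

main :: IO ()
main = print (myTake 3 [1 ..])  -- [1,2,3]
```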

If you don’t have (good) notes on this, here is a great online resource:

Why be lazy? Part Three

Because it enables modularity! Modularity is the ability to break a program into parts that can be put back together. Modularity makes code reusable, which is always a good thing.

Example:

Newton-Raphson algorithm for approximating the square root of a number

Lazy functional languages let you separate the looping condition from the body of the loop!

Not modular, applicative and imperative code for finding square root

squareRt(c, tol):
    t = c
    while abs(t - c/t) > tol*t:
        t = (c/t + t)/2
    return t

Not modular, because we can’t separate the stopping condition from the loop body.

Modular code for finding square root: lazy and functional
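The slide's own code isn't reproduced here; the following is a sketch in the style of Hughes's "Why Functional Programming Matters", with names of my own choosing. The infinite list of approximations (data) is generated separately from the stopping rule (control), and laziness lets the two be composed:

```haskell
-- One Newton-Raphson step for the square root of c.
next :: Double -> Double -> Double
next c t = (c / t + t) / 2

-- Data: an infinite list of ever-better approximations, starting from c.
approximations :: Double -> [Double]
approximations c = iterate (next c) c

-- Control: stop when two successive guesses agree within eps.
-- (Safe on the infinite list above, since convergence guarantees a match.)
within :: Double -> [Double] -> Double
within eps (a : b : rest)
  | abs (a - b) <= eps = b
  | otherwise          = within eps (b : rest)

squareRt :: Double -> Double -> Double
squareRt c eps = within eps (approximations c)
```

The same within function can now be reused with any other approximation stream, which is exactly the modularity the imperative loop could not offer.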
