Further to my previous post on Lambda Calculus, here I’ll share an implementation of beta reduction.
Formally, beta reduction (also written β-reduction) is the replacement of a bound variable in a function body with a function argument. The purpose of β-reduction is to compute the result of applying a function to an argument, following specific rules.
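Everything below works on the Term type from that previous post. As a reminder, here is a minimal sketch of what I assume it looks like (the exact definitions, including capture-avoiding substitution, live in that post):

type Var = String

data Term = Variable Var
          | Lambda Var Term
          | Apply Term Term
          deriving (Show, Eq)

-- Also assumed from the previous post: capture-avoiding substitution, where
-- substitute x n m replaces the free occurrences of x in m with n.
-- substitute :: Var -> Term -> Term -> Term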
Some helper functions:
applyL :: Term -> Term -> Term
applyL a b = Apply a b   -- applyL m n' builds Apply m n' (reduced argument on the right)
applyR :: Term -> Term -> Term
applyR a b = Apply b a   -- applyR n m' builds Apply m' n (reduced function part on the left)
lambda :: Var -> Term -> Term
lambda x m = Lambda x m  -- rewrap a reduced body under its binder
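These helpers are just curried wrappers around the constructors, so they partially apply nicely under map. A quick GHCi sketch (the printed form assumes the derived Show instance from the Term sketch above):

ghci> map (applyL (Variable "f")) [Variable "a", Variable "b"]
[Apply (Variable "f") (Variable "a"),Apply (Variable "f") (Variable "b")]
ghci> map (lambda "x") [Variable "y", Variable "x"]
[Lambda "x" (Variable "y"),Lambda "x" (Variable "x")]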
In applying beta-reduction we first need to choose a reduction strategy.
Normal-order reduction, also known as leftmost outermost reduction or standard reduction, is a valuable strategy because it always produces an answer if one exists.
Outermost reduction strategies always reduce an outermost redex, i.e., a redex which is not contained inside any other redex. For instance, in (λx.x)((λy.y) z) both the whole term and (λy.y) z are redexes, but only the whole term is outermost.
Putting this reduction strategy into algorithmic terms: if the term is an application whose left-hand side is an abstraction λx.M′, then the whole term is a redex (λx.M′)N. Reduce this redex. Otherwise, look for redexes further inside the term, working from left to right, and reduce the leftmost one found.
beta :: Term -> [Term]
-- The term itself is a redex: its contraction comes first, followed by the
-- one-step reductions found inside the body m and inside the argument n.
beta (Apply (Lambda x m) n) = substitute x n m : concat [betaM, betaN]
  where
    betaM = map (applyR n) (map (lambda x) (beta m))
    betaN = map (applyL (lambda x m)) (beta n)
-- An application that is not itself a redex: reduce inside the function
-- part, then inside the argument.
beta (Apply m n) = concat [map (applyR n) (beta m), map (applyL m) (beta n)]
-- Reduce inside the body of an abstraction.
beta (Lambda x m) = map (lambda x) (beta m)
-- A variable contains no redexes.
beta (Variable v) = []
We need four pattern-matching cases: one to check whether the term itself is a redex, and, if it is not, the three usual cases for Term that look further down into the term (as mentioned previously). In the first case we should keep looking for further redexes as well, inside both the abstraction body and the argument. Since beta returns a list of every possible single-step reduct, the recursive calls have to be mapped back into the enclosing term, which is exactly what the helper functions above are for.
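For instance, on a term with two redexes beta lists all the single-step reducts, leftmost first. A GHCi sketch under the same assumptions as above; idx is just my shorthand for the identity term, and the output is wrapped here for readability:

ghci> let idx = Lambda "x" (Variable "x")
ghci> beta (Apply (Apply idx (Variable "y")) (Apply idx (Variable "z")))
[Apply (Variable "y") (Apply (Lambda "x" (Variable "x")) (Variable "z")),
 Apply (Apply (Lambda "x" (Variable "x")) (Variable "y")) (Variable "z")]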
-- normalize records every intermediate term along the way; normal returns only the result.
normalize :: Term -> [Term]
normalize t | null (beta t) = [t]
            | otherwise     = t : normalize (head (beta t))

normal :: Term -> Term
normal t | null (beta t) = t
         | otherwise     = normal (head (beta t))
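So normalize traces the whole normal-order reduction sequence, while normal just gives the final term. A small sketch, assuming substitute behaves as described earlier:

ghci> normalize (Apply (Lambda "x" (Variable "x")) (Variable "y"))
[Apply (Lambda "x" (Variable "x")) (Variable "y"),Variable "y"]
ghci> normal (Apply (Lambda "x" (Variable "x")) (Variable "y"))
Variable "y"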
Applicative-order reduction is the leftmost innermost reduction strategy. That is to say, you look for all the innermost redexes, and then pick the leftmost one and reduce that.
Here is applicative-order reduction as an algorithm: because this is an innermost strategy, we only reduce a redex once we are sure there are no redexes inside it, so we look for redexes in the subterms first (left to right) and only contract the term itself when its subterms contain none.
a_beta :: Term -> [Term]
-- The term itself is a redex: the reductions found inside the body m and inside
-- the argument n come first; the contraction of the outer redex comes last.
a_beta (Apply (Lambda x m) n) = concat [a_betaM, a_betaN, [substitute x n m]]
  where
    a_betaM = map (applyR n) (map (lambda x) (a_beta m))
    a_betaN = map (applyL (lambda x m)) (a_beta n)
-- An application that is not itself a redex: reduce inside the function
-- part, then inside the argument.
a_beta (Apply m n) = concat [map (applyR n) (a_beta m), map (applyL m) (a_beta n)]
-- Reduce inside the body of an abstraction.
a_beta (Lambda x m) = map (lambda x) (a_beta m)
-- A variable contains no redexes.
a_beta (Variable v) = []
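Comparing a_beta with beta on a term containing a nested redex shows the difference in ordering: beta puts the outermost contraction first, while a_beta puts it last. Again a GHCi sketch under the same assumptions:

ghci> let t = Apply (Lambda "y" (Variable "w")) (Apply (Lambda "x" (Variable "x")) (Variable "z"))
ghci> beta t
[Variable "w",Apply (Lambda "y" (Variable "w")) (Variable "z")]
ghci> a_beta t
[Apply (Lambda "y" (Variable "w")) (Variable "z"),Variable "w"]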
a_normalize :: Term -> [Term]
a_normalize t | null (a_beta t) = [t]
              | otherwise       = t : a_normalize (head (a_beta t))

a_normal :: Term -> Term
a_normal t | null (a_beta t) = t
           | otherwise       = a_normal (head (a_beta t))
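Even on a tiny term where the argument is simply discarded, the two strategies already take different numbers of steps. A sketch, where discard is a name I'm introducing just for this example:

ghci> let idx = Lambda "x" (Variable "x")
ghci> let discard = Apply (Lambda "y" idx) (Apply idx idx)
ghci> length (normalize discard)    -- normal order throws the argument away immediately
2
ghci> length (a_normalize discard)  -- applicative order reduces the argument first
3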
To demonstrate our understanding, here are two examples:
example1 reduces to normal form in more steps with normal order than with applicative order reduction;
example2 reduces to normal form in fewer steps with normal order than with applicative order reduction.
identity :: Term
identity = Lambda "x" (Variable "x")

dup :: Term
dup = Lambda "x" (Apply (Variable "x") (Variable "x"))

example1 :: Term
example1 = Apply (numeral 3) (numeral 3)   -- numeral builds Church numerals; it is not defined in this post

example2 :: Term
example2 = Apply (Lambda "y" identity) example1
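To check those claims we can simply measure the lengths of the two traces. A small hypothetical helper (I'm not quoting the actual counts, since they depend on how numeral is defined):

-- Number of reduction steps under normal order and applicative order respectively.
steps :: Term -> (Int, Int)
steps t = (length (normalize t) - 1, length (a_normalize t) - 1)

-- Expected, per the claims above:
--   fst (steps example1) > snd (steps example1)
--   fst (steps example2) < snd (steps example2)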
We can compare this to Haskell itself, which uses lazy evaluation (call by need): like normal order, an argument is only evaluated if it is actually needed, and call by need additionally shares the result so it is evaluated at most once.
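A tiny illustration of that behaviour in Haskell itself (ordinary Haskell, nothing to do with our Term interpreter):

-- fst never looks at the second component, so the diverging expression inside
-- the pair is never evaluated; an eager, applicative-order language would loop here.
lazyDemo :: Int
lazyDemo = fst (1, length (repeat ()))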
Thanks for reading!