COMPILER OPTIMISATION I
• We will consider a set of optimisations which a typical optimising compiler might perform.
• We will illustrate many transformations at the source level.
• It is important to remember that the compiler usually parses code into an internal Intermediate Representation (IR) first and makes transformations at the IR level.
These are (largely) optimisations which you would expect a compiler to do, and should very rarely be hand-coded.
• IR level optimisations
• Basic optimisations
• Redundancy elimination
• Loop optimisations
• Procedure call optimisations
Begin by looking at some simple optimisations:
• Constant folding
• Algebraic simplifications
• Copy propagation
• Constant propagation
• Fairly complex transformations can be achieved by combining simple ones.
• Requires multiple optimisation passes
Constant folding
• Where possible, replace variables with constants at compile time.
• Reduces memory references (value stored in instruction, rather than in data).
• Expressions that only involve constants can be evaluated at compile time.
• Take care with expressions which might result in errors (overflow, divide by zero).
• For floating point, the compiler must ensure that compile-time evaluation gives the same results as the executed code would.
integer, parameter :: n=100

do i=1,n
  do j=1,n/2
    ….

→

do i=1,100
  do j=1,50
    ….
Algebraic simplification
• Compiler can recognise and simplify algebraic expressions.
• Very simple examples:
  i + 0     →  i
  i + (-j)  →  i - j
  i**2      →  i * i
• Can also make use of associativity and commutativity:
  (i-j) + (i-j) + (i-j)  →  3*i - 3*j
• Many more restrictions apply to floating point than to integers
• Floating point operations are not associative. Suppose MF is the largest representable floating point value:
  1.0 + (MF - MF) = 1.0
  (1.0 + MF) - MF = 0.0
Copy propagation
• Given an assignment (x=y), we can replace later uses of x with y, provided no changes to either x or y have occurred in the meantime.
• Can reduce memory references, or number of registers required
c = x + 3
d = x + y

→

c = y + 3
d = y + y
Constant propagation
• If a variable x is assigned a constant value, then subsequent references to x can be replaced by the constant, provided no changes to x occur in the meantime.
• Reduces number of registers required
• Aids later transformations, such as constant folding.
c = x + 3
d = x + y

→

c = 4 + 3
d = 4 + y
The next set of transformations deal with removing redundant computations.
• Common subexpression elimination (CSE)
• Loop invariant code motion
• Algebraic simplification in redundancy elimination
• Dead code elimination
Common subexpression elimination
• If a part of an expression occurs twice, and the variables involved are not changed between the two evaluations, then the computation can be done once and the result stored in a temporary.
• May not always be beneficial
• increases number of registers required
a = b + (c - 2)
d = 4 * b
e = d + (c - 2)

→

t1 = c - 2
a = b + t1
d = 4 * b
e = d + t1
Loop invariant code motion
• Expressions which occur inside a loop, but give the same result on all loop iterations, can be computed once, outside the loop.
• Can be applied to more than one level of loop
do i = 1,n
  do j = 1,n
    a(j,i) = 2*b(i+1) + j - t1
  end do
end do

→

do i = 1,n
  t2 = 2*b(i+1)
  do j = 1,n
    a(j,i) = t2 + j - t1
  end do
end do
Algebraic simplification again
• Use of algebraic simplification can help in redundancy elimination
do i=1,n
  a = b + i
  c = a - i
  d = b + i
end do

→ (CSE: d = b + i is the same as a)

do i=1,n
  a = b + i
  c = a - i
  d = a
end do

→ (simplification: c = (b + i) - i = b)

do i=1,n
  a = b + i
  c = b
  d = a
end do
Dead code elimination
• Removes code whose results are never used.
• May be used several times in the optimisation process, as other optimisations may create dead code.
• Something to beware of in small benchmarks!
d = a+b; printf("%d\n", d);
Simple loop optimisations
This set of optimisations applies to loops.
N.B. They need “well-structured” loops (e.g. a Fortran DO loop); to apply them to C or Java, the compiler must identify a class of suitable for loops.
• Strength reduction
• Induction variable removal
• Bounds checking elimination
• An induction variable is a variable in a loop whose value on successive iterations forms an arithmetic progression.
a(j) = b(i)
• Replaces explicit computation based on the loop iterator with increments.
• Reduces the number of operations required at the expense of more registers.