
Bases de l’imagerie

Parcours IMA

BIMA – Exam 2

January 12, 2021 – 2pm

The grading of each exercise may be modified. The total is 20 points.

No document allowed. Your mobile phone should be off and put away. Calculator allowed. Please respect sanitary rules.

Duration: 1 hour

Exercise 1 Questions on the lectures (5 points)

1. Explain why the method based on zero-crossing of the Laplacian of an image leads to fine edges, in contrast to thresholding applied to the norm of the gradient of the image.

2. Explain two methods based on first order edge detection which allow computing a binary edge map with one pixel width edges.

3. What is the criterion used to determine the threshold value in Otsu’s method?

4. Is SIFT descriptor invariant under rotation? Justify.

5. Let us consider a face recognition problem. Is it a good idea to use descriptors that are invariant under rotation? Justify.

6. Is Principal Component Analysis (PCA) a relevant image descriptor? If so, under which condition?

7. Explain the principle of the Moravec detector. What is the main difference with the Harris detector?

8. Define the normalized derivative. What is it useful for?

Answer of exercise 1

1. A pixel is an edge point if there is a zero-crossing of the Laplacian in its 3×3 neighborhood. It is a local operation, taking into account the neighborhood. It is unlikely that several zero-crossings appear in the neighborhood, and if so, they will not correspond to a limit between homogeneous regions. In practice, the contours are 1-pixel thick. Thresholding the norm of the gradient does not provide such a guarantee (the neighborhood is not taken into account, and several neighborhood points can have a gradient norm exceeding the threshold).
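For illustration, the zero-crossing test can be sketched in a few lines of numpy. This is a minimal version, not part of the exam: it uses the 4-neighbour discrete Laplacian, and the sign-change test with the right and bottom neighbours (rather than the full 3×3 neighbourhood) is an illustrative simplification.

```python
import numpy as np

def zero_crossing_edges(I):
    # 4-neighbour discrete Laplacian of the image (interior pixels only)
    L = np.zeros(I.shape, dtype=float)
    L[1:-1, 1:-1] = (I[:-2, 1:-1] + I[2:, 1:-1]
                     + I[1:-1, :-2] + I[1:-1, 2:]
                     - 4.0 * I[1:-1, 1:-1])
    # mark a pixel when the Laplacian changes sign with its right or bottom neighbour
    E = np.zeros(I.shape, dtype=bool)
    E[:, :-1] |= (L[:, :-1] * L[:, 1:]) < 0
    E[:-1, :] |= (L[:-1, :] * L[1:, :]) < 0
    return E
```

On a vertical step image, the resulting edge map is indeed one pixel wide.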

Sorbonne Université, Master 1 Informatique


2. First-order detectors rely on the computation of the norm and direction of the gradient. A pixel is labeled as an edge if the norm of the gradient is maximal in the direction of the gradient. The gradient orientation can be discretized using four directions, the neighbors being taken in the discrete direction that best approximates the actual gradient direction. A second method interpolates the gradient norm along the gradient direction using the values of the 8 neighbors.
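The first method (discretized gradient direction followed by non-maximum suppression) can be sketched as follows. The forward-difference gradient and the function name `thin_edges` are illustrative choices, not imposed by the exam.

```python
import numpy as np

def thin_edges(I, thr=0.5):
    # forward-difference gradient (a simple stand-in for a real derivative filter)
    gx = np.zeros(I.shape); gy = np.zeros(I.shape)
    gx[:, :-1] = I[:, 1:] - I[:, :-1]
    gy[:-1, :] = I[1:, :] - I[:-1, :]
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # orientation folded to [0, 180)
    E = np.zeros(I.shape, dtype=bool)
    for y in range(1, I.shape[0] - 1):
        for x in range(1, I.shape[1] - 1):
            a, m = ang[y, x], mag[y, x]
            if m < thr:
                continue
            # discretize the gradient direction into one of 4 neighbour pairs
            if a < 22.5 or a >= 157.5:
                dy, dx = 0, 1
            elif a < 67.5:
                dy, dx = 1, 1
            elif a < 112.5:
                dy, dx = 1, 0
            else:
                dy, dx = 1, -1
            # keep the pixel only if its magnitude is maximal along the gradient direction
            if m >= mag[y + dy, x + dx] and m >= mag[y - dy, x - dx]:
                E[y, x] = True
    return E
```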

3. Otsu’s threshold is the value that minimizes the intra-class variance, the two classes being defined by the pixels having an intensity value below (respectively above) this threshold value. Equivalently, the inter-class variance can be maximized.
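The equivalent formulation (maximizing the inter-class variance) leads to a direct implementation. This is a minimal sketch assuming 8-bit images; the function name is our own.

```python
import numpy as np

def otsu_threshold(I):
    # histogram of grey levels, normalized to probabilities
    hist = np.bincount(I.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    levels = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()   # weights of the two classes
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (levels[:t] * p[:t]).sum() / w0
        mu1 = (levels[t:] * p[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2    # inter-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t
```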

4. Yes, if normalization is applied (i.e. rotation according to the dominant orientation).

5. Usually it is assumed that the acquisition is controlled and that faces are aligned vertically. In that case it is not useful to impose invariance, which could lead to a loss of information.

6. PCA is relevant if the data can be described on a small number of directions (principal components), while representing most of the variance of the data.

7. The Moravec detector is based on the computation of the auto-correlation values in 8 directions, taking the minimum value. A pixel is a corner if this value is a local maximum. The Harris detector approximates this auto-correlation criterion. It has two main advantages: it is invariant under rotation, and it is more robust to noise since a smoothing is applied when computing the image derivatives.
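The Harris criterion det(M) − k·trace(M)² can be sketched as follows. This is an illustrative version: a plain 3×3 box window replaces the Gaussian weighting usually applied to the structure tensor, and k = 0.04 is the usual empirical constant.

```python
import numpy as np

def harris_response(I, k=0.04):
    # central-difference image derivatives
    Ix = np.zeros(I.shape); Iy = np.zeros(I.shape)
    Ix[:, 1:-1] = (I[:, 2:] - I[:, :-2]) / 2.0
    Iy[1:-1, :] = (I[2:, :] - I[:-2, :]) / 2.0

    def box3(A):
        # sum of A over the 3x3 neighbourhood (interior pixels only)
        B = np.zeros(A.shape)
        h, w = A.shape
        B[1:-1, 1:-1] = sum(A[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
        return B

    # structure tensor summed over the window, then the Harris criterion
    Sxx, Syy, Sxy = box3(Ix * Ix), box3(Iy * Iy), box3(Ix * Iy)
    det = Sxx * Syy - Sxy ** 2
    tr = Sxx + Syy
    return det - k * tr ** 2
```

The response is positive at corners, negative along edges, and near zero in flat regions.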

8. The normalized derivative is obtained by multiplying by a factor s the derivative of an image filtered by a Gaussian filter of scale s. This compensates for the attenuation of the derivative as s increases, so that responses can be compared across scales (which is useful for automatic scale selection).

Exercise 2 Integral images (5 points)

The integral image Iint, associated with an image I, is illustrated in Figure 1 and defined as:

Iint(x, y) = Σ_{x′ ≤ x, y′ ≤ y} I(x′, y′)    (1)

1. Give the recurrence formula that allows computing Iint with only one scan of the image I (recall that the computation is based on additions and subtractions of integral images).

2. Write a Python code that takes an image I as input and that returns the integral image Iint.

In the following questions, we consider image I whose integral image is as follows:


Iint =
1   2   3   4   5
2   4   7   9   11
3   7   10  14  17
4   9   14  19  23
5   11  17  23  28


3. The image I is decomposed into four blocks:

A B
C D

where A has 2 rows and 3 columns. Compute the integral images of A, B, C, and D from Iint. For regions B, C and D, use subtraction of integral images and detail the computation.

4. Give the formula that allows recovering I from Iint.

5. Derive the values of I.

Figure 1: Integral image: Iint(x,y) is the sum of the values of I in the dashed top left corner.

Iint(x, −1) = 0 for x ≥ −1,  Iint(−1, y) = 0 for y ≥ 0

Answer of exercise 2

1. For x ≥ 0 and y ≥ 0:

Iint(x, y) = Iint(x, y−1) + Iint(x−1, y) − Iint(x−1, y−1) + I(x, y)

2.

```python
import numpy as np

def Iint(I):
    n, m = I.shape
    I_int = np.zeros((n, m), dtype=int)
    # First row
    I_int[0, 0] = I[0, 0]
    for x in range(1, m):
        I_int[0, x] = I_int[0, x - 1] + I[0, x]
    for y in range(1, n):
        # First column
        I_int[y, 0] = I_int[y - 1, 0] + I[y, 0]
        # Other pixels: recurrence formula of question 1
        for x in range(1, m):
            I_int[y, x] = I_int[y - 1, x] + I_int[y, x - 1] - I_int[y - 1, x - 1] + I[y, x]
    return I_int
```


3. A = Iint(2, 1) = 7. B = Iint(4, 1) − Iint(2, 1) = 11 − 7 = 4. C = Iint(2, 4) − Iint(2, 1) = 17 − 7 = 10. D = 28 − B − C − A = 28 − 4 − 10 − 7 = 7.
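These subtractions can be checked numerically; the helper `block_sum`, which implements the general 4-value formula, is our own illustration.

```python
import numpy as np

# the 5x5 image of the exercise and its integral image
I = np.array([[1, 1, 1, 1, 1],
              [1, 1, 2, 1, 1],
              [1, 2, 0, 2, 1],
              [1, 1, 2, 1, 1],
              [1, 1, 1, 1, 1]])
Iint = I.cumsum(axis=0).cumsum(axis=1)

def block_sum(Iint, y0, x0, y1, x1):
    # sum of I over rows y0..y1 and columns x0..x1, from at most 4 values of Iint
    s = Iint[y1, x1]
    if y0 > 0:
        s -= Iint[y0 - 1, x1]
    if x0 > 0:
        s -= Iint[y1, x0 - 1]
    if y0 > 0 and x0 > 0:
        s += Iint[y0 - 1, x0 - 1]
    return s

A = block_sum(Iint, 0, 0, 1, 2)   # rows 0-1, columns 0-2
B = block_sum(Iint, 0, 3, 1, 4)
C = block_sum(Iint, 2, 0, 4, 2)
D = block_sum(Iint, 2, 3, 4, 4)
```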

4. I(x, y) = Iint(x, y) − Iint(x, y−1) − Iint(x−1, y) + Iint(x−1, y−1)

5. I =
1 1 1 1 1
1 1 2 1 1
1 2 0 2 1
1 1 2 1 1
1 1 1 1 1
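The formula of question 4 can be applied to the whole array at once: padding Iint with a row and a column of zeros (the convention Iint = 0 outside the image) turns it into a double difference.

```python
import numpy as np

# the integral image given in the exercise
Iint = np.array([[1, 2, 3, 4, 5],
                 [2, 4, 7, 9, 11],
                 [3, 7, 10, 14, 17],
                 [4, 9, 14, 19, 23],
                 [5, 11, 17, 23, 28]])

# I(x, y) = Iint(x, y) - Iint(x, y-1) - Iint(x-1, y) + Iint(x-1, y-1),
# with Iint = 0 outside the image: pad with a zero row/column, then the
# double difference applies the formula to every pixel at once
P = np.pad(Iint, ((1, 0), (1, 0)))
I = np.diff(np.diff(P, axis=0), axis=1)
```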

Exercise 3 Frequential filtering and edge detection (5 points)

The aim of this exercise is to define an edge detector based on filtering in the frequential domain.

1. Let Pa be the transfer function of the frequential filter. What should be its nature (low-pass, high-pass, ...)? Justify.

2. Propose an ideal filter, and give the values of Pa.

3. How to choose the cut-off frequency of the filter? Justify.

4. Write a Python code that takes an image as input and that returns the image filtered by Pa in the frequential domain.

5. Let us take as filter Pb = 1 − Pa. What is the nature of this filter? If the image is filtered by Pb, what will be the effect of this operation and what will be observed in the filtered image?

6. Let Ia and Ib be the images filtered by Pa and Pb, respectively (from the same image I). Prove that I = Ia + Ib

7. The horizontal Prewitt filter writes:

P =
1 0 −1
1 0 −1
1 0 −1

For an image I, write I ⋆ P(x, y) as a function of I(x ± 1, y ± 1). The spatial coordinates defining x and y are those of Figure 1.

8. Prove that I ⋆ P can be expressed as a linear combination of a shift of the image by one column to the right or to the left, and one row up or down. Boundary conditions will be ignored.

Answer of exercise 3

1. Edges are characterized by abrupt changes of image intensity in some direction. Therefore Pa should be a high-pass filter in that direction.



2. Examples could be:

Pa(f, g) = 1 if max(|f|, |g|) ≥ d, 0 otherwise

or

Pa(f, g) = 1 if f² + g² ≥ d², 0 otherwise,

where d is the cut-off frequency.

3. The cut-off frequency should be high enough, since edges are small-scale structures. The faster the transition, the higher the cut-off frequency should be.

4.

```python
import numpy as np
from numpy.fft import fft2, ifft2, fftshift

def DectCont(I):
    n, m = I.shape
    r = max(n, m) - 40                 # size of the blocked low-frequency square
    Fa = np.ones((n, m))               # pass the high frequencies...
    Fa[(n - r) // 2:(n - r) // 2 + r,
       (m - r) // 2:(m - r) // 2 + r] = 0  # ...and block the low ones (assumes n close to m)
    Fa = fftshift(Fa)                  # match the frequency layout of fft2's output
    return np.real(ifft2(fft2(I) * Fa))
```

5. Pb is a low-pass filter. An image filtered by Pb will be smoothed.
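The complementarity of Pa and Pb can be checked numerically. The random image and the 0/1 mask below are illustrative; any transfer function Pa works, since Pa + Pb = 1 by construction.

```python
import numpy as np
from numpy.fft import fft2, ifft2

rng = np.random.default_rng(0)
I = rng.random((32, 32))

# an arbitrary 0/1 transfer function Pa and its complement Pb = 1 - Pa
Pa = np.zeros((32, 32))
Pa[8:24, 8:24] = 1.0
Pb = 1.0 - Pa

# filter in the frequency domain; the two filtered images sum back to I
Ia = np.real(ifft2(fft2(I) * Pa))
Ib = np.real(ifft2(fft2(I) * Pb))
```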

6. We have:

Ia + Ib = TF⁻¹(TF(I) × Pa) + TF⁻¹(TF(I) × Pb)
        = TF⁻¹(TF(I) × Pa + TF(I) × Pb)
        = TF⁻¹(TF(I) × (Pa + Pb))
        = TF⁻¹(TF(I))
        = I

since Pa + Pb = 1 and the inverse Fourier transform is linear.

7. I ⋆ P(x, y) = I(x+1, y−1) − I(x−1, y−1) + I(x+1, y) − I(x−1, y) + I(x+1, y+1) − I(x−1, y+1)

8. Note that J(x, y) = I(x + 1, y) corresponds to a shift of image I by one column to the left: pixel (x, y) in J has the value of pixel (x + 1, y) in I. Similarly, I(x − 1, y) corresponds to a shift by one column to the right, I(x, y + 1) to a shift by one row up, and I(x, y − 1) to a shift by one row down. Let us denote by Sl, Sr, Su and Sd the corresponding operators. Then we have: I ⋆ P = Sd(J) + J + Su(J), with J = Sl(I) − Sr(I).

Exercise 4 Pattern recognition (5 points)

Let E be a set of N individuals. An individual n is described by a vector Xn ∈ Rd, and is assigned to a class described by a scalar value Yn ∈ {1, · · · , K}.

The objective of this exercise is to determine the class y of an individual x ∈ Rd which does not belong to the set E. Methods based on PCA and LDA will have to be described.


1. What is the criterion optimized in the PCA method? How is it computed? What does it mean in terms of data representation?

2. Design a recognition method based on PCA. Write an algorithm to perform the learning step, and then an algorithm to perform the recognition step. Is the learning supervised?

3. What is the criterion optimized in the Linear Discriminant Analysis (LDA) method? How is it computed? What does it mean in terms of data representation?

4. Design a recognition method based on LDA. Write an algorithm to perform the learning step, and then an algorithm to perform the recognition step. Is the learning supervised?

Answer of exercise 4

1. PCA aims at representing the data on an orthonormal basis, composed of a reduced number of vectors, such that the projected variance is maximal. If v is such a vector, the criterion is to maximize vᵀΣv, where Σ is the covariance matrix of the centered data. Solving this problem, under the constraint that v is a unit vector, amounts to computing the eigenvectors and eigenvalues of Σ, and choosing the eigenvectors corresponding to the highest eigenvalues. The data projected in the new basis are decorrelated across these directions.

2. Learning:

(a) Compute the covariance matrix of the data in E: Σ = (1/(N−1)) XXᵀ, with X = (X1 − g, X2 − g, · · · , XN − g) and g = (1/N) Σn Xn (centered data).

(b) Compute M < N eigenvectors (principal directions) Vi , i = 1 · · · M of Σ, ordered by decreasing eigenvalues (λ1 ≥ λ2 ≥ · · · ). M is a hyper-parameter of the method.
(c) Projection of the data on the M << d directions: π(Xn) = (XnT V1, · · · , XnT VM ). Each individual n is described by M scalar values.
This learning phase is not supervised, since it does not use the prior knowledge on the classes Yn of the individuals.
Recognition:
(a) Project the new individual x on the M principal directions: π(x) = (xT V1, · · · , xT VM ).
(b) Find the projected individual π(Xi) which is the closest to π(x) according to the Euclidean distance: i = argmin_n ∥π(Xn) − π(x)∥².
(c) Assign the new individual x to class Yi.
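The two phases above can be sketched in numpy. This is a minimal illustration; the function names `pca_fit` and `pca_classify` are our own, and no attempt is made at efficiency.

```python
import numpy as np

def pca_fit(X, M):
    # learning: mean, covariance, and the M principal directions of the data
    g = X.mean(axis=0)
    Sigma = np.cov(X - g, rowvar=False)
    vals, vecs = np.linalg.eigh(Sigma)        # eigenvalues in ascending order
    V = vecs[:, np.argsort(vals)[::-1][:M]]   # keep the M largest
    return g, V

def pca_classify(x, X, Y, g, V):
    # recognition: project, then 1-nearest neighbour in the reduced space
    P = (X - g) @ V                           # projected training data
    p = (x - g) @ V                           # projected new individual
    i = np.argmin(((P - p) ** 2).sum(axis=1))
    return Y[i]
```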
3. LDA consists in finding an orthonormal basis such that the data projected on the vectors of the basis have a minimal intra-class variance and a maximal inter-class variance. This is expressed as minimizing (vᵀΣv)/(vᵀBv) and is solved by computing the eigenvectors of Σ⁻¹B, where Σ is defined as above and B is the inter-class variance matrix. The projected data form clusters where data belonging to the same class are close to each other, while the clusters are best separated.
4. Learning:
(a) Compute Σ as for PCA, and B = (1/N) Σ_{k=1}^{K} Nk (gk − g)(gk − g)ᵀ, with gk = (1/Nk) Σ_{n | Yn = k} Xn and Nk = Σ_{n | Yn = k} 1.
(b) Compute the eigenvectors (Vn) of Σ−1B ordered by decreasing eigenvalues.
Here the learning is supervised since it exploits the prior knowledge on the classes of the individuals.
Recognition:
(a) Project x on the new basis.
(b) Assign x to the closest class k, i.e. the one minimizing ∥π(x) − π(gk)∥.
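As for PCA, these steps can be sketched in numpy. The function names and the projected-centroid classifier are illustrative choices; the sketch assumes Σ is invertible.

```python
import numpy as np

def lda_fit(X, Y, M=1):
    # learning: total covariance, inter-class matrix B, eigenvectors of inv(Sigma) @ B
    N = len(X)
    g = X.mean(axis=0)
    Sigma = np.cov(X - g, rowvar=False)
    classes = np.unique(Y)
    B = np.zeros_like(Sigma)
    for k in classes:
        gk = X[Y == k].mean(axis=0)
        B += (Y == k).sum() / N * np.outer(gk - g, gk - g)
    vals, vecs = np.linalg.eig(np.linalg.inv(Sigma) @ B)
    V = vecs[:, np.argsort(vals.real)[::-1][:M]].real   # M most discriminant directions
    # projected class centroids, used at recognition time
    centroids = {int(k): (X[Y == k].mean(axis=0) - g) @ V for k in classes}
    return g, V, centroids

def lda_classify(x, g, V, centroids):
    # recognition: project x, assign it to the class with the closest projected centroid
    p = (x - g) @ V
    return min(centroids, key=lambda k: np.linalg.norm(p - centroids[k]))
```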