# ARMA Models: Properties of MA(q)


7.5 Properties of MA(q)

We will look at some properties of the MA(q) process

Xt = μ + Wt + θ1Wt−1 + θ2Wt−2 + ... + θqWt−q.

A finite MA(q) is always stationary. We will derive its

Expectation
Autocovariance

This is easier for MA(q) than for AR(p). Recall that we transformed the AR(1) into an MA(∞) to derive its properties.


Backshift of MA(q) Model

The backshift operator representation of an MA(q) model is given by:

Xt = Wt + θ1Wt−1 + θ2Wt−2 + ··· + θqWt−q = θ(B)Wt

The so-called moving average operator is then:

θ(B) = 1 + θ1B + θ2B² + ··· + θqB^q

The characteristic polynomial for MA(q) is

θ(z) = 1 + θ1z + θ2z² + ··· + θqz^q, with z ∈ ℂ.

Note that the signs in the characteristic polynomial are different from AR(p).


Expectation

E[Xt] = E[μ + Wt + θ1Wt−1 + θ2Wt−2 + ... + θqWt−q]
      = μ + E[Wt] + θ1E[Wt−1] + θ2E[Wt−2] + ... + θqE[Wt−q]
      = μ,

because E[Wt] = 0.

Variance

Let us define θ0 = 1.

Var(Xt) = Var(μ + Wt + θ1Wt−1 + ... + θqWt−q)
        = Var(Wt) + Var(θ1Wt−1) + ... + Var(θqWt−q)
        = Var(Wt) + θ1²Var(Wt−1) + ... + θq²Var(Wt−q)
        = σW²(θ0² + θ1² + ... + θq²) = σW² Σ_{j=0}^{q} θj²

Note: In general, Var(A + B) = Var(A) + Var(B) + 2Cov(A, B). Because Wt and Ws are uncorrelated for t ≠ s,

Var(Wt + Ws) = Var(Wt) + Var(Ws).
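As a quick numerical sanity check (a sketch in Python with numpy; the coefficients θ1 = 0.6, θ2 = −0.3 and σW = 1.5 are illustrative, not from the slides), we can simulate a long MA(2) path and compare the sample variance with σW² Σ_{j=0}^{q} θj²:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.array([1.0, 0.6, -0.3])   # theta_0 = 1, plus illustrative theta_1, theta_2
sigma_w = 1.5
n = 200_000

w = rng.normal(0.0, sigma_w, n + len(theta) - 1)
# X_t = theta_0*W_t + theta_1*W_{t-1} + theta_2*W_{t-2} via a sliding dot product
x = np.convolve(w, theta, mode="valid")

theory = sigma_w**2 * np.sum(theta**2)   # sigma_W^2 * (1 + 0.36 + 0.09) = 3.2625
print(theory)                            # 3.2625
print(x.var())                           # sample variance, close to 3.2625
```

The sample variance agrees with the formula to within simulation noise.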


Autocovariance

Let |h| ≤ q.

γX(h) = Cov( μ + Σ_{j=0}^{q} θj Wt−j , μ + Σ_{k=0}^{q} θk Wt+|h|−k )
      = Σ_{j=0}^{q} Σ_{k=0}^{q} θj θk Cov(Wt−j, Wt+|h|−k)
      = Σ_{j=0}^{q−|h|} θj θj+|h| Cov(Wt−j, Wt−j)
      = σW² Σ_{j=0}^{q−|h|} θj θj+|h|.

(The only nonzero covariances are those with t − j = t + |h| − k, i.e. k = j + |h|, which requires j ≤ q − |h|.)

If |h| > q, then γX(h) = 0.


Autocorrelation

If |h| ≤ q,

ρX(h) = γX(h)/Var(Xt) = γX(h)/γX(0) = (σW² Σ_{j=0}^{q−|h|} θjθj+|h|) / (σW² Σ_{j=0}^{q} θj²) = Σ_{j=0}^{q−|h|} θjθj+|h| / Σ_{j=0}^{q} θj²

In general,

ρX(h) = Σ_{j=0}^{q−|h|} θjθj+|h| / Σ_{j=0}^{q} θj²   if |h| ≤ q
ρX(h) = 0                                            if |h| > q

Since the mean is constant and the autocovariance depends only on the lag h, this proves that MA(q) processes with q < ∞ are stationary.
Example: MA(2) autocovariance and autocorrelation

Consider the MA(2) model (taking σW² = 1)

Xt = Wt + Wt−1 + Wt−2

The autocovariance function is

γX(0) = 3,  γX(±1) = 2,  γX(±2) = 1,  γX(h) = 0 for |h| ≥ 3.

The autocorrelation function is

ρX(0) = 1,  ρX(±1) = 2/3,  ρX(±2) = 1/3,  ρX(h) = 0 for |h| ≥ 3.
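These values can be checked by simulation (a Python sketch with numpy; the sample autocovariances are only estimates of γX(h) = σW² Σ θjθj+|h|):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
w = rng.normal(size=n + 2)
x = w[2:] + w[1:-1] + w[:-2]     # X_t = W_t + W_{t-1} + W_{t-2}

def sample_gamma(x, h):
    """Sample autocovariance of x at lag h."""
    xc = x - x.mean()
    return float(np.mean(xc[h:] * xc[:len(xc) - h]))

for h in range(4):
    print(h, sample_gamma(x, h))   # close to gamma = 3, 2, 1, 0
```

With a long enough series the estimates land near the theoretical 3, 2, 1, 0.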
Example: MA(2) autocovariance

[Figure: a simulated MA(2) series Xt = Wt + Wt−1 + Wt−2 (t = 0, ..., 500), with its estimated autocovariance and estimated autocorrelation plotted against lag h, 0 ≤ h ≤ 25.]

Note that the sample autocovariances are close to the true values.
Identifiability and invertibility of MA model
It looks as though we could identify the coefficients of an MA(q) from the autocorrelation function. However, this is not true.
It is common to impose an invertibility condition to ensure the model is identifiable from the autocorrelation function.
Identifiability and invertibility are closely related.
Example: Identifiability of MA(1)
Consider the following MA(1) models (for θ1 ≠ 1):

A: Xt = Wt + θ1Wt−1
B: Xt = Wt + (1/θ1)Wt−1

Both models have exactly the same autocorrelation function:

ρX(0) = 1,  ρX(±1) = θ1/(1 + θ1²),  ρX(±h) = 0 for h = 2, 3, ...

Hence, they are not uniquely identifiable from the autocorrelation function.

In general, the autocorrelation function can be used to determine the order of the MA model, but not the coefficients.
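A tiny numerical illustration (Python sketch): the lag-1 autocorrelation θ1/(1 + θ1²) of an MA(1) is unchanged when θ1 is replaced by 1/θ1.

```python
def rho1(theta):
    # lag-1 autocorrelation of an MA(1): rho(1) = theta / (1 + theta^2)
    return theta / (1.0 + theta**2)

theta = 0.5
print(rho1(theta))        # 0.4
print(rho1(1 / theta))    # 0.4 -- identical, so theta_1 cannot be recovered from the ACF
```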
Let’s assume that |θ1| < 1 and hence |1/θ1| > 1. Invert the models by successive substitution:

A: Wt = Xt − θ1Xt−1 + θ1²Xt−2 − ...
B: Wt = Xt − (1/θ1)Xt−1 + (1/θ1²)Xt−2 − ...

Since |θ1| < 1, the coefficients for (A) converge to 0, whereas those for (B) go to ∞.

We say model (B) cannot be inverted in this case.

Note: The root of the characteristic polynomial 1 + θ1z is z = −1/θ1, which for model (A) lies outside the unit circle since |θ1| < 1; for model (B), the polynomial 1 + (1/θ1)z has root z = −θ1, inside the unit circle.
Invertibility of MA(q)
An MA(q) process is called invertible if there exist constants π0, π1, ... such that

Wt = Σ_{j=0}^{∞} πj Xt−j   for all t,   and   Σ_{j=0}^{∞} |πj| < ∞.

Effectively this means the MA(q) can be rewritten as an AR (possibly of infinite order) whose coefficients form an absolutely convergent sum.

Thus, in the previous example, model (A) is invertible since |θ1| < 1.
Invertibility of MA(q): Theorem

An MA(q) model is invertible if and only if the complex roots of its characteristic polynomial lie outside the unit circle, i.e.

θ(z) = 1 + θ1z + θ2z² + ··· + θqz^q = 0  ⟹  |z| > 1.

This theorem looks similar to the one we had for AR(p). However, this is not about stationarity, because an MA(q) is always stationary.

When we work with ARMA models, the MA(q) part will be invertible.

We use the concise notation

Xt = θ(B)Wt   and   Wt = (1/θ(B))Xt
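The root condition is easy to check numerically; a Python sketch with numpy (the helper name `is_invertible` is ours, not standard):

```python
import numpy as np

def is_invertible(theta):
    """Check whether MA(q) coefficients theta = [theta_1, ..., theta_q] give
    all roots of theta(z) = 1 + theta_1 z + ... + theta_q z^q outside the unit circle."""
    coeffs = [1.0] + list(theta)      # constant term first
    roots = np.roots(coeffs[::-1])    # np.roots expects the highest degree first
    return bool(np.all(np.abs(roots) > 1.0))

print(is_invertible([0.5]))   # True:  1 + 0.5z has root z = -2
print(is_invertible([2.0]))   # False: 1 + 2z   has root z = -0.5
```

This reproduces the MA(1) example: θ1 = 0.5 is invertible, its "mirror" θ1 = 2 is not.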


ARMA notation

A mean-zero ARMA(p, q) model (without intercept)

Xt − φ1Xt−1 − ··· − φpXt−p = Wt + θ1Wt−1 + ... + θqWt−q

with a stationary AR(p) part and an invertible MA(q) part can be written in the following ways:

φ(B)Xt = θ(B)Wt
Xt = (θ(B)/φ(B))Wt
(φ(B)/θ(B))Xt = Wt

Note: The second form is the Wold decomposition of the model. Many finite and infinite sequences can be generated with small p and q.
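The first form, φ(B)Xt = θ(B)Wt, translates directly into a recursion for simulation. A Python sketch for an ARMA(1, 1) (the values φ1 = 0.5, θ1 = 0.4 are illustrative; the lag-1 ACF formula used as a check is the standard ARMA(1, 1) result, not derived on these slides):

```python
import numpy as np

rng = np.random.default_rng(3)
phi, theta, n = 0.5, 0.4, 50_000
w = rng.normal(size=n)

# phi(B) X_t = theta(B) W_t  <=>  X_t = phi*X_{t-1} + W_t + theta*W_{t-1}
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + w[t] + theta * w[t - 1]

# known lag-1 autocorrelation of ARMA(1,1):
# rho(1) = (1 + phi*theta)(phi + theta) / (1 + 2*phi*theta + theta^2)
rho1 = (1 + phi * theta) * (phi + theta) / (1 + 2 * phi * theta + theta**2)
xc = x - x.mean()
sample_rho1 = float(np.sum(xc[1:] * xc[:-1]) / np.sum(xc * xc))
print(rho1, sample_rho1)   # both near 0.692
```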


7.6 Partial autocorrelation function

An MA(q) model has zero autocorrelation after lag q, so it is easy to identify the order q of an MA using the autocorrelation function.

In contrast, the autocorrelation of an AR(p) never returns to 0, regardless of the order p.

Usually, the acf of an AR(p) is difficult to interpret.

The acf cannot be used to determine the order p of an AR(p).

We introduce the partial autocorrelation function (PACF) for this purpose.


Definition of the partial autocorrelation function

Definition

The partial autocorrelation function (PACF) of a stationary time series Xt, denoted φhh for h = 1, 2, ..., is given by

for h = 1:  φ11 = Corr(Xt, Xt−1) = ρX(1)
for h ≥ 2:  φhh = Corr(Xt − X̂t, Xt−h − X̂t−h)

where X̂t (and likewise X̂t−h) denotes the fitted values of the regression on the intermediate lags {Xt−1, Xt−2, ..., Xt−(h−1)}.

The PACF at lag h is the autocorrelation between Xt and Xt−h after the effect of the lower lags Xt−j, j = 1, 2, ..., h − 1, has been accounted for.

Beware: This notation is easily confused with the AR coefficients. Note the double rather than single subscript.


Why use the PACF for an AR(p)?

There is a good reason for the similar notations φhh and φh.

If Xt ∼ AR(1), then φhh = 0 for h > 1 and φ11 = φ1.
If Xt ∼ AR(2), then φhh = 0 for h > 2 and φ22 = φ2.
...
If Xt ∼ AR(p), then φhh = 0 for h > p and φpp = φp.

Hence, the PACF can help to determine the order p of an AR(p).

The PACF also gives an estimate for the highest AR(p) coefficient φp. However, there are better ways to fit ARMA models, as we will see later.


Example: PACF of an AR(2)

Autoregression AR(2): Xt = Xt−1 − 0.9Xt−2 + Wt

Description of dependence:
Long clusters
Positive dependence at neighbouring points
Switching behaviour within cluster

[Figure: simulated series (t = 0, ..., 500) with sample and true autocorrelation and partial autocorrelation plotted against lag h, 0 ≤ h ≤ 25.]

Example: PACF of an AR(1)

Autoregression AR(1): Xt = 0.5Xt−1 + Wt

Description of dependence:
Short clusters as dependence decays quickly
Positive dependence within cluster

[Figure: simulated series with sample and true ACF and PACF against lag h, 0 ≤ h ≤ 25.]

Example: PACF of an AR(1)

Autoregression AR(1): Xt = −0.9Xt−1 + Wt

Description of dependence:
Long clusters
Strong switching at every timepoint within cluster

[Figure: simulated series with sample and true ACF and PACF against lag h, 0 ≤ h ≤ 25.]

Example: complex AR(3) structure

Autoregression AR(3): Xt = 0.9Xt−1 − 0.5Xt−2 + 0.3Xt−3 + Wt

Description of dependence:
Complex dependence
Mixture of positive and negative dependence within cluster, which can cancel out
Structure hard to tell from ACF but easier from PACF

[Figure: simulated series with sample and true ACF and PACF against lag h, 0 ≤ h ≤ 25.]

PACF of MA(q)

We saw that an invertible MA(q) can be written as

Xt = Σ_{j=1}^{∞} πj Xt−j + Wt.

Hence, the PACF can be non-zero at all lags.

In the special case of an invertible MA(1),

Xt = μ + Wt + θ1Wt−1  with |θ1| < 1,

it can be shown that

φhh = −(−θ1)^h (1 − θ1²) / (1 − θ1^{2(h+1)}),  h ≥ 1.

(Note the leading minus sign, which gives φ11 = θ1/(1 + θ1²) = ρX(1).)

This PACF decays to 0 at higher lags, but never equals 0.
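A short Python sketch of the MA(1) PACF, using the sign convention under which φ11 = ρX(1) = θ1/(1 + θ1²):

```python
def ma1_pacf(h, theta):
    # phi_hh = -(-theta)^h (1 - theta^2) / (1 - theta^(2(h+1))),  h >= 1
    return -((-theta) ** h) * (1 - theta**2) / (1 - theta ** (2 * (h + 1)))

theta = 0.9
for h in range(1, 5):
    print(h, ma1_pacf(h, theta))
# lag 1 equals rho(1) = 0.9/1.81; the signs alternate and the magnitudes decay
```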
Example: PACF of MA(1)

Moving Average MA(1): Xt = Wt + 0.9Wt−1

Description of dependence:
Always short clusters for a low-order MA
Clusters of size 2
Strong positive dependence in cluster

[Figure: simulated series with sample and true ACF and PACF against lag h, 0 ≤ h ≤ 25; the ACF cuts off after lag 1 while the PACF tails off.]
Summary of autocorrelation behaviour

Let’s summarise the ACF and PACF properties for stationary AR(p) and invertible MA(q) models.

     |  AR(p)                |  MA(q)
ACF  |  Tails off            |  Cuts off after lag q
PACF |  Cuts off after lag p |  Tails off
The PACF is useful for determining the order of an AR(p) and the ACF for the MA(q).