# ARMA Models: Parameter Redundancy and Forecasting

ARMA Models Parameter redundancy
7.8 Parameter redundancy
Remember: A zero mean ARMA(p, q) model is defined by
Xt − φ1Xt−1 − ··· − φpXt−p = Wt + θ1Wt−1 + ··· + θqWt−q, i.e. φ(B)Xt = θ(B)Wt,
with AR and MA polynomials (φp ≠ 0 and θq ≠ 0)
φ(B) = 1 − φ1B − ··· − φpB^p
θ(B) = 1 + θ1B + ··· + θqB^q
and white noise, Wt ∼ WN(0, σ_W²), with finite variance σ_W² > 0.
A possibly infinite MA representation can produce any stationary time series. We therefore need to be careful that features of the AR part are not already modelled by the MA part;
otherwise some of the parameters in the model are redundant.
Example: Parameter redundancy
(Example 3.4 of Shumway and Stoffer) Consider the white noise process Xt = Wt .
Let’s consider lag 1 and rescale, to give 0.5Xt−1 = 0.5Wt−1
which can then be subtracted from both sides and rearranged as Xt − 0.5Xt−1 = Wt − 0.5Wt−1.
This looks like an ARMA(1, 1), but Xt is still white noise; the parameter redundancy hides this.
Let’s rewrite the possible ARMA(1,1) in backshift form (1 − 0.5B)Xt = (1 − 0.5B)Wt
Notice that there is the common factor (1 − 0.5B) on both sides. If we divide by the common factor, we get the original white noise model Xt = Wt.
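To make the common-factor check concrete, here is a minimal sketch (using Python/NumPy rather than the R `polyroot` shown later in these slides; the choice of NumPy is mine, not the slides'). `np.roots` takes coefficients from the highest degree down:

```python
import numpy as np

# Candidate ARMA(1,1): X_t - 0.5 X_{t-1} = W_t - 0.5 W_{t-1}
# phi(z) = 1 - 0.5 z and theta(z) = 1 - 0.5 z.
# np.roots wants [highest-degree coefficient, ..., constant].
phi_roots = np.roots([-0.5, 1.0])
theta_roots = np.roots([-0.5, 1.0])

print(phi_roots)    # [2.]
print(theta_roots)  # [2.]
# The shared root z = 2 corresponds to the common factor (1 - 0.5B);
# cancelling it leaves the white noise model X_t = W_t.
```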
Common factors of the characteristic polynomial
Every polynomial factorises over the complex numbers.
This means for the characteristic polynomials of the AR and MA components
φ(z) = a0(a1 − z)(a2 − z)···(ap − z)
θ(z) = b0(b1 − z)(b2 − z)···(bq − z).
Hence, a1, …, ap and b1, …, bq are the roots of the characteristic polynomials.
The polynomials have a common factor if and only if they have a common root.
To check for parameter redundancy, we can look for common roots of the characteristic polynomials.
The characteristic polynomials in the example above
φ(z) = (1 − 0.5z) = θ(z) make the parameter redundancy obvious.
Characteristic polynomials are key!
Three key features of the ARMA(p, q) can be checked using the characteristic polynomials.
AR(p) is stationary (causal) when the roots of the characteristic polynomial φ(z) are outside the unit circle, |z| > 1.
MA(q) is invertible if the roots of the characteristic polynomial θ(z) are outside the unit circle |z| > 1.
ARMA(p, q) has no parameter redundancy if the characteristic polynomials φ(z) and θ(z) have no common factors.
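These three checks are easy to automate. Below is a sketch of a helper (Python/NumPy; the function name `check_arma` and the tolerance are my own choices, not from the slides) that reports stationarity, invertibility, and redundancy from the coefficient vectors:

```python
import numpy as np

def check_arma(phi, theta, tol=1e-6):
    """phi = [phi_1, ..., phi_p] for phi(z) = 1 - phi_1 z - ... - phi_p z^p,
    theta = [theta_1, ..., theta_q] for theta(z) = 1 + theta_1 z + ... + theta_q z^q."""
    # np.roots wants coefficients from the highest degree down.
    ar_poly = np.concatenate(([-c for c in reversed(phi)], [1.0]))
    ma_poly = np.concatenate((list(reversed(theta)), [1.0]))
    ar_roots = np.roots(ar_poly)
    ma_roots = np.roots(ma_poly)
    stationary = bool(np.all(np.abs(ar_roots) > 1))  # all AR roots outside |z| = 1
    invertible = bool(np.all(np.abs(ma_roots) > 1))  # all MA roots outside |z| = 1
    # A (numerically) shared root means a common factor, i.e. redundancy.
    redundant = any(np.min(np.abs(ma_roots - r)) < tol for r in ar_roots)
    return stationary, invertible, redundant

# Example 3.7 below: stationary and invertible, but parameter-redundant.
print(check_arma([0.4, 0.45], [1.0, 0.25]))  # (True, True, True)
```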
Example: Parameter Redundancy
(Example 3.7 of Shumway and Stoffer) Consider the process
Xt − 0.4Xt−1 − 0.45Xt−2 = Wt + Wt−1 + 0.25Wt−2.
The characteristic polynomials factorise as
φ(z) = 1 − 0.4z − 0.45z² = (1 + 0.5z)(1 − 0.9z)
θ(z) = 1 + z + 0.25z² = (1 + 0.5z)²
```r
> polyroot(c(1, -0.4, -0.45))
 1.111111-0i -2.000000+0i
> polyroot(c(1, 1, 0.25))
 -2-0i -2+0i
```
So the AR component is stationary and the MA is invertible.
But both polynomials have a common factor due to the common root −2.
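The cancellation itself can also be done numerically. As a sketch (Python/NumPy here, mirroring the R `polyroot` output above; note NumPy's coefficient order is the reverse of R's), `np.polydiv` divides the common factor (1 + 0.5z) out of both polynomials:

```python
import numpy as np

# Coefficients highest degree first (the reverse of R's polyroot order).
phi = [-0.45, -0.4, 1.0]    # phi(z) = 1 - 0.4 z - 0.45 z^2
theta = [0.25, 1.0, 1.0]    # theta(z) = 1 + z + 0.25 z^2
common = [0.5, 1.0]         # the shared factor (1 + 0.5 z)

phi_red, phi_rem = np.polydiv(phi, common)
theta_red, theta_rem = np.polydiv(theta, common)

print(phi_red, phi_rem)      # quotient 1 - 0.9 z, remainder 0
print(theta_red, theta_rem)  # quotient 1 + 0.5 z, remainder 0
# Zero remainders confirm the factorisation; the reduced model is
# (1 - 0.9B) X_t = (1 + 0.5B) W_t.
```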
Example: Parameter Redundancy
Xt − 0.4Xt−1 − 0.45Xt−2 = Wt + Wt−1 + 0.25Wt−2.
Cancelling the common factor (1 + 0.5B) shows this is actually the ARMA(1, 1) model given by
(1 − 0.9B)Xt = (1 + 0.5B)Wt
Xt − 0.9Xt−1 = Wt + 0.5Wt−1.
It is stationary because the AR root z = 10/9 is outside the unit circle.
It is invertible because the MA root z = −2 is also outside the unit circle.
Example: Parameter Redundancy
ARMA(1, 1)
Xt − 0.9Xt−1 = Wt + 0.5Wt−1
[Figure: 1000 simulated values of the ARMA(1,1) process, with the sample ACF and partial ACF up to lag 20 and the theoretical ACFs of the ARMA(2,2) and ARMA(1,1) parametrisations overlaid.]
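The figure summarised above can be reproduced in outline with a short simulation. This is a sketch (Python/NumPy, with an assumed seed and a hand-rolled sample ACF) of the series and its sample autocorrelations:

```python
import numpy as np

rng = np.random.default_rng(0)   # assumed seed, for reproducibility
n, burn = 1000, 200
w = rng.standard_normal(n + burn)
x = np.zeros(n + burn)
# X_t = 0.9 X_{t-1} + W_t + 0.5 W_{t-1}
for t in range(1, n + burn):
    x[t] = 0.9 * x[t - 1] + w[t] + 0.5 * w[t - 1]
x = x[burn:]                     # discard the burn-in transient

def sample_acf(series, max_lag):
    centred = series - series.mean()
    denom = centred @ centred
    return np.array([centred[: len(centred) - h] @ centred[h:] / denom
                     for h in range(max_lag + 1)])

rho = sample_acf(x, 20)
# Theoretical lag-1 ACF of an ARMA(1,1) with phi = 0.9, theta = 0.5:
# rho(1) = (1 + phi*theta)(phi + theta) / (1 + 2*phi*theta + theta^2)
rho1 = (1 + 0.9 * 0.5) * (0.9 + 0.5) / (1 + 2 * 0.9 * 0.5 + 0.5 ** 2)
print(rho[1], rho1)  # the sample value sits close to the theoretical 0.944
```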

ARMA Models Forecasting
7.9 Forecasting
How does forecasting work with an ARMA(p, q) model? Assume we have a sample of data collected up to the present,
X1:n = {X1, X2, …, Xn}.
The parameters of the ARMA(p, q) are estimated from this
historical data.
In the following slides we will utilise the true parameters for notational convenience, which in practice will be replaced by their sample estimates.
For the sake of simplicity, we will neglect the error coming from estimating the ARMA coefficients.
We will only consider stationary and invertible models.
Forecasting errors and loss function
We are interested in the m-step-ahead forecast X̃n+m|n of Xn+m, m time points after the current time n.
The mean squared error is the most commonly used “loss function” for evaluating the performance of forecasts
MSE(X̃n+m|n) = E[(Xn+m − X̃n+m|n)²]
= Var(Xn+m − X̃n+m|n) + (E[Xn+m − X̃n+m|n])²
A straightforward forecast is the conditional expectation X̃n+m|n = E(Xn+m | X1:n), where X1:n denotes X1, X2, …, Xn.
This forecast minimises the MSE.
The textbook denotes the forecast x_{n+m}^n, but the superscript n is easily confused with a power, so X̃n+m|n is used here.
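The claim that the (conditional) mean minimises the MSE can be illustrated numerically. A minimal sketch (Python/NumPy; the simulated data and the grid of candidate constant forecasts are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=2.0, scale=1.0, size=100_000)  # stand-in draws of X_{n+m}

candidates = np.linspace(0.0, 4.0, 81)            # candidate constant forecasts
mse = np.array([np.mean((x - c) ** 2) for c in candidates])
best = candidates[np.argmin(mse)]
print(best, x.mean())  # the empirical MSE is minimised at (the grid point nearest) the mean
```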
Forecasting with an MA(1) model
Let's motivate the concepts using an MA(1) model, Xt = Wt + θ1Wt−1. The one- and two-step forecasts are
E[Xn+1|X1:n] = E[Wn+1|X1:n] + θ1E[Wn|X1:n] = θ1Wn,
E[Xn+2|X1:n] = E[Wn+2|X1:n] + θ1E[Wn+1|X1:n] = 0.
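The one-step forecast θ1Wn for the MA(1) model Xt = Wt + θ1Wt−1 can be sanity-checked by Monte Carlo: average Xn+1 = Wn+1 + θ1Wn over many draws of the future innovation. A sketch with assumed values θ1 = 0.5 and a fixed seed:

```python
import numpy as np

theta1 = 0.5
rng = np.random.default_rng(2)

w_n = rng.standard_normal()   # the last observed innovation W_n
x_hat_1 = theta1 * w_n        # one-step forecast E[X_{n+1} | X_{1:n}]
x_hat_2 = 0.0                 # two steps ahead the conditional mean is 0

# Average X_{n+1} = W_{n+1} + theta1 * W_n over many future innovations:
w_future = rng.standard_normal(200_000)
mc_mean = np.mean(w_future + theta1 * w_n)
print(mc_mean, x_hat_1)       # the two agree up to Monte Carlo error
```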
MSPE for an MA(1) model
The mean square prediction error for the MA(1) is
Var(X̃n+1|n) = E[(Xn+1 − X̃n+1|n)² | X1:n] = E[(Wn+1 + θ1Wn − θ1Wn)² | X1:n] = σ_W²;
Var(X̃n+2|n) = E[(Xn+2 − X̃n+2|n)² | X1:n] = E[(Wn+2 + θ1Wn+1 − 0)² | X1:n] = σ_W²(1 + θ1²);
Var(X̃n+m|n) = E[(Xn+m − X̃n+m|n)² | X1:n] = σ_W²(1 + θ1²) for m ≥ 2.
The MSPE reverts to the MA(1) process variance beyond the order q = 1.
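These MSPE formulas can likewise be checked by simulation. A sketch with assumed values θ1 = 0.5 and σ_W² = 1 (Python/NumPy, fixed seed):

```python
import numpy as np

theta1 = 0.5                     # assumed MA(1) coefficient, sigma_W^2 = 1
rng = np.random.default_rng(3)
N = 200_000

w_n1 = rng.standard_normal(N)    # draws of W_{n+1}
w_n2 = rng.standard_normal(N)    # draws of W_{n+2}

err1 = w_n1                      # X_{n+1} - X~_{n+1|n} = W_{n+1}
err2 = w_n2 + theta1 * w_n1      # X_{n+2} - X~_{n+2|n} = W_{n+2} + theta1 W_{n+1}

mspe_1 = np.mean(err1 ** 2)      # close to sigma_W^2 = 1
mspe_2 = np.mean(err2 ** 2)      # close to sigma_W^2 (1 + theta1^2) = 1.25
print(mspe_1, mspe_2)
```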
Forecast for ARMA(p,q) Models
The concepts can be extended to the general AR(p) or MA(q). In either case, the sample X1:n = {X1, X2, …, Xn} must be large compared to p and q.
The general ARMA(p, q) can be represented as an MA(∞) or an AR(∞), and the same arguments as above apply. However, there is only a finite sample, so we cannot evaluate the required infinite sums.
A practical workaround is to truncate the infinite sum. This often works well when n is large, since the coefficients in the infinite sum decay to zero.
The truncation concept is covered in the textbook Shumway and Stoffer.
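As a sketch of the truncation idea, take the reduced ARMA(1,1) of the earlier example, (1 − 0.9B)Xt = (1 + 0.5B)Wt. The π-weights of its AR(∞) form π(B)Xt = Wt, with π(z) = φ(z)/θ(z), satisfy π0 = 1, π1 = −(φ + θ), and πj = −θπj−1 for j ≥ 2, so the truncated one-step forecast is −Σ πjXn+1−j. The code below (Python/NumPy; the seed, sample size, and zero pre-sample convention are assumptions) compares it to the exact forecast φXn + θWn:

```python
import numpy as np

phi, theta = 0.9, 0.5
rng = np.random.default_rng(4)
n = 200

# Simulate with the zero pre-sample convention (x, w treated as 0 for t < 0).
w = rng.standard_normal(n)
x = np.zeros(n)
x[0] = w[0]
for t in range(1, n):
    x[t] = phi * x[t - 1] + w[t] + theta * w[t - 1]

# pi-weights of the AR(infinity) representation pi(B) X_t = W_t:
pi = np.zeros(n + 1)
pi[0] = 1.0
pi[1] = -(phi + theta)
for j in range(2, n + 1):
    pi[j] = -theta * pi[j - 1]     # weights decay like 0.5^j

# Truncated one-step forecast: X~_{n+1|n} = -sum_{j=1}^{n} pi_j X_{n+1-j}
forecast_trunc = -np.dot(pi[1:], x[::-1])
# Exact one-step forecast when the innovations are known:
forecast_exact = phi * x[-1] + theta * w[-1]
print(forecast_trunc, forecast_exact)  # agree to numerical precision here
```

Under the zero pre-sample convention the truncated and exact forecasts coincide; with real data the truncation error is of the order of the neglected, geometrically decaying weights.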