ST227 Survival Models - Part II: Continuous Time Markov Chains

George Tzougas, Department of Statistics, London School of Economics

December 29, 2018

1 Stochastic Processes

1.1 Introduction

A stochastic process is a model for a time-dependent random phenomenon.

Thus, just as a single random variable describes a static random phenomenon, a stochastic process is a collection of random variables, Y(t) = Y_t, one for each time t in some set J.

The process is denoted {Y_t : t ∈ J}. The set of values that the random variables Y_t can take is called the state space of the process, S.

The first choice that one faces when selecting a stochastic process to model a real-life situation is that of the nature (discrete or continuous) of the time set J and of the state space S.

Example 1: Discrete state space with discrete time changes

A motor insurance company reviews the status of its customers yearly. Three levels of discount are possible (0, 25%, 40%) depending on the accident record of the driver.

In this case the appropriate state space is S = {0, 25, 40} and the time set is J = {0, 1, 2, ...}, where each interval represents a year.
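As a concrete illustration, such a discrete-state, discrete-time chain is easy to simulate. The sketch below makes two assumptions that are not in the notes: a claim-free year moves the driver up one discount level (capped at 40%), any claim resets the discount to 0%, and a claim-free year has probability 0.9.

```python
import random

# A minimal sketch of Example 1 as a discrete-time chain. The movement rules
# and the claim-free probability are illustrative assumptions; the notes only
# fix the state space S = {0, 25, 40} and the yearly time set.
STATES = [0, 25, 40]      # discount levels (%), the state space S
P_CLAIM_FREE = 0.9        # assumed probability of a claim-free year

def next_state(state, claim_free):
    """One yearly review: up one level if claim-free, reset to 0% otherwise."""
    if claim_free:
        i = STATES.index(state)
        return STATES[min(i + 1, len(STATES) - 1)]
    return STATES[0]

def simulate(years, seed=0):
    """Simulate one driver's discount level over the given number of years."""
    rng = random.Random(seed)
    state, path = STATES[0], [STATES[0]]
    for _ in range(years):
        state = next_state(state, rng.random() < P_CLAIM_FREE)
        path.append(state)
    return path

print(simulate(10))
```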

Example 2: Discrete state space with continuous time changes

A life insurance company classifies its policyholders as healthy, ill or dead.

Hence the state space S = {h, i, d}.

As for the time set, it is natural to take J = [0, ∞), as illness or death can occur at any time. This problem is studied in some detail in what follows (Continuous Time Markov Chains).

Example 3: Continuous state space

Claims of unpredictable amounts reach an automobile insurance company at unpredictable times; the company needs to forecast the cumulative claim over [0, t] in order to assess the risk that it might not be able to meet its liabilities.

It is standard practice to use [0, ∞) both for S and J in this problem.

It is important to be able to conceptualise the nature of the state space of any process which is to be analysed, and to establish whether it is most usefully modelled using a discrete, a continuous, or a mixed time domain.

Usually the choice of state space will be clear from the nature of the process being studied (as, for example, with the healthy-ill-dead model), but whether a continuous or discrete time set is used will often depend on the specific aspects of the process which are of interest, and upon practical issues like the time points for which data are available.

1.1.1 The Markov property

In probability theory and statistics, the term Markov property refers to the memoryless property of a stochastic process. It is named after the Russian mathematician Andrey Markov.

A stochastic process {Y(t)}_{t≥0} is a Markov process if the conditional probability distribution of future states depends only on the present state.

For example, if s ≥ 0 and we consider states i and j, then we can say that Y(t) has the Markov property if P(Y(t + s) = j | Y(t) = i) does not depend on any information about the process before time t.

Hence, the future development of Y (t) can be predicted from its present state alone, without any reference to its history.

2 Continuous Time Markov Chains

2.1 Introduction

Up till now our approach has been to specify a future lifetime in terms of a random variable T_x. In this chapter we look at things rather differently, and use a Continuous Time Markov Chain (or a Markov model) of transfers between states.

In the simplest case, a life can be alive or dead, and this gives a two-state model of mortality which is known as the dead-or-alive model. We often represent this in a diagram like:

The probability that a life alive at any age x should be dead at some future age is determined by an age-dependent force of mortality μ_{x+t}, t ≥ 0, or transition intensity.

The main advantages of the Markov model approach to modelling mortality over the random variable approach are that:

- it can be generalised to multiple decrements or multiple state models, e.g. the three-state model {Well, Ill, Dead}, and

- it deals easily with censoring, a common feature of mortality data, which we will talk about later on in the chapters ahead.

We make two fundamental assumptions in the 2-state model above.

1. (AS1) Markov assumption. The probability that a life now aged x will be in either state {Alive, Dead} at any future time x + t depends only on the age x and the state currently occupied.

2. (AS2) The probability _{dt}q_{x+t} is given by

_{dt}q_{x+t} = μ_{x+t} dt + o(dt), t ≥ 0. (1)

Informally, assumption 1 says that the future depends only on the present and not on the past, while assumption 2 says that the probability of death in a short interval of time dt is approximately proportional to the force of mortality at that time.

Reminder: The function f(h) is said to be o(h), or "little o of h", if

lim_{h→0} f(h)/h = 0. (2)

In other words, f(h) is o(h) if f(h) → 0 faster than h does.
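The definition in (2) can be checked numerically. The sketch below (not part of the notes) shows that f(h) = h² is o(h), because f(h)/h shrinks with h, while f(h) = 3h is not, because f(h)/h stays at 3.

```python
# Numerical illustration of the little-o definition: f is o(h) exactly when
# f(h)/h -> 0 as h -> 0. Here f(h) = h**2 qualifies, while f(h) = 3*h does not.
def ratio(f, h):
    return f(h) / h

hs = [10.0 ** (-k) for k in range(1, 7)]
print([ratio(lambda h: h ** 2, h) for h in hs])  # shrinks towards 0
print([ratio(lambda h: 3 * h, h) for h in hs])   # stuck at 3: not o(h)
```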

2.2 Computation of _t p_x in the dead-or-alive model

Our model is defined in terms of the transition intensities μ_x. How can we compute probabilities like _t p_x, _t q_x, etc.?

Lemma 1 Assumptions 1 and 2 imply that

_t p_x = exp(-∫_0^t μ_{x+s} ds). (3)

This agrees with the well-known result obtained with the future-lifetime approach in Part I.

Proof: Let s < t. Notice first that _s p_x is well defined by the Markov property, i.e. how the life got to age x is irrelevant. We consider the small interval of time immediately after x + s, and ask: how can a life aged x become a life aged x + s + ds? The diagram may help:

The probability that (x) survives s + ds years equals the probability that they survive s years times the probability that, at age x + s, they survive a further ds years:

_{s+ds}p_x = _s p_x · _{ds}p_{x+s}
= _s p_x (1 - _{ds}q_{x+s})
= _s p_x (1 - μ_{x+s} ds + o(ds)) by Assumption 2. (4)

Bringing the term _s p_x from the right to the left-hand side of (4), we get

_{s+ds}p_x - _s p_x = -_s p_x μ_{x+s} ds + o(ds). (5)

Dividing both sides of (5) by ds and letting ds → 0, we find

(_{s+ds}p_x - _s p_x)/ds = -_s p_x μ_{x+s} + o(ds)/ds

⇒ ∂/∂s (_s p_x) = -_s p_x μ_{x+s} (6)

⇒ _t p_x = exp(-∫_0^t μ_{x+s} ds)

on integrating (6) from 0 to t; this result will be proved in class.
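Lemma 1 can be sanity-checked numerically: integrating the differential equation (6) step by step should reproduce the exponential formula (3). The sketch below does this with a Gompertz-style force of mortality, which is an illustrative assumption, not part of the notes.

```python
import math

# Numerical check of Lemma 1: integrate d/ds (s p_x) = -(s p_x) mu_{x+s}
# by a fine Euler scheme and compare with t p_x = exp(-INT_0^t mu_{x+s} ds).
# The Gompertz-style force of mortality below is an illustrative assumption.
def mu(age):
    return 0.0001 * math.exp(0.1 * age)

def survival_euler(x, t, steps=200_000):
    p, ds = 1.0, t / steps
    for k in range(steps):
        p -= p * mu(x + k * ds) * ds   # s+ds p_x = s p_x (1 - mu_{x+s} ds)
    return p

def survival_exact(x, t, steps=200_000):
    ds = t / steps                     # trapezoidal rule for the integral
    integral = (0.5 * (mu(x) + mu(x + t))
                + sum(mu(x + k * ds) for k in range(1, steps))) * ds
    return math.exp(-integral)

print(survival_euler(40, 30), survival_exact(40, 30))  # nearly identical
```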
2.3 Multi-state Markov models

The 2-state model of mortality can be extended to any number of states. Many insurance products, e.g. Permanent Health Insurance (PHI), can be modelled by a multi-state model. The set S = {Healthy, Ill, Dead} = {h, i, d} is the state space. Here is a 3-state model for PHI:

Let g and h be any two states. We extend the assumptions for the 2-state model to cope with a multi-state model. We do this in terms of the transition probability _t p^{gh}_x (analogous to _t p_x) and the force of transition (aka transition intensity) μ^{gh}_x (analogous to μ_x).

We define the transition probability

_t p^{gh}_x = Pr(In state h at time x + t | In state g at time x) (7)

for any two states g and h.

Also, for z > 0, define the force of transition

μ^{gh}_x = lim_{z→0+} (_z p^{gh}_x)/z, g ≠ h. (8)

2.4 Fundamental Assumptions for Multi-State models

1. (AS1) Markov assumption. The probability that a life now aged x will be in a particular state at any future time x + t depends only on the age x and the state currently occupied.

2. (AS2) For any two distinct states g and h, the transition probability _{dt}p^{gh}_{x+t} is given by

_{dt}p^{gh}_{x+t} = μ^{gh}_{x+t} dt + o(dt), t ≥ 0. (9)

3. (AS3) The probability that a life makes two or more transitions in time dt is o(dt). Assumption 3 says, in effect, that only one transfer can take place at a time.

What does _t p^{gg}_x mean?

This is the probability that we are in state g at time x + t, given that we are in state g at time x.

This does not imply that we have been in state g for the whole of the time interval from x to x + t; for that we define the occupation probability.

Occupation Probability:

_t p̄^{gg}_x = Pr(In state g from x to x + t | In state g at time x) (10)

Note that _t p^{gg}_x ≥ _t p̄^{gg}_x, because _t p̄^{gg}_x is the occupation probability, i.e. the individual never leaves state g between ages x and x + t.

The important distinction is that _t p^{gg}_x includes the possibility that the individual leaves state g between ages x and x + t, provided they are back in state g at age x + t. This result will be shown in the next class.

However, _t p^{gg}_x will be equal to _t p̄^{gg}_x in one common situation, namely when return to state g is impossible.

For example, in this model of terminal illness

we have _t p^{ww}_x = _t p̄^{ww}_x since return to the Well state is impossible.

In a similar fashion, we also have _t p^{ii}_x = _t p̄^{ii}_x.
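The gap between _t p^{gg}_x and _t p̄^{gg}_x can be seen by simulation. The Monte Carlo sketch below uses a 3-state PHI model in which return to Healthy is possible; the constant intensity values are illustrative assumptions. The estimate of _t p̄^{hh}_x matches exp(-(σ+μ)t), since the holding time in h is exponential with rate σ+μ, while the estimate of _t p^{hh}_x is strictly larger, because some lives fall ill and recover by time t.

```python
import math
import random

# Monte Carlo sketch of t p^gg_x >= t pbar^gg_x in the PHI model. The constant
# intensities are illustrative assumptions:
# sigma (h->i), rho (i->h), mu (h->d), nu (i->d).
sigma, rho, mu, nu = 0.05, 0.20, 0.01, 0.04

def one_path(t, rng):
    """Return (in h at time t, stayed in h throughout [0, t]) for one life."""
    state, clock, left_h = "h", 0.0, False
    while state != "d":
        if state == "h":
            rate, moves = sigma + mu, [("i", sigma), ("d", mu)]
        else:
            rate, moves = rho + nu, [("h", rho), ("d", nu)]
        clock += rng.expovariate(rate)       # exponential holding time
        if clock > t:
            break                            # no further jump before time t
        u, acc = rng.random() * rate, 0.0    # choose the destination state
        for nxt, r in moves:
            acc += r
            if u <= acc:
                state = nxt
                break
        if state != "h":
            left_h = True
    return state == "h", state == "h" and not left_h

rng, n, t = random.Random(1), 20_000, 10.0
hits = [one_path(t, rng) for _ in range(n)]
p_hh = sum(a for a, _ in hits) / n           # estimate of t p^hh_x
pbar_hh = sum(b for _, b in hits) / n        # estimate of t pbar^hh_x
print(p_hh, pbar_hh, math.exp(-(sigma + mu) * t))
```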

2.5 Kolmogorov forward equations

What can we say about the relationship between the transition intensities μ^{gh}_{x+t}, g ≠ h, and the transition probabilities _t p^{gh}_x?

We will look at two examples in detail before giving the general result.

Example 1 Consider the 3-state model for working, retiring and dying.

In this simple example we will assume two constant transition intensities: μ (from Working to Dead) and ν (from Working to Retired).

There are three transition probabilities which correspond to the events: (a) Working to Dead, (b) Working to Retired and (c) Working to Working.

(a) Working to Dead, or _t p^{wd}_x. We use a standard method in all problems of this kind:

Step 1: We suppose we are in the destination state (here Dead) at time x + t + dt.

Step 2: We list the states we could be in at time x + t, i.e. just before time x + t + dt.

The diagram might help:

The left end represents the starting position at time x.

The right end represents the final position at time x + t + dt.

The middle position lists the states that can be occupied at x + t, immediately before the final position at x + t + dt.

Thus, we have that

_{t+dt}p^{wd}_x = _t p^{wd}_x + _t p^{ww}_x · _{dt}p^{wd}_{x+t} (11)
= _t p^{wd}_x + _t p^{ww}_x (μ dt + o(dt)). (12)

Rearranging, we get

(_{t+dt}p^{wd}_x - _t p^{wd}_x)/dt = μ _t p^{ww}_x + o(dt)/dt, (13)

so taking the limit dt → 0 we get

∂/∂t (_t p^{wd}_x) = μ _t p^{ww}_x. (14)

This is the Kolmogorov forward equation for _t p^{wd}_x.

(b) Working to Retired, or _t p^{wr}_x. We can apply exactly the same argument to _t p^{wr}_x, but it is better to use the symmetry of the diagram. This gives the Kolmogorov forward equation for _t p^{wr}_x as

∂/∂t (_t p^{wr}_x) = ν _t p^{ww}_x. (15)

(c) Working to Working. First, notice that return to the Working state is impossible, so _t p^{ww}_x = _t p̄^{ww}_x. The diagram of possible routes is very simple:

The probability of transfer out of state Working in time dt is

_{dt}p^{wd}_{x+t} + _{dt}p^{wr}_{x+t} = μ dt + o(dt) + ν dt + o(dt) (16)
= μ dt + ν dt + o(dt). (17)

Hence, the probability we remain in state Working for time dt is

1 - μ dt - ν dt + o(dt). (18)

Thus,

_{t+dt}p̄^{ww}_x = _t p̄^{ww}_x (1 - μ dt - ν dt + o(dt)). (19)

Rearranging and letting dt → 0, we find

∂/∂t (_t p̄^{ww}_x) = -(μ + ν) _t p̄^{ww}_x, (20)

and putting _t p^{ww}_x = _t p̄^{ww}_x gives the Kolmogorov equation for _t p^{ww}_x:

∂/∂t (_t p^{ww}_x) = -(μ + ν) _t p^{ww}_x. (21)

We now have a system of three differential equations, (14), (15) and (21), for the three unknown transition probabilities _t p^{ww}_x, _t p^{wr}_x and _t p^{wd}_x.

Note that

_t p^{ww}_x + _t p^{wd}_x + _t p^{wr}_x = 1, (22)

since a life in state w at time x must be in some state at time x + t.
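With constant intensities the system (14), (15), (21) has a closed-form solution: _t p^{ww}_x = e^{-(μ+ν)t} from (21), and substituting into (14) and (15) and integrating shows that the exits split in proportion μ : ν. The sketch below checks this against a direct Euler integration of the same equations; the numerical values of μ and ν are illustrative assumptions.

```python
import math

# Closed-form solution of (14), (15), (21) for constant mu (w->d) and
# nu (w->r), checked against an Euler integration of the same ODEs.
# The numerical values of mu and nu are illustrative assumptions.
mu, nu, t = 0.02, 0.08, 15.0

pww = math.exp(-(mu + nu) * t)          # solves (21)
pwd = mu / (mu + nu) * (1.0 - pww)      # integrating (14)
pwr = nu / (mu + nu) * (1.0 - pww)      # integrating (15)

steps = 100_000
dt = t / steps
eww, ewd, ewr = 1.0, 0.0, 0.0           # start in Working at age x
for _ in range(steps):
    eww, ewd, ewr = (eww - (mu + nu) * eww * dt,
                     ewd + mu * eww * dt,
                     ewr + nu * eww * dt)

print(pww + pwd + pwr)                                  # equals 1, cf. (22)
print(abs(pww - eww), abs(pwd - ewd), abs(pwr - ewr))   # small Euler error
```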

Example 2 For our second example we return to the 3-state model for PHI.

We have assumed that the transition intensities σ (Healthy to Ill), ρ (Ill to Healthy), μ (Healthy to Dead) and ν (Ill to Dead) are constant.

There are six transition probabilities: _t p^{hh}_x, _t p^{hi}_x, _t p^{hd}_x, _t p^{ii}_x, _t p^{ih}_x and _t p^{id}_x.

We look in detail at the derivation of the Kolmogorov equations for three of these: _t p^{hh}_x, _t p^{hi}_x and _t p^{hd}_x. (The remaining three equations can then be written down by using symmetry arguments.)

(a) Healthy to Ill, or _t p^{hi}_x. The Ill state at time x + t + dt can be reached from either the Healthy or the Ill state at time x + t. Our diagram is

_{t+dt}p^{hi}_x = _t p^{hh}_x (σ dt + o(dt)) + _t p^{hi}_x (1 - ρ dt - ν dt + o(dt)). (23)

Rearranging, we get

(_{t+dt}p^{hi}_x - _t p^{hi}_x)/dt = σ _t p^{hh}_x - (ρ + ν) _t p^{hi}_x + o(dt)/dt, (24)

and taking the limit dt → 0 gives the Kolmogorov forward equation for _t p^{hi}_x:

∂/∂t (_t p^{hi}_x) = σ _t p^{hh}_x - (ρ + ν) _t p^{hi}_x. (25)

(b) Healthy to Healthy, or _t p^{hh}_x. The Healthy state at time x + t + dt can be reached from either the Healthy or the Ill state at time x + t. Our diagram is

_{t+dt}p^{hh}_x = _t p^{hh}_x (1 - σ dt - μ dt + o(dt)) + _t p^{hi}_x (ρ dt + o(dt)). (26)

Rearranging, we get

(_{t+dt}p^{hh}_x - _t p^{hh}_x)/dt = ρ _t p^{hi}_x - (σ + μ) _t p^{hh}_x + o(dt)/dt, (27)

and taking the limit dt → 0 gives the Kolmogorov forward equation for _t p^{hh}_x:

∂/∂t (_t p^{hh}_x) = ρ _t p^{hi}_x - (σ + μ) _t p^{hh}_x. (28)

(c) Healthy to Dead, or _t p^{hd}_x. The Dead state at time x + t + dt can be reached from the Healthy, the Ill or the Dead state at time x + t. Our diagram is

_{t+dt}p^{hd}_x = _t p^{hh}_x (μ dt + o(dt)) + _t p^{hi}_x (ν dt + o(dt)) + _t p^{hd}_x · 1. (29)

The Kolmogorov forward equation for _t p^{hd}_x follows by rearranging and taking the limit dt → 0. We find

∂/∂t (_t p^{hd}_x) = μ _t p^{hh}_x + ν _t p^{hi}_x. (30)

Comment: Note that

_t p^{hd}_x = 1 - _t p^{hh}_x - _t p^{hi}_x. (31)
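The PHI system (25), (28) and (30) has no simple closed form, but it is easy to integrate numerically. The sketch below uses a forward-Euler scheme and checks the identity (31) at the end; the constant intensity values are illustrative assumptions.

```python
# Forward-Euler integration of the PHI equations (25), (28) and (30), checking
# the identity (31) at the end. The constant intensity values are illustrative
# assumptions: sigma (h->i), rho (i->h), mu (h->d), nu (i->d).
sigma, rho, mu, nu = 0.05, 0.20, 0.01, 0.04
t, steps = 20.0, 200_000
dt = t / steps

phh, phi_, phd = 1.0, 0.0, 0.0    # start in Healthy at age x
for _ in range(steps):
    dphh = rho * phi_ - (sigma + mu) * phh    # equation (28)
    dphi = sigma * phh - (rho + nu) * phi_    # equation (25)
    dphd = mu * phh + nu * phi_               # equation (30)
    phh, phi_, phd = phh + dphh * dt, phi_ + dphi * dt, phd + dphd * dt

print(phh, phi_, phd)
print(abs(phd - (1.0 - phh - phi_)))   # identity (31) holds
```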

2.6 The general Kolmogorov equations

What can we say about the relationship between the transition intensities μ^{gh}_{x+t}, g ≠ h, and the transition probabilities _t p^{gh}_x?

The general Kolmogorov equations generalise the previous two examples. We show that

∂/∂t (_t p^{gh}_x) = Σ_{j≠h} (_t p^{gj}_x μ^{jh}_{x+t} - _t p^{gh}_x μ^{hj}_{x+t}), g ≠ h. (32)

We are interested in transfers from state g at time x to state h at time x + t + dt. So at time x + t we are already in state h, or we have still to reach state h from some other state j.

Our diagram is

Hence we have that

_{t+dt}p^{gh}_x = _t p^{gh}_x · _{dt}p^{hh}_{x+t} + Σ_{j≠h} _t p^{gj}_x · _{dt}p^{jh}_{x+t} (33)
= _t p^{gh}_x (1 - Σ_{j≠h} μ^{hj}_{x+t} dt + o(dt)) + Σ_{j≠h} _t p^{gj}_x (μ^{jh}_{x+t} dt + o(dt)).

Rearranging, we get

(_{t+dt}p^{gh}_x - _t p^{gh}_x)/dt = Σ_{j≠h} (_t p^{gj}_x μ^{jh}_{x+t} - _t p^{gh}_x μ^{hj}_{x+t}) + o(dt)/dt, (34)

and the result follows on letting dt → 0.

We can also apply the same argument to finding the Kolmogorov equation for _t p̄^{gg}_x, the probability that state g is occupied continuously from time x to time x + t.

As in the previous examples, the resulting differential equation can be solved to give a closed-form expression for _t p̄^{gg}_x. The diagram

tells us that:

_{t+dt}p̄^{gg}_x = _t p̄^{gg}_x (1 - Σ_{j≠g} μ^{gj}_{x+t} dt + o(dt)). (35)

Rearranging, we get

(_{t+dt}p̄^{gg}_x - _t p̄^{gg}_x)/dt = -_t p̄^{gg}_x Σ_{j≠g} μ^{gj}_{x+t} + o(dt)/dt

on letting dt → 0,

⇒ ∂/∂t (_t p̄^{gg}_x) = -_t p̄^{gg}_x Σ_{j≠g} μ^{gj}_{x+t}. (36)

Integrating (36), we find

_t p̄^{gg}_x = exp(-∫_0^t Σ_{j≠g} μ^{gj}_{x+s} ds). (37)

Comment: This formula generalises the well-known formula

_t p_x = exp(-∫_0^t μ_{x+s} ds). (38)
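The general equations are conveniently coded with a generator matrix Q, where Q[g][h] = μ^{gh} for g ≠ h and each diagonal entry is minus the sum of the other entries in its row; the forward system is then dp_h/dt = Σ_j p_j Q[j][h]. The sketch below is a minimal illustration using the Working/Retired/Dead model with assumed intensity values; since return to Working is impossible there, _t p^{ww}_x coincides with the occupation probability (37), which gives a check.

```python
import math

# A general sketch: the forward equations written with a generator matrix Q,
# where Q[g][h] = mu^{gh} for g != h and Q[g][g] = -(sum of the other entries
# in row g). State labels and intensity values are illustrative assumptions
# (0 = working, 1 = retired, 2 = dead; retired and dead absorbing).
def forward(Q, start, t, steps=100_000):
    """Euler-integrate dp_h/dt = sum_j p_j Q[j][h] from a unit mass at start."""
    n, dt = len(Q), t / steps
    p = [1.0 if g == start else 0.0 for g in range(n)]
    for _ in range(steps):
        dp = [sum(p[j] * Q[j][h] for j in range(n)) for h in range(n)]
        p = [p[h] + dp[h] * dt for h in range(n)]
    return p

mu_wd, mu_wr = 0.02, 0.08
Q = [[-(mu_wd + mu_wr), mu_wr, mu_wd],
     [0.0, 0.0, 0.0],
     [0.0, 0.0, 0.0]]

p = forward(Q, 0, 15.0)
# With no return to state 0, t p^00 equals the occupation probability (37):
print(p[0], math.exp(-(mu_wd + mu_wr) * 15.0))
print(sum(p))   # probabilities sum to 1
```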
