
University of California, Los Angeles Department of Statistics
Statistics 100B Instructor: Nicolas Christou Functions of random variables
Functions of one random variable
a. Method of cdf:
Let X ∼ Γ(α, β). Find the distribution of Y = cX, c > 0. With the method of cdf we begin with the cdf of Y as follows.
F_Y(y) = P(Y ≤ y)
       = P(cX ≤ y)
       = P(X ≤ y/c)
       = F_X(y/c).

Now differentiate both sides with respect to y:

f_Y(y) = (1/c) f_X(y/c)
       = (1/c) · (y/c)^(α−1) e^(−y/(cβ)) / (Γ(α) β^α)
       = y^(α−1) e^(−y/(cβ)) / (Γ(α) (cβ)^α).

Therefore, Y ∼ Γ(α, cβ).
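As a quick sanity check of this result, the short simulation below (with illustrative values α = 2, β = 3, c = 5, not from the notes) compares the sample mean and variance of Y = cX with the Γ(α, cβ) moments αcβ and α(cβ)²:

```python
import random

random.seed(0)
alpha, beta, c = 2.0, 3.0, 5.0   # illustrative parameters, not from the notes
n = 200_000

# Draw X ~ Gamma(alpha, beta) (shape-scale parameterization) and set Y = cX.
ys = [c * random.gammavariate(alpha, beta) for _ in range(n)]

mean_y = sum(ys) / n
var_y = sum((y - mean_y) ** 2 for y in ys) / n

# If Y ~ Gamma(alpha, c*beta), then E(Y) = alpha*c*beta = 30
# and Var(Y) = alpha*(c*beta)^2 = 450.
print(mean_y, var_y)  # both should be close to 30 and 450 respectively
```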

b. Method of transformations
This method originates from the method of cdf. In general, to find the pdf of a function of a random variable we use the following theorem.
Let X be a continuous random variable with pdf f(x), and let Y = g(X), where g is either strictly increasing or strictly decreasing. Then the pdf of Y is given by

f_Y(y) = f_X[w(y)] |d/dy w(y)|,

where w(y) is the inverse function of g (the value of x such that g(x) = y). We can also use the following notation, defining g⁻¹(y) as the value of x such that g(x) = y:

f_Y(y) = f_X[g⁻¹(y)] |d/dy g⁻¹(y)|.
Apply the theorem to the example above:
Y = cX, so here g(X) = cX, and therefore w(y) = g⁻¹(y) = y/c.

f_Y(y) = f_X[w(y)] |d/dy w(y)|
       = f_X(y/c) · (1/c)
       = y^(α−1) e^(−y/(cβ)) / (Γ(α) (cβ)^α).
c. Method of MGF
This method uses the uniqueness theorem for moment generating functions. Let X ∼ Γ(α, β), so that M_X(t) = (1 − βt)^(−α). Find the distribution of Y = cX, c > 0. Then

M_Y(t) = E(e^(tcX)) = M_X(ct) = (1 − cβt)^(−α),

which is the mgf of the Γ(α, cβ) distribution. Therefore, Y ∼ Γ(α, cβ).
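The mgf identity can also be checked numerically: estimate E(e^(tY)) by simulation and compare it with (1 − cβt)^(−α) at a point t inside the radius of convergence, t < 1/(cβ). The parameter values below are illustrative only.

```python
import math
import random

random.seed(1)
alpha, beta, c = 2.0, 3.0, 5.0   # illustrative parameters
t = 0.02                          # must satisfy t < 1/(c*beta) = 1/15
n = 200_000

# Monte Carlo estimate of M_Y(t) = E(e^{tY}) with Y = cX, X ~ Gamma(alpha, beta).
mgf_est = sum(math.exp(t * c * random.gammavariate(alpha, beta)) for _ in range(n)) / n

mgf_theory = (1 - c * beta * t) ** (-alpha)   # mgf of Gamma(alpha, c*beta) at t
print(mgf_est, mgf_theory)  # both close to 2.04
```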

Joint probability distribution of functions of random variables
We can extend the idea of the distribution of a function of a random variable to bivariate and multivariate random vectors as follows.
Let X1,X2 be jointly continuous random variables with pdf fX1X2(x1,x2). Suppose Y1 = g1(X1,X2) and Y2 = g2(X1,X2). We want to find the joint pdf of Y1,Y2. We follow this procedure:
1. Solve the equations y1 = g1(x1, x2) and y2 = g2(x1, x2) for x1 and x2 in terms of y1 and y2 to get x1 = h1(y1, y2) and x2 = h2(y1, y2).

2. Compute the Jacobian, the determinant of the matrix of partial derivatives:

   J = | ∂g1/∂x1  ∂g1/∂x2 |
       | ∂g2/∂x1  ∂g2/∂x2 |

3. To find the joint pdf of Y1, Y2 use the following result: f_{Y1,Y2}(y1, y2) = f_{X1,X2}(x1, x2) |J|^(−1), where |J| is the absolute value of the Jacobian and x1, x2 are the expressions obtained in step (1), x1 = h1(y1, y2) and x2 = h2(y1, y2).
Example 1
Let X1 and X2 be independent exponential random variables with parameters λ1 and λ2 respectively. Find the joint probability density function of X1 + X2 and X1 − X2.
Since X1 and X2 are independent, the joint pdf of X1 and X2 is

f_{X1,X2}(x1, x2) = f_{X1}(x1) f_{X2}(x2) = λ1 e^(−λ1 x1) λ2 e^(−λ2 x2),  x1, x2 > 0.

Let U = X1 + X2 and V = X1 − X2. We solve for x1 and x2 to get x1 = (u + v)/2 and x2 = (u − v)/2.
We compute now the Jacobian:

J = | ∂u/∂x1  ∂u/∂x2 | = | 1   1 | = −2.
    | ∂v/∂x1  ∂v/∂x2 |   | 1  −1 |
Finally, we find the joint pdf of U and V:

f_{U,V}(u, v) = λ1 e^(−λ1 (u+v)/2) λ2 e^(−λ2 (u−v)/2) × (1/2)
             = (λ1 λ2 / 2) e^(−λ1 (u+v)/2 − λ2 (u−v)/2),

on the support u > |v| (since x1 = (u + v)/2 > 0 and x2 = (u − v)/2 > 0).
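A quick numerical check that the derived f_{U,V} is a genuine joint density: integrate it over (a truncated version of) the support u > |v| with a midpoint rule and confirm the total is close to 1. The rates λ1 = 1, λ2 = 2 are illustrative choices, not from the notes.

```python
import math

lam1, lam2 = 1.0, 2.0   # illustrative rate parameters

def f_uv(u, v):
    # Joint pdf of U = X1 + X2, V = X1 - X2; support u > |v| (so x1, x2 > 0).
    return (lam1 * lam2 / 2.0) * math.exp(-lam1 * (u + v) / 2.0 - lam2 * (u - v) / 2.0)

# Midpoint-rule double integral over 0 < u < 15, -u < v < u
# (the tail beyond u = 15 is negligible for these rates).
total, nu, nv, u_max = 0.0, 600, 400, 15.0
du = u_max / nu
for i in range(nu):
    u = (i + 0.5) * du
    dv = 2.0 * u / nv
    for j in range(nv):
        v = -u + (j + 0.5) * dv
        total += f_uv(u, v) * du * dv

print(total)  # close to 1
```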

Example 2
Suppose X and Y are independent random variables with X ∼ Γ(α1, β) and Y ∼ Γ(α2, β). Compute the joint pdf of U = X + Y and V = X/(X + Y), find the distribution of U and the distribution of V, and show that U, V are independent.
A random variable X is said to have a gamma distribution with parameters α,β if its probability density function is given by
f(x) = x^(α−1) e^(−x/β) / (Γ(α) β^α),  α, β > 0, x ≥ 0.
Here X ∼ Γ(α1, β) and Y ∼ Γ(α2, β), therefore,
f_X(x) = x^(α1−1) e^(−x/β) / (Γ(α1) β^(α1))  and  f_Y(y) = y^(α2−1) e^(−y/β) / (Γ(α2) β^(α2)).

Because X, Y are independent, the joint pdf of X and Y is the product of the two marginals:

f_{X,Y}(x, y) = f_X(x) f_Y(y) = x^(α1−1) y^(α2−1) e^(−(x+y)/β) / (Γ(α1) Γ(α2) β^(α1+α2)).

Now follow the steps above:
1. Solve the equations u = x + y and v = x/(x + y) for x and y. We get x = uv and y = u(1 − v).

2. Compute the Jacobian:

   J = | ∂u/∂x  ∂u/∂y | = |     1            1      | = −x/(x+y)² − y/(x+y)² = −1/(x+y) = −1/u.
       | ∂v/∂x  ∂v/∂y |   | y/(x+y)²   −x/(x+y)²  |
3. Finally, to find the joint pdf of U, V, substitute x = uv and y = u(1 − v) into the joint pdf of X, Y and multiply by |J|^(−1) = u:

f_{U,V}(u, v) = (uv)^(α1−1) [u(1−v)]^(α2−1) e^(−u/β) / (Γ(α1) Γ(α2) β^(α1+α2)) × u.

Multiply and divide by Γ(α1 + α2) and rearrange to get

f_{U,V}(u, v) = u^(α1+α2−1) e^(−u/β) / (Γ(α1+α2) β^(α1+α2)) × v^(α1−1) (1−v)^(α2−1) Γ(α1+α2) / (Γ(α1) Γ(α2)).

Therefore,

f_{U,V}(u, v) = u^(α1+α2−1) e^(−u/β) / (Γ(α1+α2) β^(α1+α2)) × v^(α1−1) (1−v)^(α2−1) / B(α1, α2),

where B(α1, α2) = ∫₀¹ v^(α1−1) (1−v)^(α2−1) dv = Γ(α1) Γ(α2) / Γ(α1+α2) is the Beta function.
We observe that:

a. U, V are independent.
b. U ∼ Γ(α1 + α2, β).
c. V ∼ Beta(α1, α2).
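These three facts can be sketched by simulation (with illustrative values α1 = 2, α2 = 3, β = 1.5): the sample mean and variance of V should match the Beta(α1, α2) moments, and the sample correlation of U and V should be near 0 (a necessary condition for independence, not a proof of it).

```python
import random

random.seed(2)
a1, a2, beta = 2.0, 3.0, 1.5   # illustrative parameters
n = 200_000

us, vs = [], []
for _ in range(n):
    x = random.gammavariate(a1, beta)
    y = random.gammavariate(a2, beta)
    us.append(x + y)            # U = X + Y
    vs.append(x / (x + y))      # V = X / (X + Y)

mean_v = sum(vs) / n
var_v = sum((v - mean_v) ** 2 for v in vs) / n

# Beta(a1, a2) moments: mean a1/(a1+a2) = 0.4,
# variance a1*a2/((a1+a2)^2 (a1+a2+1)) = 0.04.
print(mean_v, var_v)

# Sample correlation of U and V (near 0, consistent with independence).
mean_u = sum(us) / n
cov = sum((u - mean_u) * (v - mean_v) for u, v in zip(us, vs)) / n
var_u = sum((u - mean_u) ** 2 for u in us) / n
print(cov / (var_u * var_v) ** 0.5)
```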

Example 3
Suppose X1, X2, X3 are independent random variables with Xi ∼ Γ(αi, 1), i = 1, 2, 3. Let

Y1 = X1 / (X1 + X2 + X3),
Y2 = X2 / (X1 + X2 + X3),
Y3 = X1 + X2 + X3

denote 3 new random variables. Show that the joint pdf of Y1, Y2 is given by

f(y1, y2) = Γ(α1+α2+α3) / (Γ(α1) Γ(α2) Γ(α3)) · y1^(α1−1) y2^(α2−1) (1 − y1 − y2)^(α3−1).

(Random variables that have a joint pdf of this form follow the Dirichlet distribution.)
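A simulation sketch of this Dirichlet result (with illustrative shapes α1 = 1, α2 = 2, α3 = 3): the sample means of Y1 and Y2 should match the Dirichlet marginal means αi/(α1 + α2 + α3).

```python
import random

random.seed(3)
a1, a2, a3 = 1.0, 2.0, 3.0   # illustrative shape parameters
n = 200_000

y1s, y2s = [], []
for _ in range(n):
    x1 = random.gammavariate(a1, 1.0)
    x2 = random.gammavariate(a2, 1.0)
    x3 = random.gammavariate(a3, 1.0)
    s = x1 + x2 + x3
    y1s.append(x1 / s)
    y2s.append(x2 / s)

# Dirichlet marginal means: a_i / (a1 + a2 + a3), i.e. 1/6 and 2/6 here.
print(sum(y1s) / n, sum(y2s) / n)
```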
