Multivariate Distributions

Let X = (X_1, \ldots, X_n)^T be an n-dimensional vector of random variables.
For all x = (x_1, \ldots, x_n) \in \mathbb{R}^n, the joint cumulative distribution function of X is defined by

F_X(x_1, \ldots, x_n) = P(X_1 \le x_1, \ldots, X_n \le x_n).

The marginal CDF of X_i is recovered by letting every other argument tend to infinity:

F_{X_i}(x_i) = F_X(\infty, \ldots, \infty, x_i, \infty, \ldots, \infty).
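To make the marginalisation concrete, here is a minimal numerical sketch (not from the original post, and assuming SciPy is available) using a bivariate normal with an arbitrary illustrative correlation of 0.5: the marginal CDF of X_1 is obtained by pushing the other argument of the joint CDF towards infinity.

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

# Bivariate normal with correlation 0.5 (illustrative parameters).
rho = 0.5
joint = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])

x1 = 0.3
# Marginal CDF of X_1: send the other argument to (numerical) infinity.
marginal_via_joint = joint.cdf([x1, 50.0])  # 50 standard deviations ~ infinity
marginal_direct = norm.cdf(x1)              # X_1 ~ N(0, 1) marginally

print(marginal_via_joint, marginal_direct)  # both ~ 0.6179
```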

It is straightforward to generalise the previous definition to joint marginal distributions. For example, the joint marginal distribution of X_i and X_j satisfies

F_{X_i, X_j}(x_i, x_j) = F_X(\infty, \ldots, \infty, x_i, \infty, \ldots, \infty, x_j, \infty, \ldots, \infty).

If X has a joint probability density function f_X(\bullet), then the joint CDF is recovered by integrating the density:

F_X(x_1, \ldots, x_n) = \int_{-\infty}^{x_1} \cdots \int_{-\infty}^{x_n} f_X(u_1, \ldots, u_n) \, du_1 \cdots du_n.
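Similarly, here is a quick numerical check (again a sketch using the same illustrative bivariate normal, not from the post) that integrating the joint density recovers the joint CDF:

```python
from scipy.stats import multivariate_normal
from scipy.integrate import dblquad

rho = 0.5
joint = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])

x1, x2 = 0.3, -0.2

# F_X(x1, x2) by integrating the joint density over (-inf, x1] x (-inf, x2];
# -10 standard deviations stands in for -infinity.
# dblquad integrates its first argument (u2) innermost.
integral, _ = dblquad(
    lambda u2, u1: joint.pdf([u1, u2]),
    -10.0, x1,                        # outer limits for u1
    lambda u1: -10.0, lambda u1: x2,  # inner limits for u2
)

print(integral, joint.cdf([x1, x2]))  # the two values agree to quadrature accuracy
```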

If X_1 = (X_1, \ldots, X_k)^T and X_2 = (X_{k+1}, \ldots, X_n)^T is a partition of X, then the conditional CDF of X_2 given X_1 satisfies

F_{X_2|X_1}(x_2|x_1) = P(X_2 \le x_2 | X_1 = x_1).

If X has a PDF f_X(\bullet), then the conditional PDF of X_2 given X_1 satisfies

f_{X_2|X_1}(x_2|x_1) = \frac{f_X(x)}{f_{X_1}(x_1)} = \frac{f_{X_1|X_2}(x_1|x_2) f_{X_2}(x_2)}{f_{X_1}(x_1)}

and the conditional CDF is then given by

F_{X_2|X_1}(x_2|x_1) = \int_{-\infty}^{x_{k+1}} \cdots \int_{-\infty}^{x_n} \frac{f_X(x_1, \ldots, x_k, u_{k+1}, \ldots, u_n)}{f_{X_1}(x_1)} \, du_{k+1} \cdots du_n

where f_{X_1}(\bullet) is the joint marginal PDF of X_1, given by

f_{X_1}(x_1, \ldots, x_k) = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} f_X(x_1, \ldots, x_k, u_{k+1}, \ldots, u_n) \, du_{k+1} \cdots du_n.

We next look at independence, which is something H2 Mathematics students can easily relate to. We say the collection X is independent if the joint CDF factorises into the product of the marginal CDFs, so that

F_X(x_1, \ldots, x_n) = F_{X_1}(x_1) \cdots F_{X_n}(x_n).

If X has a PDF f_X(\bullet), then independence implies that the PDF also factorises into the product of the marginal PDFs, so that

f_X(x) = f_{X_1}(x_1) \cdots f_{X_n}(x_n).

Using the above results, if X_1 and X_2 are independent then

f_{X_2|X_1}(x_2|x_1) = \frac{f_X(x)}{f_{X_1}(x_1)} = \frac{f_{X_1}(x_1) f_{X_2}(x_2)}{f_{X_1}(x_1)} = f_{X_2}(x_2).

This tells us that having information about X_1 tells us nothing about X_2.

Let's look further at the implications of independence. Let X and Y be independent random variables. Then for any events A and B,

P(X \in A, Y \in B) = P(X \in A) P(Y \in B).

We can check this:

P(X \in A, Y \in B) = \mathbb{E}[1_{X \in A} 1_{Y \in B}] = \mathbb{E}[1_{X \in A}] \mathbb{E}[1_{Y \in B}] = P(X \in A) P(Y \in B).

In general, if X_1, \ldots, X_n are independent random variables, then

\mathbb{E}[f_1(X_1) f_2(X_2) \cdots f_n(X_n)] = \mathbb{E}[f_1(X_1)] \mathbb{E}[f_2(X_2)] \cdots \mathbb{E}[f_n(X_n)].

Moreover, random variables can also be conditionally independent. For example, we say that X and Y are conditionally independent given Z if

\mathbb{E}[f(X) g(Y) | Z] = \mathbb{E}[f(X)|Z] \, \mathbb{E}[g(Y)|Z].

This will be used in the Gaussian copula model for pricing collateralised debt obligations (CDOs).

[Figure: density of a multivariate Gaussian. Source: www.turingfinance.com]

Let D_i be the event that the i^{th} bond in a portfolio defaults. It is not reasonable to assume that the D_i's are independent, but they can be modelled as conditionally independent given a common factor Z, so that

P(D_1, \ldots, D_n | Z) = P(D_1|Z) \cdots P(D_n|Z)

is often easy to compute.

Lastly, we consider the mean and covariance. The mean vector of X is given by

\mathbb{E}[X] := (\mathbb{E}[X_1], \ldots, \mathbb{E}[X_n])^T

and the covariance matrix of X satisfies

\Sigma := \mathrm{Cov}(X) := \mathbb{E}[(X - \mathbb{E}[X])(X - \mathbb{E}[X])^T],

so the (i,j)^{th} element of \Sigma is simply the covariance of X_i and X_j.
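To illustrate the conditional-independence structure behind the copula model, here is a minimal Monte Carlo sketch of the standard one-factor Gaussian copula (my own illustration, not from the post; the parameters rho = 0.3, p = 0.05 and z = -2.0 are arbitrary choices). Each bond has a latent variable X_i = \sqrt{\rho} Z + \sqrt{1-\rho} \epsilon_i and defaults when X_i falls below a threshold; the defaults are dependent unconditionally but independent given the common factor Z.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# One-factor Gaussian copula (illustrative parameters):
# latent variable X_i = sqrt(rho)*Z + sqrt(1 - rho)*eps_i,
# bond i defaults when X_i < norm.ppf(p), so the unconditional default prob is p.
rho, p, n_paths = 0.3, 0.05, 2_000_000
c = norm.ppf(p)

# Condition on the common factor taking the value z. Given Z = z, the X_i are
# independent N(sqrt(rho)*z, 1 - rho), so P(D_i | Z = z) has a closed form.
z = -2.0
p_cond = norm.cdf((c - np.sqrt(rho) * z) / np.sqrt(1 - rho))

# Monte Carlo check of P(D_1, D_2 | Z = z) = P(D_1 | Z = z) P(D_2 | Z = z).
eps = rng.standard_normal((n_paths, 2))
X = np.sqrt(rho) * z + np.sqrt(1 - rho) * eps
D = X < c
print((D[:, 0] & D[:, 1]).mean(), p_cond**2)  # both ~ 0.065

# Unconditionally, the defaults are positively dependent, not independent:
Z = rng.standard_normal(n_paths)
Xu = np.sqrt(rho) * Z[:, None] + np.sqrt(1 - rho) * rng.standard_normal((n_paths, 2))
Du = Xu < c
print((Du[:, 0] & Du[:, 1]).mean(), p**2)  # joint default prob clearly exceeds p^2
```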
The covariance matrix is symmetric, its diagonal elements satisfy \Sigma_{i,i} \ge 0, and it is positive semi-definite, so that

x^T \Sigma x \ge 0 \text{ for all } x \in \mathbb{R}^n.

The correlation matrix \rho(X) has (i,j)^{th} element \rho_{ij} := \mathrm{Corr}(X_i, X_j). It is also symmetric, positive semi-definite, and has 1's along the diagonal.

For any matrix A \in \mathbb{R}^{k \times n} and vector a \in \mathbb{R}^k, expectation distributes linearly and the covariance transforms as

\mathbb{E}[AX + a] = A \mathbb{E}[X] + a

\mathrm{Cov}(AX + a) = A \, \mathrm{Cov}(X) \, A^T.

In particular, this gives

\mathrm{Var}(aX + bY) = a^2 \mathrm{Var}(X) + b^2 \mathrm{Var}(Y) + 2ab \, \mathrm{Cov}(X, Y).

Recall that if X and Y are independent, then \mathrm{Cov}(X, Y) = 0; the converse is not true in general.
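As a quick sanity check of the transformation rules (a sketch with arbitrary illustrative choices of \mu, \Sigma, A and a, not from the post):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative 3-dimensional X with known mean mu and covariance Sigma.
mu = np.array([1.0, -2.0, 0.5])
Sigma = np.array([[2.0, 0.6, 0.3],
                  [0.6, 1.0, 0.2],
                  [0.3, 0.2, 1.5]])
X = rng.multivariate_normal(mu, Sigma, size=500_000)  # rows are samples

A = np.array([[1.0, 2.0, 0.0],
              [0.0, -1.0, 3.0]])  # A in R^{2 x 3}
a = np.array([0.5, -1.0])
Y = X @ A.T + a                   # Y = AX + a, applied sample by sample

# E[AX + a] = A E[X] + a
print(Y.mean(axis=0), A @ mu + a)

# Cov(AX + a) = A Cov(X) A^T
print(np.cov(Y, rowvar=False))
print(A @ Sigma @ A.T)
```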
