Let X = (X_1, \ldots, X_n)^T be an n-dimensional vector of random variables.
For all x = (x_1, \ldots, x_n) \in \mathbb{R}^n, the joint cumulative distribution function (CDF) of X satisfies
F_X(x_1, \ldots, x_n) = P(X_1 \le x_1, \ldots, X_n \le x_n),
and the marginal CDF of X_i is obtained by letting the other arguments go to infinity:
F_{X_i}(x_i) = F_X (\infty, \ldots, \infty, x_i, \infty, \ldots, \infty)

It is straightforward to generalise the previous definition to joint marginal distributions. For example, the joint marginal distribution of X_i and X_j satisfies
F_{X_i, X_j}(x_i, x_j) = F_X(\infty, \ldots, x_i, \ldots, x_j, \ldots, \infty).
If X has a probability density function (PDF), f_X(\bullet), then the joint CDF satisfies
F_X(x_1, \ldots, x_n) = \int_{-\infty}^{x_1} \ldots \int_{-\infty}^{x_n} f_X (u_1, \ldots, u_n) \, du_1 \ldots du_n
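As a quick numerical sanity check of the CDF-from-PDF identity, here is a short Python sketch. The choice of two independent Exp(1) variables (and the evaluation point) is an assumed example, not part of the notes; for that density the joint CDF is known in closed form.

```python
import numpy as np

# Sketch: check F_X(x1, x2) = double integral of f_X(u1, u2) numerically.
# Assumed example: two independent Exp(1) variables, so the closed-form
# joint CDF is (1 - e^{-x1}) * (1 - e^{-x2}).
x1, x2 = 1.5, 0.8
n = 2000

# midpoint grids over [0, x1] x [0, x2]
du1, du2 = x1 / n, x2 / n
u1 = (np.arange(n) + 0.5) * du1
u2 = (np.arange(n) + 0.5) * du2

# joint density on the grid (independence => product of the marginals)
f = np.exp(-u1)[:, None] * np.exp(-u2)[None, :]

cdf_numeric = f.sum() * du1 * du2                    # midpoint-rule integral
cdf_closed = (1 - np.exp(-x1)) * (1 - np.exp(-x2))   # closed-form joint CDF
```

The midpoint rule makes the numerical integral agree with the closed form to well within 1e-6 on this grid.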

If X_1 = (X_1, \ldots, X_k)^T and X_2 = (X_{k+1}, \ldots, X_n)^T is a partition of X, then the conditional CDF of X_2 given X_1 satisfies
F_{X_2|X_1} (x_2|x_1) = P(X_2 \le x_2 | X_1 = x_1).
If X has a PDF, f_X(\bullet), then the conditional PDF of X_2 given X_1 satisfies
f_{X_2 | X_1} (x_2 | x_1) = \frac{f_X (x)}{f_{X_1}(x_1)} = \frac{f_{X_1 | X_2}(x_1 | x_2) \, f_{X_2}(x_2)}{f_{X_1}(x_1)}

and the conditional CDF is then given by
F_{X_2 | X_1}(x_2 | x_1) = \int_{- \infty}^{x_{k+1}} \ldots \int_{- \infty}^{x_n} \frac{f_X (x_1, \ldots , x_k, u_{k+1}, \ldots , u_n)}{f_{X_1}(x_1)} \, du_{k+1} \ldots du_n
where f_{X_1}(\bullet) is the joint marginal PDF of X_1 which is given by
f_{X_1} (x_1, \ldots , x_k) = \int_{- \infty}^{\infty} \ldots \int_{- \infty}^{\infty} f_X (x_1, \ldots , x_k, u_{k+1}, \ldots , u_n) \, du_{k+1} \ldots du_n
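The ratio definition of the conditional PDF can be checked numerically. The Python sketch below uses an assumed example, a bivariate normal with unit variances and correlation rho, for which the conditional distribution of X_2 given X_1 = x_1 is known in closed form to be N(rho*x_1, 1 - rho^2).

```python
import numpy as np

# Sketch: verify f_{X2|X1}(x2|x1) = f_X(x1, x2) / f_{X1}(x1) on an assumed
# bivariate normal example (unit variances, correlation rho).
rho = 0.6
x1, x2 = 0.5, -0.3

def normal_pdf(u, mean=0.0, var=1.0):
    return np.exp(-0.5 * (u - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

# joint bivariate normal density with correlation rho
det = 1 - rho ** 2
quad = (x1 ** 2 - 2 * rho * x1 * x2 + x2 ** 2) / det
f_joint = np.exp(-0.5 * quad) / (2 * np.pi * np.sqrt(det))

f_cond = f_joint / normal_pdf(x1)                          # ratio definition
f_closed = normal_pdf(x2, mean=rho * x1, var=1 - rho ** 2) # known conditional
```

Here `f_cond` and `f_closed` agree to machine precision, which is exactly the algebra behind the ratio formula above.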

We next look at independence, which is something H2 Mathematics students can easily relate to.

Here, we say the collection X is independent if the joint CDF factors into the product of the marginal CDFs, so that
F_X (x_1, \ldots , x_n) = F_{X_1} (x_1) \ldots F_{X_n}(x_n)

If X has a PDF, f_X(\bullet), then independence implies that the PDF also factorises into the product of the marginal PDFs, so that
f_X(x) = f_{X_1} (x_1) \ldots f_{X_n}(x_n).

Using the above results, we have that if X_1 and X_2 are independent then
f_{X_2|X_1}(x_2 | x_1) = \frac{f_X (x)}{f_{X_1}(x_1)} = \frac{f_{X_1}(x_1) f_{X_2}(x_2)}{f_{X_1}(x_1)} = f_{X_2}(x_2)
The above result tells us that having information about X_1 tells us nothing about X_2.
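We can see this "conditioning changes nothing" property empirically. In the Python sketch below (the particular marginals and the conditioning event are assumed for illustration), the distribution of X_2 among the samples where X_1 > 0 matches its overall distribution up to Monte Carlo error.

```python
import numpy as np

# Sketch: if X1 and X2 are independent, conditioning on X1 should not change
# the distribution of X2. Compare the mean of X2 over all samples with its
# mean over the sub-sample where X1 > 0 (an assumed conditioning event).
rng = np.random.default_rng(0)
n = 1_000_000
x1 = rng.standard_normal(n)
x2 = rng.exponential(scale=2.0, size=n)   # assumed example marginal for X2

overall = x2.mean()
conditional = x2[x1 > 0].mean()           # mean of X2 given X1 > 0
```

With a million samples the two means differ only at the third decimal place, consistent with X_1 carrying no information about X_2.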

Let’s continue by looking further at the implications of independence.

Let X and Y be independent random variables. Then for any events A and B, P(X \in A, Y \in B) = P(X \in A) P(Y \in B)

We can check this with
P(X \in A, Y \in B)
= \mathbb{E}[1_{X \in A} 1_{Y \in B}]
= \mathbb{E}[1_{X \in A}] \mathbb{E}[1_{Y \in B}]
= P(X \in A) P(Y \in B),
where the second equality uses the independence of X and Y.

In general, if X_1, \ldots, X_n are independent random variables and f_1, \ldots, f_n are functions, then
\mathbb{E}[f_1 (X_1) f_2(X_2) \ldots f_n(X_n)] = \mathbb{E}[f_1(X_1)] \mathbb{E}[f_2(X_2)] \ldots \mathbb{E}[f_n(X_n)]
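This factorisation of expectations is easy to check by simulation. In the Python sketch below, the distributions of X and Y and the functions f and g are all assumed choices for illustration.

```python
import numpy as np

# Sketch: Monte Carlo check that E[f(X) g(Y)] = E[f(X)] E[g(Y)] when X and Y
# are independent. The marginals and the functions f, g are assumed examples.
rng = np.random.default_rng(42)
n = 2_000_000
x = rng.standard_normal(n)          # X ~ N(0, 1)
y = rng.uniform(0.0, 1.0, size=n)   # Y ~ U(0, 1), independent of X

f = np.cos(x)   # f(X)
g = y ** 2      # g(Y)

lhs = (f * g).mean()            # estimate of E[f(X) g(Y)]
rhs = f.mean() * g.mean()       # estimate of E[f(X)] E[g(Y)]
```

With two million samples the two estimates agree to roughly three decimal places; their common value is close to the exact answer e^{-1/2} * 1/3.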

Moreover, random variables can also be conditionally independent. For example, we say that X and Y are conditionally independent given Z if
\mathbb{E}[f(X)g(Y)|Z] = \mathbb{E}[f(X)|Z]\mathbb{E}[g(Y)|Z].
The above will be used in the Gaussian copula model for the pricing of collateralised debt obligations (CDOs).

Source: www.turingfinance.com

We let D_i be the event that the i^{th} bond in a portfolio defaults. It is not reasonable to assume that the D_i’s are independent, but it is often reasonable to assume that they are conditionally independent given a common factor Z, so that P(D_1, \ldots, D_n | Z) = P(D_1|Z) \ldots P(D_n|Z) is often easy to compute.
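The sketch below simulates this structure in Python with a standard one-factor Gaussian copula setup; the factor loading, default threshold and number of bonds are all assumed parameters, not taken from the notes. Each latent variable is X_i = sqrt(rho) Z + sqrt(1 - rho) eps_i, and bond i defaults when X_i falls below the threshold, so the D_i are conditionally independent given Z but dependent unconditionally.

```python
import math
import numpy as np

# Sketch of a one-factor Gaussian copula default model (assumed parameters).
rng = np.random.default_rng(7)
rho, threshold, n_sims = 0.3, -1.0, 1_000_000

z = rng.standard_normal(n_sims)               # common factor Z
eps = rng.standard_normal((2, n_sims))        # idiosyncratic shocks, 2 bonds
x = math.sqrt(rho) * z + math.sqrt(1 - rho) * eps

d1 = x[0] < threshold                         # default event D_1
d2 = x[1] < threshold                         # default event D_2

# Unconditionally, the common factor induces positive dependence:
p1, p2 = d1.mean(), d2.mean()
joint = (d1 & d2).mean()                      # P(D1, D2) > P(D1) P(D2)

# Conditionally on Z (here: a thin slice of Z values around 1.0),
# the joint default frequency is close to the product of the marginals:
mask = np.abs(z - 1.0) < 0.05
c1, c2 = d1[mask].mean(), d2[mask].mean()
cjoint = (d1[mask] & d2[mask]).mean()
```

The simulation shows both halves of the story: `joint` clearly exceeds `p1 * p2`, while `cjoint` is close to `c1 * c2`, which is exactly the conditional-independence factorisation used above.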

Lastly, we consider the mean and covariance.
The mean vector of X is given by \mathbb{E}[X]:=(\mathbb{E}[X_1] \ldots \mathbb{E}[X_n])^T
and the covariance matrix of X satisfies
\Sigma := \mathrm{Cov}(X) := \mathbb{E}[(X- \mathbb{E}[X])(X - \mathbb{E}[X])^T], so the (i,j)^{th} element of \Sigma is simply the covariance of X_i and X_j.
The covariance matrix is symmetric, its diagonal elements satisfy \Sigma_{i,i} \ge 0, and it is positive semi-definite, so that x^T \Sigma x \ge 0 for all x \in \mathbb{R}^n.
Then the correlation matrix \rho(X) has (i,j)^{th} element \rho_{ij} := \mathrm{Corr}(X_i, X_j). It is also symmetric, positive semi-definite, and has 1’s along the diagonal.

For any matrix A \in \mathbb{R}^{k \times n} and vector a \in \mathbb{R}^k,
\mathbb{E}[AX +a] = A \mathbb{E} [X] + a (distributes linearly)
\mathrm{Cov}(AX +a) = A \mathrm{Cov}(X) A^T
Taking A = (a \; b) in the previous identity gives \mathrm{Var} (aX + bY) = a^2 \mathrm{Var}(X) + b^2 \mathrm{Var}(Y) + 2ab \, \mathrm{Cov}(X, Y) for scalars a, b.
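These identities can be confirmed with NumPy. In the sketch below, the mixing matrix used to build correlated samples and the particular A and a are assumed examples; since the sample covariance is bilinear, the identity holds exactly (up to floating point) even on finite samples.

```python
import numpy as np

# Sketch: empirical check of Cov(AX + a) = A Cov(X) A^T, plus symmetry and
# positive semi-definiteness of the covariance matrix.
rng = np.random.default_rng(1)
n_samples = 10_000

# build correlated 3-dimensional samples via an assumed mixing matrix
mix = np.array([[1.0, 0.0, 0.0],
                [0.5, 1.0, 0.0],
                [0.2, 0.3, 1.0]])
x = rng.standard_normal((n_samples, 3)) @ mix.T

A = np.array([[1.0, -2.0, 0.5],
              [0.0,  1.0, 1.0]])           # assumed A in R^{2x3}
a = np.array([3.0, -1.0])                  # assumed shift a in R^2

cov_x = np.cov(x, rowvar=False)            # sample covariance of X
cov_ax = np.cov(x @ A.T + a, rowvar=False) # sample covariance of AX + a

identity_holds = np.allclose(cov_ax, A @ cov_x @ A.T, atol=1e-8)
is_symmetric = np.allclose(cov_x, cov_x.T)
eigenvalues = np.linalg.eigvalsh(cov_x)    # all >= 0 up to rounding
```

Note that the shift a drops out entirely, mirroring the fact that covariance is unaffected by constants.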
Recall that if X and Y are independent then \mathrm{Cov}(X, Y) = 0; however, the converse is not true in general.
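The classic counterexample is Y = X^2 with X standard normal: Cov(X, Y) = E[X^3] = 0, yet Y is completely determined by X. A short Python sketch (the sample size is an assumed choice) makes this concrete.

```python
import numpy as np

# Sketch: zero covariance does not imply independence.
# X ~ N(0, 1) and Y = X^2 have Cov(X, Y) = E[X^3] = 0, but Y depends on X.
rng = np.random.default_rng(3)
x = rng.standard_normal(2_000_000)
y = x ** 2

cov_xy = np.mean(x * y) - x.mean() * y.mean()   # ~ 0 since E[X^3] = 0

# Yet Y is clearly not independent of X: conditioning on |X| > 1 more than
# doubles the mean of Y (E[X^2 | |X| > 1] is about 2.5, versus E[X^2] = 1).
mean_y = y.mean()
mean_y_given_large_x = y[np.abs(x) > 1].mean()
```

So a zero entry in the covariance (or correlation) matrix rules out linear dependence only, not dependence in general.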
