All A-level students should know that when we sample from a non-normal distribution with a sufficiently large sample size n, the distribution of the sample mean can be approximated by a normal distribution.

If X \sim N(\mu ,\; \sigma^{2}), then
\bar{X} \sim N\!\left(\mu ,\; \frac{\sigma^{2}}{n}\right),
and the Central Limit Theorem says that the second statement still holds approximately, for large n, even when X itself is not normal, provided X has mean \mu and variance \sigma^{2}.
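For instance (the numbers here are my own illustration, not from the post), suppose X has mean \mu = 50 and variance \sigma^{2} = 36, and we take samples of size n = 100. Then

\bar{X} \approx N\!\left(50 ,\; \frac{36}{100}\right) = N(50 ,\; 0.36),

so the sample mean has standard deviation only 0.6, much smaller than the standard deviation of 6 for a single observation.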

So what is happening here? Why is such an approximation even legitimate? It sounds so loose.

This link contains a simple simulation that lets students observe what happens as n gets larger. If you play with it a bit, you will see that the distribution of the sample mean tends towards that of a normal distribution. This makes some intuitive sense: as we take larger and larger samples, the sample mean becomes a more and more accurate estimate of \mu, and its distribution becomes increasingly concentrated and bell-shaped.
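The linked simulation is not reproduced here, but a rough sketch of the same idea in Python might look like this (the parent distribution, sample sizes and seed below are my own choices, purely for illustration):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(seed=1)

# A deliberately non-normal parent distribution: Exponential with mean 2,
# so mu = 2 and sigma^2 = 4. These values are purely illustrative.
mu, sigma = 2.0, 2.0
trials = 10_000                      # simulated samples for each n
sample_sizes = [1, 5, 30, 100]       # watch the shape change as n grows

fig, axes = plt.subplots(1, len(sample_sizes), figsize=(14, 3))
for ax, n in zip(axes, sample_sizes):
    # Draw `trials` samples of size n and record each sample mean
    means = rng.exponential(scale=mu, size=(trials, n)).mean(axis=1)
    ax.hist(means, bins=50, density=True, alpha=0.7)

    # Overlay the normal density N(mu, sigma^2 / n) suggested by the CLT
    xs = np.linspace(means.min(), means.max(), 200)
    var = sigma**2 / n
    ax.plot(xs, np.exp(-(xs - mu)**2 / (2 * var)) / np.sqrt(2 * np.pi * var))
    ax.set_title(f"n = {n}")
    ax.set_xlabel("sample mean")

plt.tight_layout()
plt.show()
```

Even though the parent distribution is heavily skewed, the histogram of sample means should already look quite close to the overlaid normal curve by around n = 30, which is the same behaviour the linked simulation demonstrates.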
