Monte Carlo methods
Monte Carlo means using random numbers in scientific computing. More precisely, it means using random numbers as a tool to compute something that is not random. For example, let $X$ be a random variable and write its expected value as $A = E[X]$. If we can generate $X_1, \ldots, X_n$, $n$ independent random variables with the same distribution, then we can make the approximation
\[
A \approx A_n = \frac{1}{n} \sum_{k=1}^{n} X_k .
\]
The strong law of large numbers states that $A_n \to A$ as $n \to \infty$. The $X_k$ and $A_n$ are random and (depending on the seed, see Section 9.2) could be different each time we run the program. Still, the target number, $A$, is not random.
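To make the estimator concrete, here is a minimal sketch in Python of computing $A_n$ as a sample mean; the language choice and the exponential distribution (mean 2) are purely illustrative assumptions, not taken from the text.
\begin{verbatim}
import numpy as np

# Minimal sketch: estimate A = E[X] by the sample mean A_n.
# The exponential distribution (mean 2) is only an illustrative choice.
rng = np.random.default_rng(seed=0)   # fixing the seed makes the run repeatable

n = 100_000
X = rng.exponential(scale=2.0, size=n)   # X_1, ..., X_n, independent, same distribution
A_n = X.mean()                           # A_n = (1/n) * sum_{k=1}^n X_k

print(f"estimate A_n = {A_n:.4f}   (exact A = 2.0)")
\end{verbatim}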
We emphasize this point by distinguishing between Monte Carlo and simulation. Simulation means producing random variables with a certain distribution just to look at them. For example, we might have a model of a random process that produces clouds. We could simulate the model to generate cloud pictures, either out of scientific interest or for computer graphics. As soon as we start asking quantitative questions about, say, the average size of a cloud or the probability that it will rain, we move from pure simulation to Monte Carlo.
The reason for this distinction is that there may be other ways to define $A$ that make it easier to estimate. This process is called variance reduction, since most of the error in $A_n$ is statistical. Reducing the variance of $A_n$ reduces the statistical error.
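The text does not single out a particular variance-reduction technique here; as one hedged illustration, the sketch below uses antithetic sampling for a target of the form $A = E[g(U)]$ with $U$ uniform on $(0,1)$, pairing each draw $U$ with $1-U$ so that the two halves of the average partially cancel each other's statistical error. The function $g$ and the uniform setup are assumptions made for the example only.
\begin{verbatim}
import numpy as np

# Hypothetical illustration of variance reduction (antithetic sampling).
# Target: A = E[g(U)] with U ~ Uniform(0,1) and g(u) = exp(u); exact A = e - 1.
rng = np.random.default_rng(seed=1)
g = np.exp

n = 50_000
U = rng.uniform(size=n)

plain = g(U).mean()                              # ordinary sample mean
antithetic = (0.5 * (g(U) + g(1.0 - U))).mean()  # same expected value, smaller variance

print(f"plain estimate      = {plain:.5f}")
print(f"antithetic estimate = {antithetic:.5f}")
print(f"exact value         = {np.e - 1:.5f}")
\end{verbatim}
Antithetic sampling helps in this example because $g$ is monotone, so $g(U)$ and $g(1-U)$ are negatively correlated.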
We often have a choice between Monte Carlo and deterministic methods.
For example, if $X$ is a one-dimensional random variable with probability density $f(x)$, we can estimate $E[X]$ using a panel integration method; see Section 3.4. This probably would be more accurate than Monte Carlo because the Monte Carlo error is roughly proportional to $1/\sqrt{n}$ for large $n$, which gives it order of accuracy roughly $1/2$. The worst panel method given in Section 3.4 is first order accurate.
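As a rough illustration of this comparison (the density $f(x) = 3x^2$ on $[0,1]$ and the midpoint panel rule are examples chosen here, not the ones discussed in Section 3.4), the sketch below estimates $E[X] = \int_0^1 x f(x)\,dx = 3/4$ both ways.
\begin{verbatim}
import numpy as np

# Example density: f(x) = 3x^2 on [0,1], so E[X] = 3/4 exactly.

# Monte Carlo: sample X by inverting the CDF F(x) = x^3, i.e. X = U^(1/3).
rng = np.random.default_rng(seed=2)
n = 100_000
X = rng.uniform(size=n) ** (1.0 / 3.0)
mc_estimate = X.mean()                 # statistical error roughly proportional to 1/sqrt(n)

# Deterministic panel (midpoint) rule with m panels on [0,1].
m = 1_000
h = 1.0 / m
mid = (np.arange(m) + 0.5) * h
panel_estimate = np.sum(mid * 3.0 * mid**2) * h   # midpoint rule error is O(h^2)

print(f"Monte Carlo estimate      = {mc_estimate:.6f}")
print(f"panel (midpoint) estimate = {panel_estimate:.6f}")
print("exact value               = 0.750000")
\end{verbatim}
With these choices the panel estimate typically agrees with the exact value to several more digits than the Monte Carlo one, consistent with the order-of-accuracy comparison above.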