
Lately I have become interested in Monte-Carlo simulations. I found many papers about this approach on the Internet, but for now they are too hard for me. I just want to start understanding this method with something easier.

For this reason I started to wonder how efficient it is in particular cases. So let's say that I have a game, for two players, with $n$ (very, very big) possible, equally likely, starting points. From each starting point I am able to simulate this game. We can also assume, for now, that from each starting point there is a single path to the end of this game.

Now, let's distinguish a particular end state of this game (it could be a draw or something). How many random simulations do I have to perform to know the probability of this state, when playing from a random starting point, to $2$ or $3$ or $4$ (and so on) decimal places? Is it very hard to estimate? Maybe you know where I can search to satisfy my curiosity about this topic?

xan
  • 2,053

1 Answer


What you're talking about is a direct sampling approach, where you can randomly choose your start point from across the whole sample space of start points. Results are independent and uncorrelated and in general your standard error is going to be inversely proportional to the square root of the number of trials in your simulation. (I.e., here you're really solving the classical statistical problem of estimating a population parameter by taking iid draws from the population.)
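As a minimal sketch of this direct sampling approach (the game here is a hypothetical stand-in whose true probability of reaching the distinguished end state is set to 0.3, just so the scaling is visible):

```python
import random

def simulate_game(rng):
    # Hypothetical stand-in for one deterministic playout from a randomly
    # chosen starting point: returns True if the game ends in the state of
    # interest. We pretend the true probability of that state is 0.3.
    return rng.random() < 0.3

def estimate_probability(n_trials, seed=42):
    """Direct Monte Carlo: n_trials independent, identically distributed playouts."""
    rng = random.Random(seed)
    hits = sum(simulate_game(rng) for _ in range(n_trials))
    p_hat = hits / n_trials
    # Standard error of a sample proportion: sqrt(p(1-p)/n), i.e. ~ 1/sqrt(n)
    std_err = (p_hat * (1 - p_hat) / n_trials) ** 0.5
    return p_hat, std_err

for n in (100, 10_000, 1_000_000):
    p_hat, std_err = estimate_probability(n)
    print(f"n={n:>9}: estimate={p_hat:.4f}, std. error={std_err:.4f}")
```

Running this shows the standard error shrinking by roughly a factor of 10 for every 100-fold increase in trials, which is the inverse-square-root law in action.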

But Monte Carlo methods also encompass situations where you can't directly sample, and you have to explore the sample space "step by step". In this case the Metropolis algorithm and its close cousin, the Metropolis-Hastings algorithm, are very popular approaches. Here, results that are close together in the chain tend to be correlated, so estimating the standard error, and convergence, is harder (naive approaches underestimate the error). Nevertheless there are various heuristics / methods for doing so, including repeatedly taking the average of adjacent results until the estimate of the standard error stabilises, as well as graphical methods.
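To make the correlation problem concrete, here is a toy sketch: a Metropolis chain targeting a standard normal density (an arbitrary choice, just to have a known target), together with the "average adjacent results in blocks" error estimate mentioned above. The naive standard error (block size 1) comes out too small; larger blocks reveal the true uncertainty.

```python
import math
import random

def metropolis_normal(n_samples, step=1.0, seed=0):
    """Metropolis chain whose stationary distribution is the standard normal."""
    rng = random.Random(seed)
    x, samples = 0.0, []
    for _ in range(n_samples):
        proposal = x + rng.uniform(-step, step)
        # Accept with probability min(1, pi(proposal)/pi(x))
        # = exp((x^2 - proposal^2) / 2) for the standard normal density.
        if rng.random() < math.exp((x * x - proposal * proposal) / 2):
            x = proposal
        samples.append(x)  # consecutive samples are correlated
    return samples

def blocked_stderr(samples, block_size):
    """Batch-means estimate: average within blocks, then take the standard
    error of the (much less correlated) block means."""
    means = [sum(samples[i:i + block_size]) / block_size
             for i in range(0, len(samples) - block_size + 1, block_size)]
    m = sum(means) / len(means)
    var = sum((b - m) ** 2 for b in means) / (len(means) - 1)
    return (var / len(means)) ** 0.5

samples = metropolis_normal(100_000)
for block in (1, 10, 100, 1000):
    print(f"block={block:>5}: stderr estimate = {blocked_stderr(samples, block):.4f}")
```

The block-size-1 figure is what a naive i.i.d. formula would report; as the block size grows, the estimate climbs and then levels off, and that plateau is the honest standard error.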

If you are really interested in this topic, I would highly recommend the Coursera Statistical Mechanics: Algorithms and Computations course. This is an extremely well-presented course that covers Monte Carlo simulation in the context of statistical mechanics. It has active input from course members on the discussion forums, and uses simple Python programs to explore simulation in a practical way. Although you have missed the first assignment (and likely the second one too), the first couple of weeks' videos are a very good introduction to Monte Carlo simulation, the different kinds of sampling approaches, and how to estimate errors.

TooTone
  • 6,343
  • great answer, thank you! And knowing this about standard error, can I say something about accuracy of my result after $k$ trials in my simulation? I mean, how many digits of my calculated probability are accurate, almost certainly? – xan Feb 22 '14 at 18:44
  • 2
    :). You might construct a model for your game, then construct a confidence interval for your estimate, and interpret the confidence interval in the usual way. E.g., you could say a certain proportion of games end in a win, run a large number of trials and unless the proportion of wins or losses is too extreme, you could get a confidence interval by approximating the binomial sample with a normal distribution. A much easier approach is simply to run the simulation until you see the desired convergence, usually you say $d$ places accuracy when you only see changes in the $d+2$th digit. – TooTone Feb 22 '14 at 18:57
  • 1
    @xan Sometimes you want to guarantee a maximum relative error in your estimation: if the probability p to be estimated is small, you would also like a small error, so that their proportion is fixed. Classical theory tells you that the number of simulations is inversely proportional to p. But of course you don't know p. For these type of problems, sequential methods can be useful. Take a loop for example to this Q&A – Luis Mendo Feb 23 '14 at 16:10
  • Thanks Luis, good to know! – xan Feb 24 '14 at 23:06