
I understand that in simulation, inverse transform sampling means that I first generate a uniformly distributed value $u$ and then compute $F^{-1}(u)$ to obtain a value distributed according to $F$. If I need a sample of size 1000, I repeat this process 1000 times.

Instead of generating uniformly distributed random values over 1000 iterations, I take 1000 equally spaced values between 0 and 1, i.e., $0, 0.001, 0.002, \ldots, 0.999$, and use one of these values in each iteration. This way I get the most "representative" uniform values, and the histogram of the corresponding values $F^{-1}(u)$ matches the theoretical PDF almost perfectly. Is this method valid? If not, what are the issues? Thanks.
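For concreteness, here is a small sketch of the two approaches, using the standard exponential distribution (inverse CDF $F^{-1}(u) = -\ln(1-u)$) purely as an example. Note that I shift the grid to bin midpoints so that $u = 1$ is never used, since $F^{-1}(1)$ would be infinite here:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Inverse CDF of Exponential(1): F^{-1}(u) = -ln(1 - u).
# (Exponential(1) is just an example distribution.)
def inv_cdf(u):
    return -np.log1p(-u)

# Standard inverse transform sampling: random uniforms.
u_random = rng.random(n)
sample_random = inv_cdf(u_random)

# Proposed approach: equally spaced points in (0, 1).
# Midpoints of n equal bins are used, so the endpoints 0 and 1
# are avoided (F^{-1}(1) is infinite for this distribution).
u_grid = (np.arange(n) + 0.5) / n
sample_grid = inv_cdf(u_grid)

# Both "samples" have mean close to the true mean E[X] = 1,
# but the grid-based one is deterministic, not random.
print(sample_random.mean(), sample_grid.mean())
```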

Justin
    This amounts to a numeric integration but not to a Monte Carlo simulation. Quasi-random numbers such as Sobol sequences go that route. – Kurt G. Mar 09 '22 at 09:21
  • @Kurt G. Thanks. Are there any issues with using this approach, compared with randomly generated values? – Justin Mar 09 '22 at 21:38
  • If the numeric problem you want to solve is just to calculate the expectation of a function of a one-dimensional rv, $\mathbb E[f(X)]$, there are no issues at all. In higher dimensions it can become complicated very quickly. The pros and cons of pseudo vs quasi random numbers are well documented. You will find out more by googling than I could ever tell you off the top of my head. – Kurt G. Mar 10 '22 at 07:24
  • Thanks @Kurt G. – Justin Mar 11 '22 at 15:21
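Kurt G.'s point, that the grid approach amounts to numeric integration of $\mathbb E[f(X)] = \int_0^1 f(F^{-1}(u))\,du$ and that Sobol sequences are the quasi-random analogue, can be sketched as follows. This is a hedged illustration: it again assumes Exponential(1) with $f(x) = x^2$ (true value $\mathbb E[X^2] = 2$) and uses SciPy's `scipy.stats.qmc` module for the Sobol sequence:

```python
import numpy as np
from scipy.stats import qmc  # quasi-random (low-discrepancy) sequences

# Estimate E[f(X)] for X ~ Exponential(1) with f(x) = x**2; true value is 2.
inv_cdf = lambda u: -np.log1p(-u)
f = lambda x: x ** 2

# Equally spaced midpoints: exactly the midpoint rule for the integral
# of f(F^{-1}(u)) over (0, 1).
n = 1024
u_grid = (np.arange(n) + 0.5) / n
est_grid = f(inv_cdf(u_grid)).mean()

# Sobol sequence: quasi-Monte Carlo, which scales to higher dimensions
# where a full regular grid becomes infeasible.
sobol = qmc.Sobol(d=1, scramble=True, seed=0)
u_sobol = sobol.random_base2(m=10).ravel()  # 2**10 = 1024 points in (0, 1)
est_sobol = f(inv_cdf(u_sobol)).mean()

print(est_grid, est_sobol)  # both close to 2
```

In one dimension the midpoint grid and the Sobol sequence behave similarly; the trade-offs the comment alludes to appear when the integrand depends on many uniform inputs at once.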

0 Answers