By: Omid Saremi, Ph.D.

Not everything WWII left behind was destruction and despair; it also gave birth to the first electronic programmable computer. Nicknamed the “Giant Brain”, the Electronic Numerical Integrator and Computer (ENIAC) was built at the University of Pennsylvania under the supervision of John Mauchly, an American physicist, and J. Presper Eckert, an electrical engineer. The year was 1946.

Before the advent of the first electronic computer, most statistical techniques that relied on generating large sets of random numbers were considered intractable. In light of those early developments in computing, Stan Ulam, a physicist intrigued by “random phenomena” and an avid poker player, realized that the question of the tractability of such techniques could be revisited. Harnessing the new computing capacity, he argued, one should be able to estimate the distribution of quantities that depend on (often many) random input parameters. The famed mathematician John von Neumann quickly recognized the importance of Ulam’s line of thought and helped develop and popularize the core idea. Nicholas Metropolis named the project “Monte Carlo”; he was inspired by Ulam’s uncle, who gambled in Monte Carlo, Monaco (random numbers, gambling? See the connection?)

The following scenario, ubiquitous in data science (from risk assessment and portfolio management to scenario-based forecasting), helps us understand Monte Carlo. You have a deterministic rule that calculates an output from an input. If the input variables are themselves random, each drawn from a known distribution, the output will be a random variable too. What is the distribution of the output? Except for simple output functions, this problem is usually analytically intractable: no useful closed-form mathematical formula describing the output exists.
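This setup is easy to make concrete. The sketch below (a minimal illustration, with a hypothetical rule `x*y + x**2` and arbitrarily chosen input distributions) draws the inputs from their known distributions, pushes each draw through the deterministic rule, and collects the outputs; the resulting sample is an empirical estimate of the output distribution:

```python
import random
import statistics

def rule(x, y):
    # a hypothetical deterministic rule; any function of the inputs would do
    return x * y + x ** 2

def simulate(n_samples=100_000, seed=42):
    rng = random.Random(seed)
    # draw each input from its known distribution, then apply the rule
    return [rule(rng.gauss(1.0, 0.2), rng.uniform(0.5, 1.5))
            for _ in range(n_samples)]

outputs = simulate()
# the sample of outputs is the "experimental" output distribution:
# summarize it, histogram it, or read off any quantile of interest
print(statistics.mean(outputs), statistics.stdev(outputs))
```

Nothing about the rule needs to be simple or invertible; the computer never solves for the output distribution, it just samples it.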

You do not need a closed form for the output distribution, however, if you have a powerful computer and an algorithm for generating good-quality pseudo-random numbers! A computer can “generate” the output distribution “experimentally” by drawing pseudo-random numbers and feeding them to the rule as inputs. Von Neumann himself had (re-)invented a simple method, called the “middle-square” method, for generating (finite) sequences of uniformly distributed pseudo-random numbers. The Monte Carlo algorithm was one of the first algorithms to be programmed on ENIAC.
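The middle-square method is simple enough to fit in a few lines: square the current four-digit number, zero-pad the square to eight digits, and keep the middle four digits as the next number in the sequence. A minimal sketch (using a four-digit state, the textbook formulation):

```python
def middle_square(seed, n):
    """Von Neumann's middle-square generator: square the current
    four-digit number and keep the middle four digits of the
    (zero-padded) eight-digit square as the next number."""
    x = seed
    sequence = []
    for _ in range(n):
        x = (x * x) // 100 % 10_000  # middle four of the eight digits
        sequence.append(x)           # divide by 10_000 to map into [0, 1)
    return sequence

print(middle_square(1234, 5))
```

The method is historically important but weak by modern standards: sequences are short and can collapse (a state of 0 stays at 0 forever), which is why it was soon superseded by better generators.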

One interesting instance of this class of problems is forecasting based on historical time series data. Consider a time-dependent process in a given initial state. Suppose that during each time step forward, a decision about the future state of the system is made according to a predetermined probability distribution. What would be the forecast for the future state? The Monte Carlo method not only forecasts the final state (by averaging over the outcomes of many possible histories of the system), giving a “point estimate”, but also determines the entire distribution of the final state. Point estimates are popular but, on their own, of limited meaning: they vary whenever another instance of the same data (obtained by re-measuring it experimentally or resampling the existing data set) is used. One needs a methodology to quantify the “variability” in predictions, and the Monte Carlo method offers a systematic way and general framework to achieve this goal. The Monte Carlo method is also used to sample “difficult” multivariate distributions appearing in Bayesian inference and machine learning applications. Yet another variant shows up as a heuristic search algorithm suited to high-dimensional search spaces.

The mission of the DecisionNext platform is to help make better data-driven, high-value decisions. In particular, scenario-based forecasting and Bayesian inference sit at the heart of its data products. The family of Monte Carlo methods is an invaluable data science tool in this journey.