

Monte Carlo Tutorial:

Monte Carlo analysis can be used to provide accurate and reliable statistics for the parameters of a model that has been fitted to an experimental dataset. In the case of UltraScan, you may have used the finite element analysis to fit a sedimentation velocity experiment to a finite element model describing the various properties of the component(s) contained in the sample.

In that case, the fitted parameters may include the sedimentation and diffusion coefficients, a nonideality parameter sigma, possibly the baseline and partial concentration of the sample, and the molecular weight of the component (which in this case is nothing more than another representation of the combined sedimentation and diffusion coefficients, together with the hydrodynamic corrections).

Once you have successfully fitted your experimental data to the finite element model, you should have achieved a variance of around 1 x 10^-5 or less and residuals that scatter randomly about the mean of the fit. The problem with this approach is that the parameters obtained from the fit have a finite uncertainty which is unknown, yet necessary in order to judge the confidence one may place in the fitted results.

The uncertainty arises from the fact that experimental data contain noise, which we assume to be random (compare the residuals!) and distributed normally (Gaussian) about the mean. If we could repeat the same experiment many times, we would obtain a good distribution of datasets, each providing a set of best-fit parameters (when fitted with the same model) that could be used to compile reliable statistics on each parameter and to determine its confidence level. We would expect that the residuals remain random about the mean (for the model used), that the standard deviation of the noise stays the same, and that the distribution of the noise around the mean remains Gaussian.

If these conditions are met, we can perform a Monte Carlo analysis instead of repeating our experiment 10,000 times to accurately determine the uncertainties in the parameter estimates from the fit. This is a highly compute-intensive procedure and in most cases will require a Beowulf cluster, where the calculations can be performed in parallel.
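
As a quick sanity check of these assumptions, one can test whether the residuals of a fit are consistent with a Gaussian distribution. The following minimal sketch in Python is illustrative only (it is not part of UltraScan, and the residuals here are synthetic stand-ins):

    import numpy as np
    from scipy import stats

    # Stand-in residuals; in practice these come from your finite element fit.
    rng = np.random.default_rng(0)
    residuals = rng.normal(loc=0.0, scale=0.003, size=500)

    # D'Agostino-Pearson test of the null hypothesis that the data are Gaussian.
    statistic, p_value = stats.normaltest(residuals)
    print(f"normality test p-value: {p_value:.3f}")

    # A large p-value (e.g., > 0.05) gives no evidence against normality --
    # one of the conditions required before a Monte Carlo analysis is meaningful.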

Here is how it works: First, you simulate a solution from your initial best-fit parameters. This fit will produce a certain variance which reflects the noise level in your data. Using a random number generator, you generate a new noise distribution that has the same properties as the noise in your original data, i.e., it is random and Gaussian, with the same standard deviation (or variance) as your original dataset. Then you add this noise to the best-fit solution (the fit), and you have an equivalent experimental dataset that is slightly different from the original and can be refitted.
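
A minimal sketch of this step in Python (illustrative only; the array names, the noise level sigma, and the stand-in best-fit values are assumptions, not UltraScan code):

    import numpy as np

    rng = np.random.default_rng(42)

    # best_fit: the simulated solution evaluated at each datapoint (assumed given).
    best_fit = np.linspace(0.0, 1.0, 200)

    # sigma: the standard deviation of the residuals from the original fit.
    sigma = 0.003

    # Generate Gaussian noise with the same standard deviation as the original
    # data and add it to the best-fit solution to obtain a synthetic dataset.
    noise = rng.normal(loc=0.0, scale=sigma, size=best_fit.size)
    synthetic_data = best_fit + noise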

For the refitting, you simply reuse the best-fit parameters as initial estimates, since they will be reasonably close to the values obtained from the fit to the synthetic-noise dataset. You then repeat this noise simulation a large number of times (perhaps 10,000), refit each dataset, and collect the best-fit parameters from each iteration. Like the random noise, these parameters will follow a Gaussian distribution with a certain standard deviation. Each parameter will have its own standard deviation, which can be used to estimate the 95% and 99% confidence intervals of that parameter.
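
Put together, the iteration loop might look like the following sketch. A generic least-squares refit stands in for UltraScan's finite element fit; the model, parameter values, and iteration count are all illustrative assumptions:

    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(1)

    def model(x, a, b):
        # Stand-in model; UltraScan would use the finite element solution here.
        return a * np.exp(b * x)

    x = np.linspace(0.0, 1.0, 100)
    best_params = np.array([2.0, 1.5])   # from the original fit (assumed)
    best_fit = model(x, *best_params)
    sigma = 0.01                         # residual std. dev. of the original fit

    n_iter = 1000                        # 10,000 in a production run
    collected = np.empty((n_iter, best_params.size))
    for i in range(n_iter):
        synthetic = best_fit + rng.normal(0.0, sigma, x.size)
        # Reuse the best-fit parameters as initial estimates for a fast refit.
        popt, _ = curve_fit(model, x, synthetic, p0=best_params)
        collected[i] = popt

    # Each parameter's standard deviation yields approximate confidence intervals.
    mean = collected.mean(axis=0)
    std = collected.std(axis=0)
    print("95% CI:", mean - 1.96 * std, "to", mean + 1.96 * std)
    print("99% CI:", mean - 2.576 * std, "to", mean + 2.576 * std)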

As a variation on this theme, one can also bootstrap the original experimental noise onto the best-fit solution. In that case, a certain percentage of the original residuals from the best-fit datapoints is randomly selected and randomly permuted to replace points of the original data, which is then refitted. One can also mix both methods: use a certain percentage of randomly placed original datapoint residuals (with a different selection of points in each iteration) and substitute the remaining points with Gaussian noise.
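
A sketch of this mixed bootstrap variant, assuming the best-fit solution and original residuals are available as arrays (the 50% fraction and all names are illustrative choices, not UltraScan's):

    import numpy as np

    rng = np.random.default_rng(2)

    best_fit = np.linspace(0.0, 1.0, 200)              # assumed given
    residuals = rng.normal(0.0, 0.003, best_fit.size)  # from the original fit
    sigma = residuals.std()

    n = best_fit.size
    fraction = 0.5                                     # points to bootstrap
    chosen = rng.choice(n, size=int(fraction * n), replace=False)

    # Start from fresh Gaussian noise, then overwrite the chosen positions
    # with a random permutation of the original residuals (the mixed method).
    noise = rng.normal(0.0, sigma, n)
    noise[chosen] = rng.permutation(residuals)[: chosen.size]

    synthetic_data = best_fit + noise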

The collection of all simulated data fits then generates a distribution of values for each parameter, which can be binned into a histogram to visualize the confidence distribution of each parameter. This distribution should have the statistical properties of a Gaussian normal distribution and can be analyzed with respect to confidence levels and so forth. For more details on the most common statistical tests that can be performed on these distributions, please click here.
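
For instance, the collected values for one parameter can be binned and summarized as in this sketch (the values here are synthetic stand-ins for one column of the collected parameters from the loop above):

    import numpy as np

    rng = np.random.default_rng(3)
    # Stand-in for one parameter's collected Monte Carlo values.
    values = rng.normal(loc=4.3, scale=0.05, size=10_000)

    # Bin the distribution; it should be approximately Gaussian.
    counts, edges = np.histogram(values, bins=50)

    # Percentile-based confidence limits require no normality assumption at all.
    lo95, hi95 = np.percentile(values, [2.5, 97.5])
    print(f"95% confidence interval: [{lo95:.4f}, {hi95:.4f}]")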

Data Simulation Methods:

Just as there are many ways in which the original data may acquire different variances, there are different methods and rules that can be applied to regenerate a new error distribution that most closely resembles the error in the original data. In cases where the errors are unevenly distributed, for example because of instrumental limits such as those present in absorbance optical systems, the noise synthesized for the Monte Carlo iteration should ideally reflect this property. Several approaches are feasible and have been implemented in the UltraScan software.

The first method is to use the standard deviation of the original fit and apply this value as the standard deviation of the Gaussian noise that is randomly added to the best-fit data. This method is most appropriate for cases where the noise level changes little across the data.

A second method uses the deviation of each original datapoint from the best fit, replacing the point with randomly generated Gaussian noise whose standard deviation equals the deviation of the original datapoint being replaced. This method has the advantage that the new noise very closely matches the original noise distribution. However, points from the original dataset that happen to lie very close to, or exactly on, the best fit may be "locked in" and never change in any Monte Carlo iteration. This may artificially constrain the fits to a subset of all possible solutions.
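
A sketch of this per-point scheme, with the same assumed arrays as before (again illustrative, not UltraScan code):

    import numpy as np

    rng = np.random.default_rng(4)

    best_fit = np.linspace(0.0, 1.0, 200)              # assumed given
    residuals = rng.normal(0.0, 0.003, best_fit.size)  # original residuals

    # Each replacement point draws its noise level from the deviation of the
    # original datapoint it replaces; a residual of zero stays "locked in".
    point_sigma = np.abs(residuals)
    synthetic_data = best_fit + rng.normal(0.0, point_sigma)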

To alleviate the problem mentioned above, two other synthetic noise generation schemes can be employed. One option is to replace points whose deviation falls below the overall standard deviation of the best fit with noise at the overall level of the best fit of the run. This way the variance may be on the large side, but one can be reasonably assured that the final parameter distribution is not artificially narrow.

Another option is to average the residuals of the best fit over neighboring points, which provides a reasonable compromise: the variance matches local conditions while still adequately adjusting for varying levels of noise within a dataset.
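
Both remedies are sketched below, continuing with the same assumed arrays (the window size is an arbitrary illustrative choice):

    import numpy as np

    rng = np.random.default_rng(5)

    best_fit = np.linspace(0.0, 1.0, 200)              # assumed given
    residuals = rng.normal(0.0, 0.003, best_fit.size)  # original residuals
    sigma = residuals.std()                            # overall noise level

    # Option 1: floor the per-point noise level at the overall standard
    # deviation so that no point is locked in at an artificially small value.
    floored_sigma = np.maximum(np.abs(residuals), sigma)
    synthetic_floor = best_fit + rng.normal(0.0, floored_sigma)

    # Option 2: average the residual magnitudes over a window of neighboring
    # points to obtain a locally adapted noise level.
    window = 11
    kernel = np.ones(window) / window
    local_sigma = np.convolve(np.abs(residuals), kernel, mode="same")
    synthetic_local = best_fit + rng.normal(0.0, local_sigma)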


www contact: Borries Demeler

This document is part of the UltraScan Software Documentation distribution.
Copyright © notice

The latest version of this document can always be found at:

    http://www.ultrascan.uthscsa.edu

Last modified on January 12, 2003.