Sometimes it is necessary to determine properties from simulations combined under
different conditions. The Weighted Histogram Analysis Method (WHAM) makes it possible
to unbias the distribution of the combined simulations.
Here, we present the results on a 1D toy model sampled with a Markov Chain Monte
Carlo (MCMC) algorithm.
Here are the imports used in the IPython notebook:
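The original cell is not reproduced here; a minimal set of imports sufficient for the sketches below (assuming NumPy, Matplotlib and SciPy) would be:

```python
# Assumed imports for the sketches below, not the original notebook cell
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import fsolve
```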
The 1D potential is constructed from a polynomial fit of a few points:
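As an illustration only (the original control points are not reproduced here), a potential of this kind can be built with numpy.polyfit; the points below are made up for the sketch:

```python
# Hypothetical control points; the original ones are not reproduced here
x_pts = np.array([-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0])
y_pts = np.array([ 4.0,  0.5,  2.0, 1.0, 2.5, 0.0, 5.0])

# Fit a polynomial through the points and wrap it as a callable potential U(x)
coeffs = np.polyfit(x_pts, y_pts, deg=4)
U = np.poly1d(coeffs)

x_grid = np.linspace(-3, 3, 100)
plt.plot(x_grid, U(x_grid))
plt.xlabel('x')
plt.ylabel('U(x)')
plt.show()
```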
This potential is then sampled using MCMC (a minimal Metropolis sketch is given after the temperature list below).
We used six different temperatures (\(T=\beta^{-1}\)):
\(\beta=1.00\)
\(\beta=0.85\)
\(\beta=0.70\)
\(\beta=0.55\)
\(\beta=0.40\)
\(\beta=0.25\)
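As announced above, here is a minimal Metropolis sketch of how the potential could be sampled at each \(\beta\); the sampler, step size and number of steps are assumptions for illustration, not the original notebook code:

```python
def metropolis(U, beta, nsteps=100000, x0=0.0, step=0.5):
    """Sample exp(-beta * U(x)) on [-3, 3] with a simple Metropolis random walk."""
    rng = np.random.default_rng()
    x = x0
    samples = np.empty(nsteps)
    for k in range(nsteps):
        x_new = x + rng.uniform(-step, step)
        if -3.0 <= x_new <= 3.0:          # keep the walker inside the fitted range
            dU = U(x_new) - U(x)
            # Metropolis acceptance rule
            if dU <= 0 or rng.random() < np.exp(-beta * dU):
                x = x_new
        samples[k] = x
    return samples

betas = np.array([1.00, 0.85, 0.70, 0.55, 0.40, 0.25])
trajectories = [metropolis(U, beta) for beta in betas]
```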
We then plot the distributions obtained from the six MCMC simulations:
We can look at the distribution of the six MCMC simulations merged together:
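A possible way to build and plot both sets of histograms, assuming the trajectories from the sketch above and \(M=100\) bins on \([-3, 3]\):

```python
M = 100                                    # number of bins
bin_edges = np.linspace(-3, 3, M + 1)
bin_centers = 0.5 * (bin_edges[:-1] + bin_edges[1:])

# One normalized histogram per temperature
for beta, traj in zip(betas, trajectories):
    hist, _ = np.histogram(traj, bins=bin_edges, density=True)
    plt.plot(bin_centers, hist, label=r'$\beta=%.2f$' % beta)
plt.legend()
plt.show()

# Merged (biased) distribution of the six simulations
plt.hist(np.concatenate(trajectories), bins=bin_edges, density=True)
plt.show()
```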
This distribution is biased, as we ran the MCMC at six different temperatures. The
aim of WHAM is to unbias the distribution. We assume that \(p_{ij}^{b}\), the biased
probability of bin j in the i-th simulation, is related to \(p_{j}^{o}\), the unbiased
probability of bin j, via:

\[p_{ij}^{b} = f_{i}c_{ij}p_{j}^{o}\]

where \(c_{ij}\) is the biasing factor and \(f_{i}\) is a normalizing constant
chosen such that \(\sum_{j=1}^{M}p_{ij}^{b}=1\):

\[f_{i}^{-1} = \sum_{j=1}^{M}c_{ij}p_{j}^{o}\]
For the temperature biasing, the biasing factor is:

\[c_{ij} = e^{-(\beta_{i}-\beta_{0})U_{j}}\]

where \(U_{j}\) is the potential energy at the center of bin j and \(\beta_{0}\) is the
inverse temperature at which we want the unbiased distribution. We assume that we want
to compute the potential of mean force (PMF) at \(\beta_{0}=1\).
We can compute \(c_{ij}\):
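A sketch of this computation, using the bin centers defined above, the biasing factor given above and \(\beta_{0}=1\):

```python
beta0 = 1.0                                # target inverse temperature for the PMF
U_bins = U(bin_centers)                    # potential energy U_j at each bin center
# c[i, j] = exp(-(beta_i - beta_0) * U_j), an S x M matrix
c = np.exp(-np.outer(betas - beta0, U_bins))
```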
Just for fun, here is a visualisation of \(c\):
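For instance with matplotlib:

```python
plt.imshow(c, aspect='auto', interpolation='none')
plt.colorbar()
plt.xlabel('bin j')
plt.ylabel('simulation i')
plt.show()
```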
An optimal estimate of \(p_{j}^{o}\) is given by:

\[p_{j}^{o} = \frac{\sum_{i=1}^{S}n_{ij}}{\sum_{i=1}^{S}N_{i}f_{i}c_{ij}}\]

with \(n_{ij}\) the bin count for simulation i and bin j, and \(N_{i}\) the
total number of samples generated by the i-th simulation,
and, as said before:

\[f_{i}^{-1} = \sum_{j=1}^{M}c_{ij}p_{j}^{o}\]
S is the total number of simulations (here \(S=6\)) and M is the total number of
bins (here \(M=100\)).
We can easily compute \(n_{ij}\;\forall\;(i,j)\):
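For example, with one call to np.histogram per trajectory (raw counts this time, not densities):

```python
# n[i, j]: number of samples of simulation i falling in bin j
n = np.array([np.histogram(traj, bins=bin_edges)[0] for traj in trajectories])
N = n.sum(axis=1)                          # N_i: total number of samples per simulation
S, M = n.shape                             # here S = 6 and M = 100
```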
A representation of n:
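The same kind of plot as for c can be used:

```python
plt.imshow(n, aspect='auto', interpolation='none')
plt.colorbar()
plt.xlabel('bin j')
plt.ylabel('simulation i')
plt.show()
```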
Now we use fsolve to solve the WHAM equations described above:
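One way to cast this into a root-finding problem for scipy.optimize.fsolve is to take the \(f_{i}\) as unknowns and plug the optimal estimate of \(p_{j}^{o}\) into the normalization condition; this is a sketch assuming the arrays c, n and N built above, not necessarily the formulation used in the original notebook:

```python
def wham_residual(f):
    """Residual of the WHAM self-consistent equations, as a function of f_i."""
    # Optimal estimate of the unbiased probabilities for the current f_i:
    # p_j = sum_i n_ij / sum_i N_i f_i c_ij
    p = n.sum(axis=0) / np.dot(N * f, c)
    p /= p.sum()                           # impose sum_j p_j = 1 (fixes the gauge)
    # The f_i must satisfy f_i^{-1} = sum_j c_ij p_j
    return f - 1.0 / np.dot(c, p)

f = fsolve(wham_residual, np.ones(S))      # initial guess: f_i = 1
p_opt = n.sum(axis=0) / np.dot(N * f, c)
p_opt /= p_opt.sum()                       # normalized unbiased distribution
```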
And here is the result: the unbiased distribution for \(\beta=1\):
We can then derive the free energy profile (F), or potential of mean force (PMF),
as \(F_{j} = -\beta^{-1}\ln(p_{j}^{o})\),
and compare it to the initial potential used in the toy model:
If we normalize with the \(F_{0}\) factor, we obtain:
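A sketch of these last two steps, where \(F_{0}\) is taken here as the constant offset that aligns the estimated profile with the original potential (this choice of \(F_{0}\) is an assumption for the sketch):

```python
# PMF from the unbiased distribution at beta_0 = 1 (defined up to a constant F_0)
mask = p_opt > 0                           # avoid log(0) in empty bins
F = -np.log(p_opt[mask]) / beta0

# Compare to the initial potential used in the toy model
plt.plot(bin_centers[mask], F, label='WHAM PMF')
plt.plot(bin_centers, U_bins, label='original potential')
plt.legend()
plt.show()

# Normalize with the F_0 offset (here: align the minima of the two curves)
F0 = F.min() - U_bins.min()
plt.plot(bin_centers[mask], F - F0, label=r'WHAM PMF $- F_0$')
plt.plot(bin_centers, U_bins, label='original potential')
plt.legend()
plt.show()
```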
If you want to ask me a question or leave me a message, add @bougui505 in your comment.