I did not find your version of that algorithm on Wikipedia; can you give me a link?
http://en.wikipedia.org/wiki/Algorithms ... g_variance
why we need the "variance for the mean"
The mean of a sample is an unbiased estimator for the mean (also called
the expectation value) of a probability distribution. This follows from
linearity of expectation and the fact that the Xi are identically distributed:
E((X1+...+Xn)/n) = (E(X1)+...+E(Xn))/n = (n*E(X1))/n = E(X1)
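If you want to see this numerically, here is a small Python sketch (my own illustration with made-up numbers, not part of Peter's algorithm): it draws many samples from an assumed normal distribution and checks that the sample means scatter around the true mean, i.e. the estimator is unbiased.

import random

# Illustration only: average of many sample means should be close
# to the true mean of the distribution (unbiasedness).
random.seed(0)
true_mean, true_sd = 5.0, 2.0   # arbitrary illustrative values
n, runs = 10, 10000

sample_means = []
for _ in range(runs):
    xs = [random.gauss(true_mean, true_sd) for _ in range(n)]
    sample_means.append(sum(xs) / n)

print(sum(sample_means) / runs)  # comes out close to true_mean = 5.0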
To know how good an estimator it is, we have to compute its variance.
To this end we use the following rules:
Var(Z/n)=Var(Z)/n^2
and, if U and V are independent,
Var(U+V)=Var(U)+Var(V)
With these rules we can compute the variance of the mean of a sample:
Var((X1+...+Xn)/n) = Var(X1+...+Xn)/n^2 = n*Var(X1)/n^2 = Var(X1)/n.
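Again just as an illustration (same assumed normal data, arbitrary values), a short Python sketch that checks this: the empirical variance of many sample means comes out close to Var(X1)/n.

import random

# Illustration only: variance of the sample mean shrinks like Var(X1)/n.
random.seed(1)
true_sd = 2.0              # so Var(X1) = 4.0
n, runs = 10, 20000

means = []
for _ in range(runs):
    xs = [random.gauss(0.0, true_sd) for _ in range(n)]
    means.append(sum(xs) / n)

grand_mean = sum(means) / runs
var_of_means = sum((m - grand_mean) ** 2 for m in means) / runs
print(var_of_means)        # comes out close to 4.0 / 10 = 0.4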
EDIT:
To clear up some possible confusion: in prior discussions the term "sample"
has sometimes been used for a single measurement. That usage is correct
in digital signal processing, but not in statistics, where a sample means
a set of measurements. For this reason Peter refers to a single run of his
algorithm as a "cycle".
EDIT2:
Another possible source of confusion is the formula
Var((X1+...+Xn)/n) = Var(X1)/n.
We don't know Var(X1), but we can estimate it by taking the sample variance
of the same sample. That estimate involves a division by n-1, so in total
we divide by n*(n-1).
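A minimal sketch of that last point in Python (the measurement values are made up):

# Estimate Var(X1) by the sample variance (division by n-1), then divide
# by n to get the variance of the mean: an overall division by n*(n-1).
xs = [4.8, 5.1, 5.3, 4.9, 5.0]           # hypothetical measurements
n = len(xs)
mean = sum(xs) / n
sum_sq = sum((x - mean) ** 2 for x in xs)

sample_var  = sum_sq / (n - 1)           # estimate of Var(X1)
var_of_mean = sum_sq / (n * (n - 1))     # estimated variance of the mean
print(mean, sample_var, var_of_mean)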