The running mean script serves a similar purpose to the numerical differentiation script and is about as simple. It is used when
taking data that follows a Gaussian (normal) distribution: as readings are taken, each one is logged in the script and the mean of the
current dataset is computed. Because of the nature of a normal distribution, this running mean tends towards a specific value as more
data is taken, and this is typically the value the user wants to measure.
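The convergence described above can be illustrated with a short standalone snippet (not taken from the notebook itself): the cumulative mean of normally distributed samples settles toward the distribution's true mean as more samples accumulate.

```python
import numpy as np

# Illustration only: cumulative mean of Gaussian samples.
# The true mean (10.0) and spread (2.0) are arbitrary example values.
rng = np.random.default_rng(0)
samples = rng.normal(loc=10.0, scale=2.0, size=200)

# running[i] is the mean of the first i+1 samples.
running = np.cumsum(samples) / np.arange(1, samples.size + 1)

# Early entries of `running` wander; later ones hover near 10.
print(running[4], running[-1])
```

Plotting `running` against the sample index produces exactly the kind of levelling-off curve the script is meant to show.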
The point of the script is to find an efficient place to stop taking readings. For example, taking 100 datapoints to find a mean is wasteful if
30 suffice; the script gives the user a constant visual indication of when enough trials have been taken.
Furthermore, the script plots a running random uncertainty on the mean, which the user usually wants to reduce by taking more trials. It therefore also
becomes visible when further trials are unnecessary because they yield little reduction in the experimental uncertainty.
The script is thus essentially a test for finding the appropriate number of trials to record when experimental data fits a normal
distribution. This gives it many applications in physics; for example, I have used it when finding mean square voltages while examining temperature-dependent
noise in a resistor.
The structure of the code is simple. It is a Jupyter Notebook in which the first cell imports the NumPy and matplotlib modules and defines empty arrays for the data. The second cell defines the running mean function. The function doesn't actually return anything; it simply appends the new reading to the arrays and calculates a new mean to plot. The third cell is then re-run for each new datapoint, which is passed to the "runningmean" function, and the relevant arrays are plotted. The user doesn't need to define what quantity is being averaged, nor its units; they simply input the raw values and the graph updates with a new mean and random uncertainty.
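A minimal sketch of that structure might look like the following. The array names, the `runningmean` signature, and the example readings are assumptions for illustration, not the notebook's exact code, and the plotting call is shown only as a comment.

```python
import numpy as np

# First cell (sketch): empty lists standing in for the notebook's arrays.
data = []           # raw readings, one appended per run of the input cell
means = []          # running mean after each reading
uncertainties = []  # running random uncertainty after each reading

# Second cell (sketch): the running mean function. It returns nothing;
# it just logs the reading and updates the quantities to be plotted.
def runningmean(value):
    data.append(value)
    n = len(data)
    means.append(np.mean(data))
    # Random uncertainty: sample standard deviation over sqrt(n).
    err = np.std(data, ddof=1) / np.sqrt(n) if n > 1 else 0.0
    uncertainties.append(err)
    # Third cell would then re-plot, e.g. with matplotlib:
    # plt.errorbar(range(1, n + 1), means, yerr=uncertainties)

# Re-running the input cell once per reading (example values assumed):
for reading in [4.9, 5.1, 5.0, 4.8, 5.2]:
    runningmean(reading)

print(means[-1], uncertainties[-1])
```

The design means the user's only interaction per trial is re-running one cell with a new raw value; everything else updates automatically.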
Note: the random uncertainty is taken as the standard deviation divided by the square root of the number of datapoints (the standard error of the mean), which is standard practice in scientific experiments; see Hughes, I.G. & Hase, T.P.A. (2010), Measurements and their Uncertainties: A Practical Guide to Modern Error Analysis.
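In NumPy this quantity is a one-liner; the readings below are made-up example values.

```python
import numpy as np

# Example readings (assumed values, mean 2.0).
readings = np.array([2.1, 1.9, 2.0, 2.2, 1.8])

# Random uncertainty on the mean: sample std dev / sqrt(N).
# ddof=1 gives the sample (Bessel-corrected) standard deviation.
sem = np.std(readings, ddof=1) / np.sqrt(readings.size)
print(sem)
```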