In statistics, standard deviation is something that comes up often. It is a way of measuring the variation, or dispersion, of a collection of values in a data set. For example, take the set of numbers [50, 51] and compare it to [49, 89]. It goes without saying that 50 and 51 are closer together than 49 and 89, but how should one go about measuring that? Standard deviation is one way to do just this, though there are of course many others. Even standard deviation itself is not so standard, actually, as there is more than one formula that goes by the name.
In order to find the standard deviation of a data set, I first need to know the arithmetic mean of that data set. Arithmetic mean is just a more formal way of referring to an average. When you really get into statistics you will find that there is more than one kind of mean, so it is important to know which mean we are talking about.
So yes, I just need some kind of method that adds up all the numbers in a data set and then divides by the count of numbers in the set to get a mean.
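A minimal sketch of such a mean method might look like this in Python; the function name `mean` is just my choice here, and I am assuming a plain, non-empty list of numbers:

```python
def mean(data):
    # Sum of all the numbers divided by how many numbers there are
    return sum(data) / len(data)

print(mean([50, 51]))  # 50.5
print(mean([49, 89]))  # 69.0
```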
Once I have my mean method I can use it to create a standard deviation method. The process involves summing the squares of the differences between each number in the set and the mean. Once I have this sum, I divide it by the count of numbers in the set minus one, and then take the square root of the result.
Did you get all that? Either way, something along these lines seems to work okay for me.
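The steps above can be sketched out like this; again `mean` and `stdev` are just names I am using for illustration, and I am assuming a list with at least two numbers so the n - 1 division works:

```python
import math

def mean(data):
    return sum(data) / len(data)

def stdev(data):
    # Sum of the squared differences between each value and the mean
    m = mean(data)
    ss = sum((x - m) ** 2 for x in data)
    # Divide by count minus one, then take the square root
    return math.sqrt(ss / (len(data) - 1))

print(stdev([50, 51]))  # 0.7071067811865476
print(stdev([49, 89]))  # 28.284271247461902
```

As expected, the tightly grouped set [50, 51] gives a much smaller standard deviation than the spread-out set [49, 89].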
This is just one of several kinds of standard deviation, though. In the Python statistics standard library this one is referred to as the sample standard deviation, which is just the square root of the sample variance. There is also a population standard deviation, which divides by the count of numbers rather than the count minus one.
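For what it is worth, the Python statistics module exposes both flavors directly, so the hand-rolled method above can be checked against `statistics.stdev`; `statistics.pstdev` is the population version that divides by n instead of n - 1:

```python
import statistics

data = [49, 89]
# Sample standard deviation: divides the sum of squares by n - 1
print(statistics.stdev(data))   # 28.284271247461902
# Population standard deviation: divides by n instead
print(statistics.pstdev(data))  # 20.0
```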