Typically you have a collection of values of some variable. If the variable is human height, these values might be expressed in inches or centimetres; if it’s human weight, in pounds or kilograms. The standard deviation is a specific number of these units; roughly speaking, it’s the size of a typical deviation from the mean of these measurements. To compare two different collections of measurements, it’s generally very desirable to express them in units that make these typical deviations the same size. We might call such units standard units: standard units are units chosen so that the mean (average) of the measurements is $0$, and a typical deviation (technically, the standard deviation) has size $1$. The $z$-scores are just the original measurements expressed in these standard units instead of the original units of measurement.
The conversion is actually quite similar to the conversion between two more familiar units of measurement. If you measure a length in inches (or centimetres) and then convert that measurement to feet (or metres), you have to multiply by $\frac1{12}$ (or $\frac1{100}$), because each inch is $\frac1{12}$ of a foot (each centimetre is $\frac1{100}$ of a metre). Suppose that you’re measuring human heights, and you get a standard deviation of $3$ inches. Then for this sample $1$ standard unit is $3$ inches, and each inch is $\frac13$ of a standard unit. Someone whose height is $4.5$ inches above the mean is $4.5\cdot\frac13=1.5$ standard units above the average. This deviation of $1.5$ standard units is the $z$-score corresponding to that height of $4.5$ inches above average.
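As a quick arithmetic check, the conversion just described can be sketched in a few lines of Python; the numbers are simply the ones from the example above:

```python
# Converting a deviation measured in inches into standard units,
# using the example's figures: standard deviation s = 3 inches.
s = 3.0           # 1 standard unit = 3 inches for this sample
deviation = 4.5   # inches above the mean

z = deviation * (1 / s)  # each inch is 1/3 of a standard unit
print(z)                 # 1.5 standard units above the average
```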
You might say that the standard deviation is a yardstick, and a $z$-score is a measurement expressed in terms of that yardstick.
The situation is a little different from simple conversion between inches and feet, though. It’s more like conversion between Fahrenheit and Celsius temperatures: not only does the size of the unit change, but also the $0$ point. Just as $0^\circ$ C is $32^\circ$ F, not $0^\circ$ F, the $0$ point for $z$-scores is generally not the same as the $0$ point for the actual measurements. A sample of adult American males, for instance, might have an average height of $70$ inches, so that someone $4.5$ inches above the average would be $74.5$ inches tall. The $0$ point for $z$-scores, measured in standard units, is always right at the average, so in this case a height of $70$ inches would correspond to a $z$-score of $0$. A height of $74.5$ inches, being $4.5$ inches and therefore (as we saw before) $1.5$ standard units above average, would correspond to a $z$-score of $0+1.5=1.5$. And so on.
When you calculate
$$z = \frac{x - \bar x}{s}$$
to get a $z$-score $z$ from a measurement $x$, you’re doing the same kind of unit conversion that you do in converting a temperature from one scale to the other. The calculation $x-\bar x$ gives you the deviation of your actual measurement from the mean; in my example, that’s $74.5-70=4.5$ inches. When you divide by $s$, the standard deviation, you’re changing ‘yardsticks’ from inches to standard units. In the example $s=3$ inches, so you’re multiplying the deviation of $4.5$ inches by the conversion (scaling) factor of $\frac13$ of a standard unit per inch.
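The whole calculation can be sketched directly in Python, using the running example’s mean and standard deviation:

```python
# A minimal sketch of the z-score conversion z = (x - x_bar) / s,
# with the example's figures: mean height 70 in., standard deviation 3 in.
x_bar = 70.0  # sample mean (inches)
s = 3.0       # sample standard deviation (inches)
x = 74.5      # one individual measurement (inches)

z = (x - x_bar) / s  # deviation of 4.5 in., rescaled to standard units
print(z)             # 1.5
```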
One point of these standard units is that they permit comparison of distributions. For example, women are on average shorter than men, and their heights vary a bit less. Thus, a woman who is $3$ inches above the female average is, relative to the female population, taller than a man who is $3$ inches above the male average. But how much taller? Use of standard units makes it possible to answer that kind of question. I’m using data that are now a bit out of date, but very roughly she is $1.2$ standard units above the female average, while he is only $1$ standard unit above the male average.
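This kind of cross-population comparison is easy to sketch in code. Note that the standard deviations below ($2.5$ in. for women, $3.0$ in. for men) are hypothetical figures chosen only to be consistent with the rough $z$-scores quoted above, not real survey values:

```python
# Comparing deviations from two different populations in standard units.
# The standard deviations here are hypothetical illustration values.
def z_score(x, mean, s):
    """Express a measurement x in standard units for its own population."""
    return (x - mean) / s

# Each person is 3 inches above their own population's average height.
z_woman = z_score(3.0, mean=0.0, s=2.5)  # relative to female variability
z_man   = z_score(3.0, mean=0.0, s=3.0)  # relative to male variability
print(z_woman, z_man)  # 1.2 1.0
```

The smaller female standard deviation makes the same $3$-inch deviation count for more in standard units, which is exactly the point of the comparison.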