Remember that ad? “Four out of five dentists recommend” this type of gum for “their patients that chew gum”.

I’ve heard more than one comedy routine about this very advertisement. Many comedians focus on the “fifth dentist”. What the heck was HIS problem? Was he a dental outcast?? Did he not receive the small, unmarked bills in the envelope stuffed behind the tank of nitrous oxide???

While funny – these are the wrong questions. What you should be wondering is “why ONLY five dentists, and what’s the story for the non-gum chewers? Are they forever doomed to having bad teeth??”

I use statistics at my job to resolve or understand problems. For example, problem x is showing up at process y. I need to understand (a) how often x is happening, (b) why process y is the suspected cause, and (c) if process y is changed in some way z1, z2, etc., what happens to the failure rate attributed to problem x?

Here’s the catch: if I don’t understand process y, I have no sound basis for picking a particular change z1 to try in (c). However, if I have a gut feeling about what the change in (c) should be and wish to appear proactive, wouldn’t that affect how I frame the decision-making process in (a) and (b)? After all, I’m the engineer! I’m the decider!!

People do this all the time. It’s called “putting the cart before the horse”.


And sometimes, hey – the cart’s going downhill, so it gains speed. You get lucky, the horse follows you on the way down, and everybody wins.

And other times, your cart is positioned going uphill, and you run over the horse as your cart goes in the opposite direction you wanted to go.

Want to know what really bugs me?

When folks use statistics as a tool to convince people of the righteousness of their gut rather than using the opportunity to understand or frame the problem at hand.

Why? Unfortunately, many people learn the wrong lesson about numbers. Math is taught, almost philosophically, as TRUTH. Statistics is using math to GUESS. But some folks can’t discern the difference between statistics and math, so if they hear a NUMBER, IT MUST BE TRUE.

This bugs me. And it probably bugs you as well, whether you are a conservative or a liberal, Democrat or Republican. (Cue the cries of “foul!”)

So I’ve collected some ways of catching these examples of statistical injustice:

• Beware of “Always, Never” statements – that’s pretty strong language; always or never means 100% or 0%. And anything that a majority agrees with (50.00000000000001%) isn’t necessarily a “mandate” either. It’s half plus one. Majority may rule in a democracy, but it isn’t a great metric for declaring success.

• If a statistic shows an action is worthwhile, a similar-sounding statistic can be used to argue its ineffectiveness. If you are 80% successful at something, that sounds pretty darn good, right? It also means that you’re incorrect 20% of the time. How those statistics are applied matters. For example, if I say “our data shows that 80% of these pacemakers work” (NOTE: I don’t work with pacemakers), that’s a LOT of wrongful-death lawsuits.

So in this case, the true measure of success would be the reduction of the 20% failure rate, not the hyping of the 80% success rate. Analyzing small failures gets into the subject of process capability analysis (which I’ll tackle later here, but for now, follow the wiki link).

Fixing the smaller problems gets you that much closer to “always or never”, but a warning to the absolutists: you’ll never actually get there. Don’t get discouraged, though; striving for “really close to always or never” is always appreciated. 🙂
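
To make the 80/20 arithmetic concrete, here’s a quick back-of-the-envelope sketch in Python. The device counts and success rates are entirely made up (again: I don’t work with pacemakers):

    # What an "80% success rate" means at scale. All numbers are hypothetical.
    devices = 10_000
    for success_rate in (0.80, 0.99, 0.999):
        failures = devices * (1 - success_rate)
        print(f"{success_rate:.1%} success -> {failures:,.0f} failures per {devices:,} devices")

At 80%, that’s 2,000 failed devices out of 10,000. The headline number and the lawsuit number are the same statistic, read from opposite ends.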

• Sample Size vs. Population arguments – the sample should always be representative of the population. If I say “80% of the people in my survey like chocolate ice-cream, and 20% like vanilla”, and I was ordering for a party of 500 people, I might end up with a lot of extra ice cream (or not enough) once I told you that my survey covered five people. The choice of “how many people is enough” is answered here, and sketched below.
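
As a rough sketch of how “how many people is enough” gets answered, here’s the standard sample-size formula for a proportion at 95% confidence. The party size and target margins below are assumptions for illustration, not survey design advice:

    import math

    def sample_size(moe, population=None, p=0.5, z=1.96):
        """People needed to estimate a proportion within +/- moe at 95% confidence.

        p=0.5 is the worst case; 'population' applies the finite-population
        correction when you're sampling a small group (like one party).
        """
        n = (z ** 2) * p * (1 - p) / moe ** 2
        if population:
            n = n / (1 + (n - 1) / population)  # finite-population correction
        return math.ceil(n)

    print(sample_size(0.035))                 # ~784 people for a 3.5% margin
    print(sample_size(0.10, population=500))  # ~81 guests for a 10% margin at a 500-person party

A five-person survey, run through the same formula in reverse, carries a margin of error around plus or minus 44%. Order the vanilla too.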

• Beware of any “independent survey” in which the “independent” isn’t named in the fine print

If someone mentions the term “independent survey” and doesn’t mention which organization carried it out, about the ONLY thing you can be relatively sure of is that the person who cited the statistics didn’t do the survey themselves.

However, they might’ve asked their family. Or neighbors. Or similarly “independent” thinkers. This could be a good thing or a bad thing – but without any other data to the contrary, it certainly points at a potential for statistical bias.

• Beware of any “survey” that doesn’t mention they are “independent”

If it isn’t independent, chances are they did the research themselves. As the converse of the case above, this may not be a bad thing, provided they can show the sample size for the intended population, as well as the other metrics described below…

• Margin of Error – with any measurement, there’s the chance of error. How the margin of error is determined varies a great deal based on what you’re measuring, but it’s a safe bet that any solid, neutral, honest attempt to understand a problem with data will possess this metric.

Here’s a good example why:

Data Miner #1 reports the following survey:

1000 people surveyed, what’s better – chocolate or vanilla ice-cream?
50.1 % chocolate
49.9 % vanilla
3.5% MoE (“Margin of Error”. I use the MoE term because it’s often-used shorthand.)

Now with a close chocolate-versus-vanilla race like that, it’s really important to pay attention to the margin of error, because if you didn’t, you’d be sure that the majority wanted chocolate.

Here’s another sample of the same data, measured by Data Miner #2:

1000 people surveyed
50.5% vanilla
49.5% chocolate
3.5% MoE

Using Data Miner #1 and #2’s data TOGETHER, however…

2000 people surveyed
50.1% vanilla
49.7% chocolate
2.5% MoE

Also note that the combined MoE isn’t 1.75% (half of 3.5%). Margin of error shrinks with the square root of the sample size, so doubling the sample only buys you 3.5%/√2 ≈ 2.5%. Nor do the numbers add up to 100%. We don’t live in a perfect world and neither do our statistics. Perhaps 1 or 2 respondents had unreadable/illegible handwriting. This stuff happens.
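
If you want to see that square-root behavior for yourself, here’s a minimal sketch of the margin-of-error formula for a proportion (I’m assuming the usual 95% confidence level and worst-case p = 0.5; the 3.5% in the example above would correspond to a slightly higher confidence level, but the ratio is what matters):

    import math

    def moe(n, p=0.5, z=1.96):
        """Margin of error for a proportion at 95% confidence."""
        return z * math.sqrt(p * (1 - p) / n)

    print(f"n=1000: +/- {moe(1000):.1%}")  # about 3.1%
    print(f"n=2000: +/- {moe(2000):.1%}")  # about 2.2%: that's 3.1%/sqrt(2), NOT half

To cut a margin of error in half, you need four times the people, not twice as many.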

• Demographics and Correlation – yet another ice cream survey example:

Chocolate or vanilla?
10000 people surveyed
51.2% like chocolate
48.7% like vanilla
MoE 0.01%

Cut and dried now, right? MoE is small, good sample size.
Chocolate wins, film at 11.

But WAIT! What if I produced the following independent survey of…

10000 people surveyed
49.7% like vanilla
50.1% like chocolate
MoE 0.05%
Survey conducted in Alameda, CA

VANILLA?
Nope, not vanilla either.

In this case, remember the context:

Are you a Californian? No?

Then you would probably think “who cares what Alamedans want for ice-cream?”

And you’d be justified in thinking so.

The first survey didn’t mention its origin. So it’s fairly useless when applying it to a particular case.

Are you an Alamedan that likes vanilla?

(I pity you. And MY pity for YOUR pathetic vanilla ice-cream needs is a good example of statistical bias. I have it, you have it, we all have it. More on that subject later.)

You sad vanilla ice-cream lovers ready for the TRUTH?

Against my better gut instinct, and in the interest of impartiality toward solving the age-old ice-cream question once and for all, I’m forced to tell you that you’re still in luck: vanilla has a trick or two up its sleeve.

10000 people surveyed
50.1% like vanilla
49.7% like chocolate
MoE 0.03%
Survey conducted in Alameda, CA

Demographics
10% under 18
15% 18-25
20% 26-35
5% 36-44
15% 45-55
33% over 55
2% did not state

What’s wrong here?
On first guess, it’s those Alamedan seniors and their vanilla-loving ways!

You fiends!!

But even as I’ve presented the data, it could be that the youngsters REALLY like vanilla and the seniors, their chocolate. You still can’t tell. You can GUESS based on the uneven demographic histogram, but there isn’t a direct line from one table to the other. In short, you STILL don’t conclusively know. The line from the demographics to the survey data is called correlation.
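
Here’s a sketch of what drawing that line actually looks like. The per-group counts are invented (the survey above never gave them to us, which is exactly the problem), and a chi-square test is one standard way to check whether age group and flavor preference are related:

    # Hypothetical per-age-group counts: (chocolate, vanilla). Invented data.
    from scipy.stats import chi2_contingency

    table = [
        (620, 380),    # under 18
        (2100, 1900),  # 18-35
        (1450, 1850),  # over 55
    ]

    chi2, p_value, dof, expected = chi2_contingency(table)
    print(f"chi-square = {chi2:.1f}, p = {p_value:.4g}")
    # A tiny p-value says preference really does vary by age group; a large one
    # says you still can't tell. Either way, you need the per-group counts.
    # The two summary tables alone can never settle it.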

Other questions on demographics and correlation to ask of the above example:

Were the folks in the survey actual Alamedans, or were they just visiting? (And if they were visiting, does that make their opinion valid or invalid?)

Splitting up EACH demographic, what was their particular choice of ice cream? (Answering this would establish a correlation between the two tables.)

Where in Alameda was the study done? Bayport? Park Street? South Shore?? (Maybe there’s something in the ground water at Bayport that makes folks like chocolate ice-cream more.)

Who the heck buys the ice cream? Young kids with allowances, or old folks buying for themselves and the grandkids? (If I’m working for Tuckers Ice Cream on Park Street, Alameda, CA, I’m going to target the folks that live near Park Street and the folks with the cash.)

… and so on…

Now here’s the real catch.

WHEN PEOPLE DO ANY OR ALL OF THESE THINGS, THEY AREN’T LYING, EVEN WHEN THE RESULTS SEEM CONTRADICTORY. Just because something is not necessarily TRUE does not mean it is completely FALSE either.

Lying is contradicting a FACT. A more appropriate term would be “hedging”.

Repeat after me: Statistics aren’t facts. They are a GUESS. And whenever we hear statistics, we should always ask: how good a guess is this for the problem at hand? Does it frame the problem correctly?

And if it’s a good guess, what action should be taken to solve the problem?

Lastly, remember this: statistics that argue against doing something can be useful, but they are limited. They may be a good metric for deciding not to do THAT SPECIFIC THING, but they aren’t a valid statistical argument for doing NOTHING.
