HR 101: Statistics

It’s Seiden, folks. ‘Nuff said.

If you were teaching a ‘Finance Basics for HR’ course, what type of information would you share? Statistics: the most important class most people blow off. Right out of the gate, we need to establish two things: first, math is not hard. Your whole “I’m not good at math” thing is an illusion.

Second, even with stats, you cannot predict the future. The language of statistics is, “this is likely to happen,” or “this group will probably like it.” That can be frustrating in a world where people want you to answer questions with clipped absolutes such as “Definitely” and “For sure.” Once you accept that you actually use math every day and are good at it, and make peace with a world where things are “likely” and not “certain,” you’re halfway home. To get the rest of the way, dive into these topics: how to get accurate information about an organization (or event) without having to survey everyone; how to create feedback forms that mean something; how to measure training’s impact on the bottom line using actual performance data… and while we’re at it, how to measure the impact of hiring practices, leadership styles, pay rates, and reward systems, too. That’s probably a good start. What’s critical to know? There are a few statistics concepts that are incredibly, incredibly useful. Here are five worth knowing right now:

  1. Regression to the mean. Simply put: extreme results, good or bad, tend to be followed by more ordinary ones… all by themselves. You can nudge average performance up (or down), but you can’t suddenly make everyone superstars. (If you do, expect the crash.) An important implication: when employees are split into “high potential” and “needs development” cohorts and then given training specific to those groups, you are likely to be disappointed by the “high potential” results and overly pleased with the “needs development” results, because each group’s measured performance naturally drifts back toward the average. That drift happens independently of the training you provide, so you need to ask yourself, “Did my training program provide additional benefit beyond what would have happened anyway?” (The short simulation after this list shows the drift with no training at all.)
  2. Margin of error. A measure of how “real” your information is. What’s the difference between a “4.2” and a “4.8” on a 360° survey? Knowing the survey’s margin of error will tell you whether there is a difference at all… or whether, statistically, those numbers are the same. Treating differences as meaningful when they are not is a common mistake, and one I see frequently in HR in the analysis of 360° data. (There’s a back-of-the-envelope check after this list.)
  3. Likert Scales. Measurement scales are the way in which information is put into a format that allows for apples-to-apples comparisons. HR in particular seems to love its Likert scales, so let’s understand just a few things here: (1) the way you measure things impacts the results. One of the key problems with Likert scales is that people tend not to answer each question in a survey independently, but rather they look at the survey as a single whole, resulting in responses that cluster around one end of the scale or another. (2) Another consideration with Likert scales is that you’ve got to allow people to say “I don’t know.” A scale that does not provide a “does not apply” or “neither” option forces people into a response that they don’t truly believe… and that gives bad data. For instance, if you asked me, “I enjoy frogs’ legs: Strongly agree / Agree / Disagree / Strongly Disagree,” without some sort of “n/a” option, I’d probably tell you that I don’t like them. Which is a lie. I’ve never had them. Now if you were an HR person thinking of adding a French dining option to the company’s reward program, you might think it’s a bad idea based on my response… when in reality, you’d be nixing an option for no good reason at all. (3) A third issue with Likert scales is variable “interrater reliability,” which is a fancy way of saying that not everyone fills them out the same way. Some people are just tougher graders. This doesn’t impact the objective quality of the work being measured, but it can cause problems—like when I was asked to coach a manager who had consistently rated her top performers more harshly than her peers had rated them on a 360° survey. HR thought she was being too hard and needed help managing her expectations. I found that she was quite adept and was tough but fair. I turned down the coaching assignment and suggested to HR that instead, they train her peer group to follow her lead. HR refused, and a year later, when her group was the only one to hit all their performance targets, I had a nice chuckle.
  4. The conjunction rule. Two things together are less likely to happen than either one alone; the probability of “A and B” can never be higher than the probability of A by itself. “He was valedictorian and he is smart” is a less likely statement than “He is smart.” It’s also less likely than “He was valedictorian,” even though your brain will have a tough time believing it; it’s a quirk of human nature to jump to conclusions, even when the conclusion is statistically unlikely. For instance: “The resume says she went to Harvard, and smart people do great here!” As much as you want to believe that “went to Harvard” is synonymous with “smart,” you really can’t stack assumptions like that. Maybe she has a photographic memory, or just really loves to study, or is only smart about the specific subjects she studied, or is really rich, or is a great cheater… HR often gets hiring decisions wrong because of the desire to draw conclusions that violate the conjunction rule.
  5. Correlation vs. Causation. Sometimes two separate trends move together; they are related. That’s correlation. Causation, on the other hand, means that movement in one trend is directly responsible for movement in the other. These are different ideas! The distinction is so important that “correlation does not imply causation” even has its own entry on Wikipedia. In high school, the grim example we used to highlight this fact was: drowning deaths tend to be correlated with ice cream sales. Does this mean that ice cream and water are a dangerous mix? No, of course not; it just means that both things happen more in summer. (The toy example after this list shows how a shared cause, hot weather, manufactures exactly that kind of correlation.) At work, surveys often bake causation into the questions themselves, as with the Kirkpatrick Level I forms we often use to rate training sessions. Those forms measure learner satisfaction, which we assume is correlated with, and caused by, effective training. Not necessarily so. Think about asking children to rate their dinners: vegetable-laden meals would certainly lag fast food in the ratings. Would we then nix veggies for burgers and fries? No! We know the veggies are better for them, so we tell them to be quiet and finish their suppers, period. Be very, very careful about the questions you ask, and about the relationship you assume between the questions you ask (“Did you like the program?”) and what you really want to know (“Should we continue this program?”).
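To see how much “improvement” regression to the mean can produce all by itself, here’s a minimal simulation sketch (my own illustration, with made-up numbers, not from any real HR dataset): each employee’s measured score is a stable underlying ability plus noise, and the “high potential” cohort is picked from one noisy measurement.

```python
import random

random.seed(42)

# Each employee has a stable underlying ability; each year's measured score
# is that ability plus noise (lucky projects, harsh raters, and so on).
N = 10_000
true_ability = [random.gauss(50, 10) for _ in range(N)]
year1 = [a + random.gauss(0, 10) for a in true_ability]
year2 = [a + random.gauss(0, 10) for a in true_ability]  # note: no training anywhere

# "High potential" cohort: the top 10% of the year-1 measurement.
cutoff = sorted(year1)[int(0.9 * N)]
hipo = [i for i in range(N) if year1[i] >= cutoff]

avg1 = sum(year1[i] for i in hipo) / len(hipo)
avg2 = sum(year2[i] for i in hipo) / len(hipo)
print(f"High-potential cohort, year 1 average: {avg1:.1f}")
print(f"Same people, year 2 average (no intervention): {avg2:.1f}")
# Year 2 lands noticeably closer to the overall mean of ~50, with no training at all.
```

Run it and the “high potentials” look worse the second year despite nothing changing; any training rolled out in between would get the blame (or, for the “needs development” group, the credit).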
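And here is the back-of-the-envelope version of the 4.2-versus-4.8 question. The rater count and rating spread below are assumptions invented for illustration; plug in your own survey’s numbers.

```python
import math

def margin_of_error(std_dev, n, z=1.96):
    """Approximate 95% margin of error for a sample mean."""
    return z * std_dev / math.sqrt(n)

raters = 8     # assumed: a 360 often has only a handful of raters
spread = 0.9   # assumed: standard deviation of individual ratings on a 5-point scale

moe = margin_of_error(spread, raters)
print(f"Each average score is really 'score +/- {moe:.2f}'")

score_a, score_b = 4.2, 4.8
# Rough check: if the two intervals overlap, treat the scores as the same.
if abs(score_a - score_b) <= 2 * moe:
    print("The 4.2 and the 4.8 are statistically indistinguishable here.")
else:
    print("The difference looks real.")
```

With only eight raters, the margin of error swamps a 0.6-point gap; the “difference” between those two managers may be nothing but noise.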
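Finally, a toy version of the ice-cream-and-drownings example, with numbers invented purely for illustration: summer temperature drives both series, so they correlate strongly even though neither causes the other.

```python
import math
import random

random.seed(1)

# Daily temperature follows a seasonal curve; both series below are driven by it.
days = 365
temp = [random.gauss(15 + 10 * math.sin(2 * math.pi * d / 365), 3) for d in range(days)]
ice_cream_sales = [max(0.0, 5 * t + random.gauss(0, 20)) for t in temp]
drowning_deaths = [max(0.0, 0.3 * t + random.gauss(0, 2)) for t in temp]

def correlation(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(f"corr(ice cream, drownings) = {correlation(ice_cream_sales, drowning_deaths):.2f}")
# Clearly positive, yet neither variable appears in the other's formula:
# the shared cause (summer heat) manufactures the correlation.
```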

How does this make someone a better HR person? Better ability to measure the impact of “people” on the bottom line. Better hiring models. Better team structures. Better promotion practices. More meaningful data for fellow executives. Less grunt work. Better ability to design and roll out impactful development programs. Simpler, more effective use of communications technology. Cheaper solutions to people problems. (Stop me anytime…)
