Many of the people we talk with about measuring the impact of enterprise learning get that “deer in the headlights” look. To be fair, some people in the discussion may have had that faraway look because they were planning their measurement campaigns, but we suspect many didn’t.
Discomfort in measuring the impact of learning comes from misunderstandings and myths of measurement. In How to Measure Anything, Douglas W. Hubbard explains three misconceptions:
- Concept of measurement. We don’t understand what “measurement” means.
- Object of measurement. We haven’t defined what we are measuring. Ambiguous language gets in the way.
- Methods of measurement. Measurement methods are a mystery to us.[1]
Once we understand those concepts, we realize that measurements are always approximate and easier to do if we define them well. We can also know that much of what we thought immeasurable can be measured and may already have been measured.[2]
What is Measurement?
When you ask people to define measurement, most will respond with a concept of precision or an exact calculation. But in the practical world of science, mathematics, and business, measurements are approximations. If science depended on exact precision, we would not have been able to land men on the moon. If business leaders needed exact precision, they would never make a decision.
We can become lulled into a perception of pinpoint accuracy when we can count attendees at a training class and collect a smiley sheet from each person. But suppose we survey 300 managers to see if a learning intervention improved performance but only 120 respond. You don’t have a response from every manager, so how could your measurement be accurate?
What is important is not accuracy, but the reduction of uncertainty.
How confident are you that a particular response, chosen by 60% (72 of the 120) of the respondents, reflects the opinions of the entire population? If it is a yes/no question, your gut estimate might be no better than a 50/50 toss-up.
Using a confidence interval calculation, we can be 80% confident that the true proportion lies within 5.73 points of the observed 60%, that is, between 54.27% and 65.73%. Applied to the full population of 300 managers, that suggests roughly 163 to 197 of them would choose the same answer. How does that match up to your guess?
Would you be more certain or less certain you want to continue that portion of the program? Would the estimate help you decide?
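The confidence interval above can be reproduced in a few lines. This is a minimal sketch using the normal approximation for a proportion; the z-score for 80% two-sided confidence and the survey numbers are taken from the example above.

```python
import math

# 72 of the 120 responding managers (survey of 300) chose the answer.
n_respondents = 120
p_hat = 72 / n_respondents           # observed proportion: 0.60

# z-score for an 80% two-sided confidence level (normal approximation).
z_80 = 1.2816

# Standard error of a proportion, then the margin of error.
se = math.sqrt(p_hat * (1 - p_hat) / n_respondents)
margin = z_80 * se                   # about 0.0573, i.e. 5.73 points

low, high = p_hat - margin, p_hat + margin
print(f"80% CI: {low:.2%} to {high:.2%}")   # roughly 54.27% to 65.73%
```

Note that this sketch ignores the finite-population correction; with 120 responses out of only 300 managers, applying it would tighten the interval somewhat.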
What Do You Need to Measure?
If you haven’t figured out what you want to measure, it will seem impossible. Let’s suppose we are planning an intervention to improve development coaching. You might want to think about monitoring your development plans to see what activity takes place, but does that tell you what you need to know about quality or results?
Begin with the end in mind. Ask what the outcomes should be and why they matter. For example, a well-designed survey of the people being coached might tell you what impact the coaching is having on them, or you could fold those questions into your existing pulse surveys. And you don’t need to interview everyone to know if your program is working.
Measurement Methods
When we talk about measurement, you might think we are about to discuss how you gather all the data you can and analyze it using a sophisticated analytics program. There might be times you want to do that, but most measurements you need do not require that level of time and expense.
Direct measurements are useful when they are easy to get and when they answer your burning questions. However, much of what we want to know can only be learned through inference and indirect deductions.
Unlike the exercises in our college statistics classes, we are not trying to get a “statistically significant” outcome; we seek to increase certainty in making decisions. We can make useful inferences even from data that doesn’t meet that high standard of statistical rigor.
For instance, a tiny sample can tell us a lot about a large population. Observation and reason can inform us about rare events that don’t give us a lot of data. We can measure the value of a benefits program by how much people are willing to pay for it and their subjective feelings about it.
One of the most instructive examples of how small samples can inform decisions is based on the Rule of Five. If you want to know whether you should consider more telecommuting, you will want to know commute times for employees. Measuring commute times for 10,000 employees would be time-consuming and expensive. We can ask five people at random to get a good idea of what the commute times are for the entire population.[3]
Here’s how it works: the median of any group of values is the middle value, so there is a 50% chance that any randomly chosen value falls above the median and a 50% chance it falls below. Sampling five values is like flipping a coin five times. The chance of all five landing above the median (five heads in a row) is 1 in 32, or 3.125%, and the same goes for all five landing below it (five tails). So the chance that the population median falls somewhere between the smallest and largest of your five values is 100% - (3.125% × 2) = 93.75%.
So, the Rule of Five cannot give you the exact median, but it gives you a range that contains the median with 93.75% confidence, which is much better than not knowing.
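The 93.75% figure is easy to check by simulation. This is a hedged sketch: the population of 10,000 commute times is fabricated for illustration, but the capture rate it measures depends only on the Rule of Five logic, not on the particular distribution.

```python
import random

random.seed(42)

# Hypothetical population of 10,000 commute times in minutes (assumption
# for illustration; any distribution would do).
population = [random.uniform(5, 120) for _ in range(10_000)]
true_median = sorted(population)[len(population) // 2]

# Theoretical capture probability: 1 - 2 * (1/2)^5 = 93.75%.
theoretical = 1 - 2 * 0.5 ** 5

# Repeatedly sample five commute times and check how often the
# population median lands between the sample's min and max.
trials = 20_000
hits = sum(
    min(s) <= true_median <= max(s)
    for s in (random.sample(population, 5) for _ in range(trials))
)

print(f"Theory: {theoretical:.2%}, simulated: {hits / trials:.1%}")
```

Across many trials the simulated capture rate settles close to the theoretical 93.75%, regardless of how the commute times are distributed.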
You Are Closer Than You Think
With a little bit of study, a good data analyst, and some help with asking the right questions, you can get much more value out of measurement than smiley sheets and attendance stats. You will find you don’t need to measure everything, that you have more information than you know, and don’t need as much information as you thought you did.
And here’s the bonus: think about how cool it will be when you talk to your CFO and CEO about funding your new programs and can cite the probability of success within a stated confidence level.
References:
1. Hubbard, Douglas W. How to Measure Anything: Finding the Value of “Intangibles” in Business, 3rd ed., p. 29. John Wiley & Sons, Hoboken, New Jersey, 2010.
2. Hubbard, p. 29.
3. Hubbard, p. 43.
PhenomeCloud is a comprehensive technology solutions provider committed to empowering businesses to overcome challenges, enhance their workforce capabilities, and achieve superior outcomes.