Why How to Measure Anything Hubbard Still Breaks Managers’ Brains

Most business decisions are based on a coin flip disguised as a spreadsheet. You've seen it. A room full of smart people staring at a "Risk Matrix" where "High" multiplied by "Yellow" somehow equals a budget approval. It’s nonsense.

Douglas Hubbard hated this. He still does.

When people search for how to measure anything Hubbard, they’re usually looking for a magic formula. They want a ruler that measures "customer satisfaction" or "brand equity" as easily as a gallon of milk. But Hubbard’s Applied Information Economics (AIE) isn't a ruler. It’s a way of thinking that assumes you know more than you think you do, but less than you hope.

It’s about reducing uncertainty. That's it.

Measurement isn’t about being "right." It’s about being less wrong than you were five minutes ago. If you can move the needle from "I have no idea" to "I’m 70% sure it’s between X and Y," you’ve already won.

The Concept of Measurement (It’s Not What You Think)

Hubbard defines measurement as "a quantitatively expressed reduction of uncertainty based on one or more observations."

Read that again.

Notice he didn't say "finding the exact number." In the corporate world, we’ve been trained to think that if we can't get a precise decimal point, it’s "intangible." Hubbard calls BS on that. If it matters at all, it’s detectable. If it’s detectable, it can be measured.

Think about it. If "employee morale" is real, it must have an impact. If it has an impact, that impact is observable. If you can observe it, you can count it.

The problem is that most people are terrified of being wrong. So, they do nothing. Or they use "expert intuition," which is usually just a fancy word for a guess influenced by what the person ate for lunch. Hubbard’s whole philosophy is built on the idea that a little bit of data—even very noisy, messy data—is infinitely better than no data at all.

Why You’re Better at Guessing Than You Think

Hubbard is famous for the "Calibration" exercise. It’s a bit of a trip.

Most people are overconfident. Ask someone for a range they're 90% sure contains the population of Tokyo, and their range will miss far more often than 10% of the time. They weren't actually 90% sure; they were just guessing with confidence.

In How to Measure Anything, Hubbard shows that humans can be trained to be "calibrated" estimators. This means when you say you’re 90% sure of something, you actually get it right 90% of the time. Once a team is calibrated, you can use their collective "guesses" as valid data points for a Monte Carlo simulation.
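Checking whether someone is calibrated is just counting. Here's a minimal sketch of scoring a calibration test, using invented question data rather than Hubbard's actual test set:

```python
# Made-up data for illustration: each tuple is (low, high, truth), where
# (low, high) is the estimator's stated 90% confidence interval.
answers = [
    (20, 40, 37),
    (1900, 1950, 1969),
    (5, 15, 11),
    (100, 300, 450),
    (0.5, 2.0, 1.3),
    (30, 60, 75),
    (10, 20, 12),
    (1000, 5000, 8848),
    (200, 400, 330),
    (50, 90, 70),
]

hits = sum(low <= truth <= high for low, high, truth in answers)
hit_rate = hits / len(answers)

# A calibrated estimator lands near 90%. The 60% here is the typical
# overconfident result: the intervals were drawn too narrow.
print(f"interval hit rate: {hit_rate:.0%}")
```

Calibration training is just running drills like this until your stated 90% intervals actually capture the truth about 90% of the time.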

It sounds like voodoo. It’s actually math.

The Rule of Five: Why You Don't Need Much Data

This is the part that usually makes data scientists twitch. Hubbard talks about the Rule of Five.

If you have a massive, unknown population and you pick five samples at random, there is a 93.75% chance that the median of the entire population falls between the smallest and largest values in your sample of five.

Only five.

You don't need a survey of 10,000 people to get a "good enough" sense of what's happening. If you’re trying to figure out how long a new software implementation takes and you ask five project managers who’ve done it before, their range is already a massive reduction in uncertainty compared to "we don't know."
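The 93.75% figure isn't mystical. Each random sample lands above or below the population median with probability 1/2, so the median escapes your sample's range only when all five land on the same side: 2 × (1/2)^5 = 6.25%. A quick Python check, simulating against a deliberately skewed population:

```python
import random

# Analytic version: median escapes the range only if all five samples
# fall on the same side of it.
analytic = 1 - 2 * (0.5 ** 5)  # 0.9375

# Empirical check against a skewed (lognormal) population.
random.seed(42)
population = [random.lognormvariate(0, 1) for _ in range(100_000)]
median = sorted(population)[len(population) // 2]

trials = 20_000
hits = 0
for _ in range(trials):
    sample = random.sample(population, 5)
    if min(sample) <= median <= max(sample):
        hits += 1
empirical = hits / trials

print(f"analytic: {analytic:.4f}, empirical: {empirical:.4f}")
```

Note the rule bounds the median, not the mean, and it says nothing about how wide that range is. But "a range that probably contains the middle" beats "no idea."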

We often over-complicate measurement. We think we need "Big Data" when "Small Data" would solve the problem.

Hubbard’s work highlights that the "value of information" is highest when you know the least. The first few bits of data you collect tell you more than the next thousand bits ever will.

The Myth of the Intangible

"How do you measure the value of a brand?"

"How do you measure the risk of a cyberattack?"

"How do you measure the benefit of a happy workplace?"

People call these things "intangibles" because they want to avoid the accountability of a hard number. Hubbard argues that if you can’t measure it, you don't understand the problem. And if you don't understand the problem, you shouldn't be spending money on it.

Take "Information Security." Most companies buy firewalls because they feel like they should. But what's the probability of a breach? What's the average cost per lost record? Hubbard's method forces you to break these "intangibles" down into their component parts:

  1. What is the event?
  2. How often does it happen (Frequency)?
  3. What is the impact when it happens (Loss)?

Suddenly, "Risk" isn't a red bubble on a chart. It’s a curve showing a 5% chance of losing $10 million this year. That is something a CFO can actually use.
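Here's what that decomposition looks like as code. This is a minimal sketch with made-up inputs, not figures from Hubbard's book: an assumed annual breach probability, and an assumed 90% range for the loss given a breach, mapped onto a lognormal distribution:

```python
import math
import random

# Assumed inputs for illustration:
p_breach = 0.10          # annual probability of a breach
low, high = 1e6, 25e6    # calibrated 90% range for loss, given a breach

# Map the 90% range onto a lognormal: in log space, the interval
# endpoints sit at mu +/- 1.645 sigma.
mu = (math.log(low) + math.log(high)) / 2
sigma = (math.log(high) - math.log(low)) / (2 * 1.645)

random.seed(0)
trials = 100_000
big_losses = 0
for _ in range(trials):
    if random.random() < p_breach:
        loss = random.lognormvariate(mu, sigma)
        if loss >= 10e6:
            big_losses += 1

rate = big_losses / trials
print(f"P(loss >= $10M this year) ~ {rate:.1%}")
```

Sweep the threshold from $1M to $50M and you get the full loss-exceedance curve: the "curve a CFO can actually use."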

The Value of Information (VoI)

One of the most practical ideas Hubbard teaches in How to Measure Anything is the Expected Value of Information.

Most companies measure the wrong things. They measure what is easy to measure, not what is important. Hubbard uses a formula to calculate how much a piece of information is actually worth.

If a measurement isn't going to change your decision, it's worth zero.

Literally zero.

Why spend $50,000 on a market study if you're going to launch the product anyway? Hubbard’s approach identifies the "Information Gap"—the variables that have the most uncertainty and the biggest impact on the outcome. You spend your measurement budget there and nowhere else.
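A toy version of that calculation, with invented numbers for a go/no-go launch decision. The Expected Value of Perfect Information (EVPI) is the ceiling on what any study of this decision could be worth:

```python
# Invented numbers for illustration:
p_success = 0.6            # calibrated probability the launch works
payoff_success = 500_000   # net gain if it works
payoff_failure = -300_000  # net loss if it flops

# Best expected value under current uncertainty:
ev_launch = p_success * payoff_success + (1 - p_success) * payoff_failure
ev_skip = 0
best_now = max(ev_launch, ev_skip)

# With perfect information, you'd launch only in the success branch:
ev_perfect = p_success * payoff_success + (1 - p_success) * 0

# EVPI = what you lose, on average, by deciding blind.
evpi = ev_perfect - best_now
print(f"EV now: ${best_now:,.0f}  EVPI: ${evpi:,.0f}")
```

With these inputs, EVPI is $120,000, so a $50,000 study could pay for itself. If launching were already a sure win, EVPI would be zero and the study would be worthless, exactly as the text says.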

Applying the Hubbard Method in the Real World

Suppose you're looking at a new AI tool for your team. It costs $200,000.

Instead of arguing in a conference room about whether AI is "the future," you start with a basic model.

  • Step 1: Define the Decision. Are we buying this tool or not?
  • Step 2: Model the Current State. How much time do we spend on these tasks now? Give me a range. "10 to 20 hours a week."
  • Step 3: Model the Impact. How much will the tool reduce that? "Between 10% and 50%."
  • Step 4: Run the Math. Use a Monte Carlo simulation (basically running thousands of "what if" scenarios) to see the range of possible ROIs.

If 95% of the scenarios show you making money, buy the tool. If the scenarios are all over the place, find the variable that’s causing the chaos. Is it the adoption rate? Great. Now you go measure that specifically.
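The four steps above can be sketched as a tiny Monte Carlo in plain Python. The 10-to-20-hour and 10-to-50% ranges come from the text; the hourly rate, team size, and uniform distributions are invented for illustration (Hubbard's process would use calibrated 90% intervals and richer distributions):

```python
import random

TOOL_COST = 200_000
HOURLY_RATE = 75   # assumed loaded cost per person-hour
TEAM_SIZE = 12     # assumed headcount doing these tasks
WEEKS = 50

random.seed(1)
trials = 10_000
profitable = 0
for _ in range(trials):
    hours_per_week = random.uniform(10, 20)   # Step 2: per-person current state
    reduction = random.uniform(0.10, 0.50)    # Step 3: tool impact
    annual_savings = hours_per_week * reduction * HOURLY_RATE * WEEKS * TEAM_SIZE
    if annual_savings > TOOL_COST:             # Step 4: does it beat the cost?
        profitable += 1

rate = profitable / trials
print(f"Scenarios where the tool pays for itself in year one: {rate:.0%}")
```

With these made-up inputs the answer lands near a coin flip, which is itself useful: it tells you the decision hinges on the impact range, so that's the variable to go measure.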

It turns business from a game of "who has the loudest opinion" into a legitimate science.

The Critics and the Reality

Not everyone loves Hubbard. Some statisticians think his reliance on subjective "calibrated estimates" is too soft. They want "hard" data.

But Hubbard’s retort is usually: "What's your alternative?"

If the alternative is a gut feeling or a flawed "weighted score" system, Hubbard’s math wins every time. He’s not saying calibrated estimates are perfect. He’s saying they are measurably better than the methods currently used by almost every Fortune 500 company.

The reality is that we live in a world of "Unknown Unknowns." You can't eliminate risk. You can't predict the future with 100% certainty.

But you can be less surprised.

Actionable Steps for Measuring Your "Unmeasurable" Problem

If you’re sitting on a project and someone says, "We can't measure the benefit of this," here is how you use the Hubbard approach to prove them wrong.

Identify the Decision
Stop asking "How do we measure this?" and start asking "What decision will this measurement support?" If there’s no decision, stop. You’re wasting time. If there is a decision (e.g., "Should we cancel this project?"), then the measurement has a purpose.

Decompose the Problem
If "Customer Loyalty" feels too big, break it down. Is it "Probability of renewal"? Is it "Number of referrals per year"? Is it "Price premium we can charge compared to competitors"? Smaller things are easier to count than big, vague concepts.

Check for the Rule of Five
Before you hire a consultant, just grab five data points. Look at five past projects. Talk to five customers. See if that range already narrows your uncertainty enough to make the decision. Often, it does.

Calibrate Your Team
Don't just ask for a "best guess." Ask for a range where they are 90% confident. If they say "Between 10 and 100," ask them why it couldn't be 5. Ask why it couldn't be 200. This "internal interrogation" helps move people away from overconfidence and toward a realistic range.

Use a Simple Monte Carlo Tool
You don't need a PhD in statistics. There are basic Excel add-ins and web tools (like Hubbard's own HDR Tool) that let you input your ranges and see the probability of various outcomes. It takes the "fear" out of the numbers because you see the whole spectrum of what might happen.

Measurement isn't a hurdle. It’s the light in a dark room. You might still trip over the furniture, but at least you’ll know why. Hubbard’s method is fundamentally about intellectual honesty—admitting what we don't know and being systematic about figuring it out.

Start by finding one "intangible" in your current workflow. Ask yourself: "If this thing improved, how exactly would I see that in the physical world?" Once you have the answer to that, you’ve already started measuring.