Everybody and their dog wants to be data-driven. There are many ways data can augment our decision-making and drive incredible results.

We are witnessing an unbelievable transformation in *every* sector, driven by advances in AI and analytics. Who doesn’t want in on the magic?

Yet for most of us, building these systems in-house is still out of reach: most organizations have not built the muscle or acquired the resources to drive such initiatives.

So, where *can* we begin our data-driven journey? Do we need analysts to give us golden insights? Do we need data scientists to build predictive models?

I’d like to propose that there’s a better place to get started, and it’s right under our noses. We can use this tool to increase the likelihood of choosing the right path every time we’re at a crossroads.

I bet you are already using this tool to drive many of your daily decisions.

This tool is highly under-rated and often misunderstood, and you may be leaving money on the table by not using it to its full potential.

I’m talking about **measurement**.

So… 🧐 What’s with all the theatre? How can I possibly be this excited about measurement?

In this three-part series, I will dissect how you can wield measurement as a weapon for fighting uncertainty.

First up — Let’s define what we’re even talking about!

## The Concept of Measurement

Why do we measure things in the first place?

I’ve heard all kinds of answers:

- To manage the work being done
- To provide transparency
- To answer stakeholder questions

There is one common thread here.

**Principle #1**

*The act of measurement reduces uncertainty, which in turn helps us make better decisions.*

This bears repeating. Measurement is not about “capturing” a single value. Even though your measurements will typically be a single number, that number is coupled with uncertainty about the underlying real-world value that is often ignored.

Let’s see the implications of this distinction.

Imagine that you can invest **$1M** in my new project. You want to know if you’re making a good investment.

**Scenario #1 — Maximum Uncertainty**

**Me:** *“After some measurement, I’m fairly certain that you will net –$1.5M to $1.5M over the life of the project.”*

In this case, **losing up to $1.5M** is just as likely as **gaining up to $1.5M**.

Is it rational to invest?

Of course not. **There is simply not enough certainty about the outcome.**

You tell me to bugger off and come back with a different offer. So I do:

**Scenario #2 — Reducing Uncertainty**

**Me:** *“If you pay me $50K, I can do more experiments and reduce that range by $1M.”*

Did I tell you the exact amount you would get back? No.

Did I say whether the range would shrink from the loss end, the gain end, or evenly towards the middle? No.

Should you spend the **$50K**?

Let’s say you take the offer.

If the likelihood of loss decreases (i.e. you net **–$0.5M** to **$1.5M**), the economics change and your expected return is now positive.

If the likelihood of gain decreases (i.e. you net **–$1.5M** to **$0.5M**), you can choose not to make the investment and potentially save yourself from losing the rest of your **$950K**.

And in the least helpful case, the range shrinks symmetrically to **–$1M** to **$1M**, which leaves you in the same predicament, but *at least* it places a tighter limit on how much money you can lose.

Paying for uncertainty reduction isn’t a hypothetical example. We always pay for it. Whether it’s metrics or market experiments, you pay to acquire information.

**Principle #2**

*By measuring something, you are paying for uncertainty reduction*.

Measurements do not have to be perfect. It has *never* been about **eliminating uncertainty**. It has always been about **reducing uncertainty**.

## How to Quantify Uncertainty

The ranges of uncertainty I previously gave you are one way of quantifying uncertainty, but were all those outcomes equally likely? Probably not.

The way we account for that is with a **probability distribution**: a function that describes how likely a random variable is to take each possible value.

Let’s revisit our toy problem to see what it looks like with probability distributions.

**Scenario #1 — Maximum Uncertainty (Distribution)**

**Me:** *“I’m 90% certain that you will net –$1.5M to $1.5M over the life of the project.”*

Figure 1 shows that **90%** of the possible outcomes fall between **–$1.5M** and **$1.5M**. This range of values is called the **90% confidence interval**. There is still a chance the outcome lands outside these bounds, but this is a lot closer to how uncertainty works in real life.

The function we’ve used here has a Normal distribution (the good ol’ Bell Curve), but it’s important to note that this is not the only distribution that can describe outcomes. Whatever distribution you use, the area under the curve must equal 1.
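As a quick sketch (assuming the Normal distribution described above, with its 90% interval spanning –$1.5M to $1.5M, and using `scipy` purely for illustration), we can back out the distribution’s parameters from the stated interval and verify both the 90% coverage and that the total area under the curve is 1:

```python
from scipy.stats import norm

# Assumed: a Normal distribution whose 90% confidence interval
# spans -$1.5M to $1.5M (Scenario #1). All values are in $M.
lo, hi = -1.5, 1.5
mu = (lo + hi) / 2                        # midpoint -> mean of $0
sigma = (hi - lo) / (2 * norm.ppf(0.95))  # 90% CI half-width = 1.645 sigma

dist = norm(mu, sigma)

# 90% of outcomes fall inside the stated interval...
print(round(dist.cdf(hi) - dist.cdf(lo), 2))  # -> 0.9
# ...and the total area under the curve is 1.
print(round(dist.cdf(float("inf")) - dist.cdf(float("-inf")), 2))  # -> 1.0
```

The same two checks apply to any distribution you substitute in, not just the Normal.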

This distribution tells us that **$0** is the most likely outcome, and outcomes farther from $0 become less likely.

**Scenario #2 — Reducing Uncertainty (Distribution)**

**Me:** *“If you pay me $50K, I can do more experiments and reduce that range by $1M.”*

Let’s assume the happy case, where the $1M uncertainty reduction is from the bottom end.

Here’s where it gets interesting.

Now we can calculate the chance of getting a positive return by taking the area under the distribution where the net return is greater than **0**. In this case, there’s an **80%** chance we’ll get a positive outcome!
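As a sketch of that calculation (assuming Scenario #2 is a Normal distribution whose 90% interval runs from –$0.5M to $1.5M), the chance of a positive return is just the area under the curve to the right of zero:

```python
from scipy.stats import norm

# Assumed: Scenario #2 as a Normal whose 90% interval
# runs from -$0.5M to $1.5M (values in $M).
mu = 0.5
sigma = (1.5 - (-0.5)) / (2 * norm.ppf(0.95))  # half-width = 1.645 sigma

# Area under the distribution where the net return exceeds $0.
p_positive = 1 - norm.cdf(0, mu, sigma)
print(round(p_positive, 2))  # -> 0.79, i.e. roughly the 80% quoted above
```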

We can also calculate the **expected net return** of the entire investment by multiplying each amount under the curve by its likelihood and summing it up.

The Normal distribution is symmetric, so the expected value will be in the middle of the curve, or **$0.5M**. This means we can expect to make **$0.5M** from this investment.

How much money might we lose on this investment?

Overall, there’s a 20% chance of losing money, and the more likely losses are closer to $0. Weighting each potential loss by its likelihood gives the **Expected Opportunity Loss (EOL)**, which here comes out to about **$70K**: the average loss per investment, counting the outcomes where you don’t lose anything as zero.

**EOL = Chance of Being Wrong ⨉ Cost of Being Wrong**

If you’d prefer a more mathematical definition, EOL is the integral, over the loss region, of the size of each loss times its probability density.
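A minimal sketch of that integral, with the same assumed Scenario #2 Normal distribution and numerical integration standing in for the closed-form math:

```python
from scipy.stats import norm
from scipy.integrate import quad

# Assumed: Scenario #2 as a Normal with a 90% interval of -$0.5M to $1.5M.
mu = 0.5
sigma = (1.5 - (-0.5)) / (2 * norm.ppf(0.95))

# EOL: integrate (size of loss) x (probability density) over the loss region.
eol, _ = quad(lambda x: -x * norm.pdf(x, mu, sigma), -10, 0)
print(f"${eol * 1000:.0f}K")  # -> $70K
```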

A rather interesting interpretation of this number is that this is the upper limit of how much you should be willing to pay for information to reduce your uncertainty further. In other words, this is the **value of perfect information**.

**Principle #3**

*The value of further measurement is the chance of being wrong times the cost of being wrong.*

This is where you have to ask yourself: **are you okay with this uncertainty?**

If you are, great! Make the investment.

If not, you can always reduce your uncertainty by making further measurements! But… I have some **bad news**…

As you try to get closer and closer to perfect certainty, the cost of uncertainty reduction (a.k.a. measurement) shoots *way* up (to infinity and beyond).

The **good news **is that it’s very cheap to reduce your uncertainty when you have lots of uncertainty.

Anyone who’s done an A/B test has experienced this first-hand. It takes a lot longer and costs a lot more (sending people down the less optimal path for the sake of measurement) to gain that last couple percentage points of confidence.

As shown in Figure 4, there’s a sweet spot where you can get lots of uncertainty reduction at a low relative cost. That’s the range you want to aim for and is the reason why small samples can be so unreasonably effective!
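A back-of-the-envelope sketch of why those early samples are so effective (assuming a simple mean estimate with unit standard deviation; the 1/√n factor comes from the standard error of the mean):

```python
import math

z, sigma = 1.645, 1.0  # 90% confidence, unit standard deviation (assumed)

def ci_width(n):
    """Width of a 90% confidence interval for a mean of n samples."""
    return 2 * z * sigma / math.sqrt(n)

for n in (4, 16, 64, 256):
    print(f"n={n:>3}  width={ci_width(n):.3f}")
# Each halving of the interval costs four times as many samples:
# the first handful of samples buys most of the uncertainty reduction.
```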

**Principle #4**

*When you know very little, even small measurements greatly reduce uncertainty. When you know more, uncertainty reduction costs more.*

Another key distinction: further measurement is only worth considering if your outcome straddles a decision threshold and you’re not okay with the risk (like our **$0** mark in **Figure 3**). If the range of our outcome were **$1M** to **$3M**, the decision would be trivial.

**Principle #5**

*You only need further measurements if your uncertainty straddles your decision boundary and you’re uncomfortable with the risk of a wrong decision.*

The key thing to take away is that uncertainty is a property of the decision-maker. The real world is going to do its own thing. Using measurements, you can reduce your uncertainty about outcomes. All this means is that you will become more confident about a smaller range of possible outcomes.

## In Conclusion

We’ve covered a ton about the concept of measurement:

- You can quantify your current uncertainty by using probability distributions.
- Uncertainty reduction as a result of measurements can be quantitatively expressed using probabilities over our outcome variable.
- You can quantify the value of additional measurements by leveraging Expected Opportunity Loss.
- The cost of measurement rises to infinity as we attempt to reach perfect certainty.
- You can greatly reduce uncertainty with a few samples when you know very little.

This is a great start, but we need to answer a few more questions before we can start talking about the nuts and bolts of measurement techniques.

I’m sure you’re wondering, “Is everything really measurable? What about the fuzzy intangible things?”

That’s the topic we’ll dive into in the next post, so stay tuned!