METRICS & ITERATION · LESSON 06.02 · intermediate

OKRs — deep dive.

Writing key results that actually measure the objective.

↳ tl;dr

The hard part of OKRs isn't the framework; it's writing key results that genuinely measure the objective. The most common failure: KRs that are tasks ("launch the redesign") instead of outcomes ("increase activation by 20%").

What makes a KR a real KR

  • Outcome, not activity. "Reduce time-to-value to under 5 minutes" not "ship onboarding redesign."
  • Measurable. A binary or numeric value an outsider could verify.
  • Time-bound. By the end of the quarter, this number = X.
  • Limited count. 3–5 KRs per objective. More dilutes focus.
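The checklist above can be sketched as data: a KR that is measurable and time-bound is just a number with a baseline, a target, and a deadline, so an outsider can compute progress without asking anyone. This is a minimal sketch with hypothetical names (`KeyResult`, `progress`), not a prescribed schema; note it handles "reduce X" KRs, where the target is below the baseline, the same way as "increase X" KRs.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class KeyResult:
    description: str   # outcome phrased as a number, not a task
    baseline: float    # value at quarter start
    target: float      # value to reach by the deadline
    current: float     # latest measurement
    deadline: date     # time-bound: end of quarter

def progress(kr: KeyResult) -> float:
    """Fraction of the baseline-to-target gap closed, clamped to [0, 1]."""
    span = kr.target - kr.baseline
    if span == 0:
        return 1.0
    return max(0.0, min((kr.current - kr.baseline) / span, 1.0))

kr = KeyResult("Reduce median time-to-value (minutes)",
               baseline=12.0, target=5.0, current=8.0,
               deadline=date(2026, 3, 31))
print(round(progress(kr), 2))  # → 0.57: 4 of the 7 minutes shaved off
```

Because the division is against the baseline-to-target span, the same formula works whether the metric should go up or down.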

The activity-vs-outcome test

Ask: "Could we hit this KR and the company still be no better off?" If yes, it's a task disguised as a KR. "Ship the new dashboard" could be hit while no one uses the dashboard. "30% of weekly-active users open the new dashboard" can't be hit without success.

The cascading-OKR antipattern

Some orgs cascade OKRs strictly: company OKRs → team OKRs → individual OKRs. This kills agility: by the time team OKRs are set, company OKRs are stale. Doerr's advice is "connect, don't cascade": team OKRs reference and ladder up to company OKRs but are set in parallel.

Quarterly cadence

Set at quarter start. A mid-quarter check-in grades progress (typically on a 0.0–1.0 scale). End-of-quarter scoring plus a retro on what worked and what didn't. The next quarter's OKRs reflect those lessons. Annual OKRs at the company level can pair with quarterly ones at the team level.
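The grading step above can be sketched in a few lines, assuming the common convention (used at Google, per Doerr) of grading each KR 0.0–1.0 and taking the mean as the objective's score. The function name `score_objective` is hypothetical; averaging is one convention, not the only one.

```python
def score_objective(kr_grades: list[float]) -> float:
    """Mean of per-KR grades, each on a 0.0-1.0 scale."""
    if not kr_grades:
        raise ValueError("an objective needs at least one KR")
    if any(g < 0.0 or g > 1.0 for g in kr_grades):
        raise ValueError("grades must be in [0.0, 1.0]")
    return sum(kr_grades) / len(kr_grades)

# End-of-quarter grades for an objective with three KRs.
print(round(score_objective([0.7, 1.0, 0.4]), 2))  # → 0.7
```

A 0.7 overall is usually read as healthy for ambitious OKRs; consistent 1.0s suggest the targets were sandbagged, which is a retro topic, not a win.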

// sources

Sources cited

  1. [01]
    Measure What Matters

    Doerr, J. · Portfolio · 2018 · retrieved 2026-05

    Doerr documents OKRs from Andy Grove → Intel → Google.

  2. [02]
    High Output Management

    Grove, A. · Vintage · 1983 · retrieved 2026-05

    Original documented OKR thinking, applied at Intel.
