The Hire product rests on two equally important facets: Technology and Service. The Technology that lets data flow from end users to Onfido and on to official service providers is backed by a Service that ensures data is complete and consistent, and helps end users succeed when they reach out for help. Neither facet can ensure a “scalable, repeatable and trustworthy” product by itself. Furthermore, the functions responsible for each of them can only get so far if they work in silos.

Working closely together on a daily basis isn’t always viable. Different functions may have inherently different ways of working, or face other barriers to constant collaboration — in our case, the Technology team was based in Portugal, whereas Service was in the UK. To prevent this gap from creating silos, we need to ensure that, despite their different functions and ways of working, everyone’s in the same boat — heading to the same port. OKRs, when used wisely, are a great tool to drive this.

At least one Objective (and respective Key Results) should be shared among all functions involved in your product line.

Let’s now go through what I found to be the key principles for these shared OKRs to succeed in ensuring we had the most positive impact possible. In this Part 1 you’ll see 3 principles focused on the OKRs themselves, and in an upcoming Part 2 I’ll share another 3 principles focused on the team/organisational context around these OKRs.

#1) Your (Cross-Functional) Objective Should Be Stable

That’s not to say you should never change your Objective. Stable doesn’t mean written in stone. It’s more like:

Your Objective should be such that it stands the test of time.

(Having the remaining 5 principles in mind will take you a long way on this one.)

Let’s assume you revisit OKRs on a quarterly basis. Revisiting is not (necessarily) revising. If you find yourself every quarter feeling like you have no other way than changing your Objective, then either your Objective is not well-framed or you don’t quite know what you’re working towards.

In the Hire product line, we kept a cross-functional Objective for three quarters in a row — Increase applicant completion. Applicant completion is technically measurable, but it doesn’t work as a Key Result: there is so much going into that number that it isn’t really useful as a driver for improvement (as a Key Result should be). Let’s look at the main flow in this product.

Applicant completion is key for everyone: us, our client, and the user being verified. The main risk of drop-off is at the “User fills in form” step — either the first time we send it, or when we send it again (reopen) to fix some issue with the information provided. We can thus focus our efforts on this step and measure our success, not by the things we do, but by how we move the gauges that affect user completion:

decrease % of users who don’t fill in the form when we send it the first time;

decrease % of users for whom we need to reopen the form;

decrease % of users who don’t fill in the form after having it reopened.

This is what we want to see in our Key Results — which leads to the next point.
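As a sketch of how those three gauges might be computed — assuming a hypothetical per-applicant record whose field names are my own invention, not Onfido’s actual data model:

```python
from dataclasses import dataclass


@dataclass
class Applicant:
    # Hypothetical per-applicant record; all field names are assumptions.
    completed_first_send: bool    # filled in the form on the first send
    reopened: bool                # form had to be reopened to fix an issue
    completed_after_reopen: bool  # filled in the form again after a reopen


def funnel_kpis(applicants):
    """Return the three drop-off gauges as fractions in [0, 1]."""
    total = len(applicants)
    # % of users who don't fill in the form when sent the first time
    first_time_dropoff = sum(not a.completed_first_send for a in applicants) / total
    # % of users for whom we need to reopen the form
    reopen_rate = sum(a.reopened for a in applicants) / total
    # % of users who don't fill in the form after having it reopened
    reopened = [a for a in applicants if a.reopened]
    reopen_dropoff = (
        sum(not a.completed_after_reopen for a in reopened) / len(reopened)
        if reopened else 0.0
    )
    return first_time_dropoff, reopen_rate, reopen_dropoff
```

Note that the third gauge is a fraction of the reopened population, not of all applicants — each gauge isolates one step of the flow so each Key Result can move independently.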

#2) Key Results Are Not Key Deliverables

Of course Technology (Product and Engineering) can deliver product improvements that purportedly decrease drop-off—but the Key Result is the decreased drop-off, not the features.

Of course Service can improve processes in a way that reduces the incidence of reopens and/or the likelihood of reopens causing drop-off (e.g. by making reopen message sent to the user clear and informative)—but the Key Results are the needles moved, not the process improvements.

Key Results should reflect the ends, not the means.

Even when the Objective is stable and well defined, it’s easy to define Key Results based on outputs (instead of outcomes), turning them into a sort of waterfall-ish plan for the quarter. Although doing the opposite (focusing on outcomes) is literally “textbook OKRs”, the pull to outputs is too strong—especially under certain organisational contexts (more on this in Part 2).

Real examples of such terrible Key Results are:

Deliver new feature A (with an OKR score of 0.5 for a half-done feature…)

Deliver X new features

Introduce Y process improvements related to F

Why so terrible?

They don’t promote cross-functional collaboration. They don’t promote learning. They deal terribly with change.

Cross-Functional Collaboration (and Learning)

When you’re improving a product based on the outcomes you want to achieve, you can’t be dogmatic about how you’re going to do it. You come up with assumptions, and you (dis)prove them as quickly as possible to either change your product accordingly or move on.

Output-based OKRs are assumptions in disguise.

Let’s imagine you learn early in the quarter that your assumption was wrong, and the impact you thought you’d have with a Technology change (a new feature) is actually achieved with a Service change (a process improvement) — or vice versa. Now you either:

a) change the Key Results mid-quarter (cheater!) or

b) are left with a Key Result that is either doomed to stay at 0.0 (because you learned you were wrong early enough not to go there) or gets an undeserved 1.0 (because you went all the way only to find you were wrong). You end the quarter with an OKR score that doesn’t reflect how much you and your teams have learned and how much impact you’ve had on what really matters — the outcomes, not the outputs.

More on Learning

Learning also relates to how stable your Key Results can be quarter after quarter. Assuming a stable Objective, Key Results can be more or less stable depending on the phase the product is going through and how they scored at the end of each quarter. You may keep a Key Result and only change its target if it’s based on a core KPI of your product that you want to keep pushing forward. Here is a real example of our own.

(Not real numbers.)

From Q1 to Q2, we kept the Key Results but:

we made the reopening Key Result less ambitious—because in Q1 we learned we were aiming too high;

we made the drop-off Key Result more ambitious — because we had made a significant positive impact, but knew that we could and should do more.

Dealing with Change

Plot twist: during Q3, Onfido officially decided to discontinue a big part of the Hire product line to focus on our vision for identity verification. We naturally reduced the Technology team dedicated to this product line—but the product was still there, and so was its mission to provide a delightful product to our customers!

If our OKRs had been built around outputs, we would once more have had to either change them or stick for two thirds of the quarter with Key Results whose final score would be deceitful and wouldn’t provide any learning.

Since we had a shared OKR that expressed the common goal we were all working towards, we didn’t modify it one bit in the face of change (and what a change!). That was still the impact we wanted to have that quarter — so we only course-corrected our plan to move that gauge. With a reduced Technology team, we focused on the few most impactful changes to the product, while keeping the ongoing improvements on the Service side. At the end of the quarter, we scored 0.6 on both Key Results.

#3) Measure Once, Cut Twice

No, that wasn’t a Freudian slip: it really is measure once, cut twice. When two groups of people (two teams, two functions) are aiming for the same Key Result, they must look at the same number.

It should be based on the same variable, summarised in the same way (average, median, ...) and over the same period (last week, last month, ...). Much can be said about how to do this correctly, but that’s beyond our scope here. What I really want you to take away is the importance of having one single source of truth for each Key Result’s score (which different people will influence in different ways).

You can achieve this by assigning ownership of each Key Result to whoever is closest to the underlying KPI. In the above example, Technology was closer to the “first-time drop-off” Key Results, whereas Service was closer to the “reopening” Key Results. The owner is not more responsible than others for how we score on that Key Result — but they should be the ones leading on:

1) ensuring the underlying KPI is measurable;

2) measuring it to establish a baseline;

3) defining targets, so as to map KPI values to KR scores;

4) checking in on the KR score.

You should do the first two well in advance of when you intend to close your OKRs for the quarter. You’ll often find that the KPI that matters isn’t immediately measurable, and it’s better to have time to make it measurable than to settle on a less impactful KPI just because it’s measurable.
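To make the mapping from KPI values to KR scores concrete, here’s a minimal sketch. It assumes a simple linear scale — the baseline scores 0.0, the target scores 1.0, clamped in between — which is only one of several reasonable grading schemes, not the one we necessarily used:

```python
def kr_score(kpi_value, baseline, target):
    """Linearly map a KPI value to a KR score in [0, 1].

    baseline -> 0.0 and target -> 1.0; the formula works whether the
    KPI should go up or down (target above or below baseline).
    """
    if baseline == target:
        raise ValueError("baseline and target must differ")
    score = (kpi_value - baseline) / (target - baseline)
    # Clamp: no extra credit past the target, no negative score
    # for regressing below the baseline.
    return max(0.0, min(1.0, score))
```

For example, a drop-off KPI starting at a 30% baseline with a 10% target that lands at 22% by quarter-end scores 0.4 — the kind of honest partial score that output-based Key Results can’t give you.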