
NPS isn’t the metric you want it to be


Or: Why I’m a detractor of NPS itself.

Net Promoter Score is the de-facto metric for tech companies to measure how much people love their product. If you’re reading this you probably know what NPS is, or can easily find out, so I won’t dwell on the mechanics. Just remember that it asks users:

“On a scale of zero to ten, how likely are you to recommend our business to a friend or colleague?”

The big question

It then does some basic but funky maths to return a number from -100 to +100. This number is your NPS score.
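
For reference, the funky maths is just: the percentage of promoters (those answering 9–10) minus the percentage of detractors (0–6), with 7s and 8s counted as ‘passives’ and ignored. A minimal sketch in Python:

```python
def nps(scores: list[int]) -> int:
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    # NB: the result is a score from -100 to +100, not a percentage.
    return round(100 * (promoters - detractors) / len(scores))

print(nps([10, 9, 8, 7, 6, 3]))  # 0: two promoters cancel out two detractors
```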

It’s easy to see why people love NPS as a metric:

  • It’s a broad measure of how users feel about your company, so you feel secure that it’ll tell you if anything is on fire 🔥
  • It asks how likely they are to recommend you, which means if you nail this metric you get free viral growth 📈
  • It lets you hear directly from customers what they think – and you know you need to be a customer-focused company, because every company should be 🌱
  • It’s easy to implement – you can just send an email out to customers, you can even do it manually 👶
  • It’s ubiquitous – everyone uses NPS, so you’re safe to talk to investors and the public about it, because it’s already been socially validated 🌍

Even as I write/emoji that out, it’s hard to argue against the utility of NPS, but I’m going to anyway. More specifically, I’m going to talk about why it’s probably not the all-encompassing product metric that we all want it to be.

To be clear – I know of companies that have used NPS very successfully. However, I’ve far more frequently seen it misused as a product metric, and I want to talk about that, and how to avoid it.

What you want from a product metric

For a metric to be useful in measuring product improvements, it needs to do at least some of the following things:

  • Give you a clear message 📝
  • Be diagnosable, guiding you towards what to fix 🛠️
  • Measure something that you care about ❤️
  • Be something you can move, and measure that movement 🌡️

So with that said, below is how I believe NPS fails to give you the above (followed by suggestions of how to get it).

The issues with NPS

It measures EVERYTHING: Your respondents are thinking about the whole experience – the price, the brand, the marketing channels, the personality of your CS and sales people, as well as the product. This isn’t necessarily a bad thing, but it makes it very hard for any product team to improve (or even understand) the metric in isolation.

You’re really measuring:

  • Your brand – does it represent what I stand for?
  • Your messaging – does it spark conversations?
  • Is your problem something people discuss openly, or is it taboo?
  • Is your problem currently relevant enough to come up naturally?
  • Is there competition – do I risk giving the wrong recommendation?
  • Is your solution novel, interesting or discussable?
  • Is your industry naturally social?
  • Does your product truly solve the problems I have?

The last point is the question you typically want the answer to, but the other factors blur what you learn from NPS.

It asks multiple things at once: “How likely are you to recommend us to a friend?” has several ambiguous aspects, each adding noise to the metric and obscuring the truth that you’re looking for.

For example, my answer depends on the likelihood of:

  • me having friends with the same needs (and ability to pay)
  • us having a conversation where this comes up
  • me actually referring the service on merit

The thing you typically care about is the last point, so why introduce the noise of the previous questions?

I’ve genuinely seen responses that say:

“0/10: I simply don’t have any friends to refer”.

Granted, they’re not too frequent, but why willingly introduce this ambiguity?

I’m just not comfortable pinning my success or failure as a product person on a metric that I don’t believe the data-points (humans) themselves all interpret the same way. I’ve seen enough bizarre verbatim responses to believe that it’s a confusing question.

It can be an excuse to do anything: Good product builders should always have a logical strategy for building what they’re building. Does it come down to getting new users, or improving retention? What is the insight you’re using to make that call?

With NPS it’s so broad (and hard to provably influence) that you can use it as a reason to propose literally any change. Sometimes it’s a weak argument, but often it’s hard to prove either way, which can lead to teams building things that don’t have impact.

It’s arbitrary: Someone decided that a score of 8/10 is ‘neutral’, which is a pretty big statement. For me, 8/10 is a ringing endorsement. If I see a film critic give ⭐⭐⭐⭐ I don’t think “wow, that’s a neutral review”.

Likewise, assigning 6/10 and 0/10 as the same value of ‘detractor’ is a sweeping generalisation. Simply put, I don’t believe these categorisations are helpful in understanding your users.

This highlights how subjective the scoring framework is compared to user behaviour metrics and actions, which more closely measure a truth.

It’s not a product or service question: At its core, it’s a marketing question. Yes, product and marketing are largely inseparable, but is it really a great idea to measure your product quality and execution by the virality of your product? Not always.

If NPS is your core metric, it means you’re focussing on the benefit to yourself, rather than your customers.

It’s highly susceptible to bias: Oh-so much bias. Firstly, you’re biased by which customers respond. You’ll naturally get higher response rates from some customers, due to intrinsic factors (some people are simply more likely to respond) and also from experiential factors (great or terrible experiences are most likely to get feedback).

Secondly, as your customer makeup changes, NPS will naturally change too, which makes it much harder to use as a clear product metric. In my experience, US customers were the most likely to give 10s, Indian customers would very rarely give a 9 or 10, and EU customers tended towards the middle – all after receiving a similar experience.
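
To illustrate with made-up numbers: even if every region’s sentiment stays perfectly flat, a shift in who responds will move your blended score.

```python
# Hypothetical per-region NPS, held constant across two quarters.
REGIONAL_NPS = {"US": 60, "EU": 30, "IN": 10}

def blended_nps(mix: dict[str, float]) -> float:
    """Respondent-weighted NPS across regions (mix shares sum to 1)."""
    return sum(REGIONAL_NPS[region] * share for region, share in mix.items())

print(blended_nps({"US": 0.5, "EU": 0.3, "IN": 0.2}))  # 41.0
print(blended_nps({"US": 0.2, "EU": 0.3, "IN": 0.5}))  # 26.0
# The product didn't change; only the respondent mix did.
```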

It’s easy to do incorrectly: Or at least, I’ve seen many companies do it incorrectly and not really care. To give a pet-peeve example, if there is a % sign after your NPS score, you’re doing it wrong. It may seem minor, but it removes the value of having an industry-standard metric, and risks different people calculating it differently.

And if you can’t do industry comparisons, why are you bothering with funky maths and an obtuse question?

Standardising costs uniqueness: Using NPS means that you adhere to a standard, which may mean a missed opportunity to tailor your metric to your business.

Maybe your business requires evangelists, in which case you want as many 10s as possible? Or maybe your business is high-risk, in which case you only care about not getting 0s? Or maybe you only care about feedback, in which case just ask for that?

It’s noisy because it’s subjective: You’re not measuring users’ actions, you’re measuring their sentiment. This makes it naturally noisier than a behavioural metric like acquisition or retention – this is why good product people don’t like to ask whether a customer would buy; they prefer to launch and find out for sure.

This noise means natural fluctuations, and it’s that much harder to get a meaningful read on any changes you make. It also makes it hard to measure progress over time.

It doesn’t make you any more customer-focused: At its core, NPS is simply a customer feedback survey. Using it as a core metric won’t make your team care any more about customers than using literally any other customer feedback survey.

For example, TransferWise have successfully used NPS as a key metric because they already cared so much about their customers. To give an example of how much, I asked an ex-colleague what they would miss most if they left TransferWise, and they genuinely answered “the customers”.

It doesn’t have that impact unless you are obsessive about giving your customers what they want.

It’s not really customer-focused: The question is aimed at finding out whether you can get free acquisition. You’re not asking people if they’d use you again, or how their experience was. You’re asking about what you really care about – getting more users.

It doesn’t tell you how to improve things: This is one of the most important points for me. The NPS question alone doesn’t actually give you guidance on where you need to improve. People often respond that they ask a follow-up question of “why did you give that answer?” – but why not ask that question first? Why put a barrier between yourself and learning how to help your customers?

Even with the follow-up question, diagnosis can be difficult. How do you differentiate between customers that mentioned price as a positive vs a negative? You could keyword-search and split by detractors/promoters, but I’ve seen many 6s give compliments, so it paints a fuzzy picture.

I believe that you are actively learning less about your customers by asking the NPS question.

It especially doesn’t make sense B2B: Asking how likely someone is to refer a friend or colleague for a work tool gives you a picture that is mixed at best.

I use Photoshop every day in my job and I love it, but none of my friends need it for their jobs. Specifically for colleagues, the NPS score is biased by the commonality of the task that I use your product for. This will give you a confusing message when using NPS to determine product health.

Also, with B2B companies your users and decision-makers are often different people. For example, at OpenTable you might see management loving the service because it brings in more customers. Conversely, front-of-house staff might not like it, because it increases their workload for little personal reward. When you ask the NPS question it’s vital to know who is answering, and what their motivation is.

You don’t really gain that much from comparisons: It’s easy to tout Apple’s high NPS as an indicator of success, but how much does cross-industry comparison do for your company?

B2C and B2B will have drastically different NPS standards, as will different industries – I’m more likely to recommend a travel app than I am to recommend a sex toy. You can get some value from industry comparisons, but it will be limited.

I don’t believe it (alone) drives referrals: It’s easy to think that a high NPS means your product is going to go viral. However, even with a fantastic NPS score, you still need to make your product easy to share, and incentivise people to recommend it to their friends.

Referrals aren’t relevant for all companies: On the B2B point again, should an enterprise sales tool really care if people refer their friends or colleagues? They’re unlikely to refer decision-makers, so do you really care? If you care about the experience and improving it for them, ask a question around that, rather than the NPS one. In my experience, B2B decision-makers are far too low in number to be able to give you a good NPS signal.

Don’t get me wrong, all companies should aim to please users and get viral growth. But not all companies should be using this as a benchmark of whether or not they’re doing their job well. Some companies are sales-led, and some companies don’t need that viral component. I’m just not sold that viral growth is the right metric for every tech company.

Easy to defraud: I know of people that have harmed their company in order to hit their own targets (incentivising through bonuses is especially bad for this). If my target is to improve NPS, I can do that by changing the survey’s subject line, or by sending different subject lines to different users, encouraging responses from those that I know are likely to give better scores.

Normally you’d prevent this by having what I think of as anchor metrics – things that you want to hold steady while you improve a target metric (for example, increasing sales without harming retention). But it’s hard to have an anchor metric for NPS, as it’s so broad. This means that you rely heavily on trust.
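
As a sketch of the anchor-metric idea (the metric names and thresholds here are my own illustrative assumptions):

```python
# Celebrate a win on the target metric only if the anchor metrics held steady.
def passed_anchors(deltas: dict[str, float], anchors: dict[str, float]) -> bool:
    """deltas: observed % change per metric; anchors: max tolerated % drop."""
    return all(deltas[metric] >= -max_drop for metric, max_drop in anchors.items())

deltas = {"sales": +4.0, "retention": -0.5}
print(passed_anchors(deltas, {"retention": 1.0}))  # True: retention held steady
```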


The bottom line is: NPS gives you at best a mixed signal about how your product is faring, and where you should improve.


Lots of complaining there – what can we do instead?

What’s better? Binary

If you’re asking customers whether or not they would recommend your product, I’d love to see a binary response and an unambiguous question:

If a friend asked to get across the city at night, would you recommend Uber?

A binary ✅ or ❌ here can give you a much more direct answer than a 0–10 scale. It would also make the data far more human-readable, rather than you worrying about seeing a negative number and preparing to throw your codebase off a bridge.

Yes, an unambiguous question like the above might risk blind-spots (e.g. what about daytime, or short distances?). But I’d much rather have clear, known blind-spots that can be addressed, rather than a broad fuzziness where I can’t see anything.
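
The resulting measurement is about as simple as it gets – a minimal sketch (the question wording above and the function name are my own):

```python
def recommend_rate(responses: list[bool]) -> float:
    """Share of users answering 'yes' to a specific, binary recommend question."""
    return 100 * sum(responses) / len(responses)

# "If a friend asked to get across the city at night, would you recommend us?"
print(recommend_rate([True, True, False, True]))  # 75.0 - no funky maths, no negatives
```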

What’s better? Ask what they’d improve

When you do customer interviews, it’s rare to ask how likely they are to recommend to a friend. Far more common questions are:

“How do you feel about the product?”

“Would you improve anything about the product?”

“What did you expect to happen when you clicked that thing?”

These can help get straight to the point — are you adding value through your product? Is it perfect? What is missing?

These types of questions help you focus on customer value and benefit, rather than simply on whether or not the customer will create growth for you. They’ll give you a much better signal of what works.

What’s better? Repeat usage

I believe that repeat usage is the only true demonstration of product value. Your product solves a problem for customers. Unless it’s a one-off problem (which is rarely the case), your most demonstrable proof of value is that customers value your solution enough that it becomes their new way of doing things.

That’s the result that your product needs to achieve — the point at which the cost of switching (time spent learning/researching and monetary cost) is lower than the benefit you have provided.

Repeat usage is still influenced by external factors, but at least they focus on the user. And if you can’t wait to find out if they’ll use you again, you can always ask them.
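
A minimal sketch of repeat usage as a metric (the 30-day window and the shape of the event log are assumptions you’d tailor to your product):

```python
from datetime import date, timedelta

# Hypothetical event log: user -> dates on which they used the product.
usage: dict[str, list[date]] = {
    "alice": [date(2019, 4, 1), date(2019, 4, 20)],
    "bob": [date(2019, 4, 2)],
}

def repeat_usage_rate(usage: dict[str, list[date]], window: timedelta) -> float:
    """Share of users who came back within `window` of their first use."""
    repeats = sum(
        1 for dates in usage.values()
        if len(dates) > 1 and max(dates) - min(dates) <= window
    )
    return 100 * repeats / len(usage)

print(repeat_usage_rate(usage, timedelta(days=30)))  # 50.0
```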

What’s better? Ask how they’d feel if it were gone

SuperHuman were wrestling with how to measure product-market-fit, and came up with their own metric where they ask users how disappointed they would be if they could no longer use SuperHuman. While still having the bias of being a feedback survey, it does get closer to the core of the issue for users, and focusses on their needs.
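
If memory serves, their survey buckets answers into ‘very disappointed’, ‘somewhat disappointed’ and ‘not disappointed’, and tracks the share of ‘very disappointed’ responses (they borrowed a 40% benchmark from Sean Ellis). A sketch of the tally:

```python
from collections import Counter

# Hypothetical responses to "How would you feel if you could no longer use us?"
responses = ["very disappointed", "somewhat disappointed", "very disappointed",
             "not disappointed", "very disappointed"]

share = 100 * Counter(responses)["very disappointed"] / len(responses)
print(share)  # 60.0 - comfortably above the reported 40% benchmark
```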


It’s entirely possible that NPS is the right metric for your company – but don’t use it just because it’s currently the industry standard. Make sure to find a way to understand what your customers want, and then measure whether your improvements make them happier. That’s the most important thing.
