Metrics should reduce complexity, not increase it, so you don’t want to measure everything. But how do you decide what to measure? In a previous post we talked about the importance of asking the right questions to judge whether a metric is good or not; now it’s time to show you how to define a product’s performance metrics and how to act on them.
Prioritizing your product performance metrics
You’ve probably noticed that in our previous posts, we’ve been flirting with the idea of your user going through a journey, from the moment she becomes aware of your product all the way to when she uses it for the first time and eventually comes back. To define your key metrics, let’s stick to this logic and think about the different phases of this journey, starting with your ultimate goal: making money.
We live in a world that, for better or worse, values turnover, and the ultimate definition of value in our system is money, so it’s important to measure your product’s capacity to generate revenue. A good example? Revenue per user.
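As a minimal sketch, revenue per user is just total revenue for a period divided by the number of active users in that period (the function name and figures below are illustrative, not from the original post):

```python
# Hypothetical sketch: revenue per user for a given period, assuming we
# already know total revenue and the number of active users.
def revenue_per_user(total_revenue: float, active_users: int) -> float:
    """Average revenue per user (often called ARPU)."""
    if active_users == 0:
        return 0.0  # avoid dividing by zero when there are no users yet
    return total_revenue / active_users

# Example: $12,000 in monthly revenue across 800 active users.
print(revenue_per_user(12_000, 800))  # 15.0
```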
No matter what, your first goal is conversion, or driving the “first use” or “first purchase”. Here you want to measure the conversion rate: the share of people visiting your site or store who actually achieve your product’s goal. If the number is too low, you must investigate why your users are dropping off without using or purchasing your product.
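The conversion rate described above can be sketched as a simple ratio (the numbers here are invented for illustration):

```python
# Hypothetical sketch: conversion rate as the share of visitors who
# complete your product's goal (a first purchase, a signup, etc.).
def conversion_rate(conversions: int, visitors: int) -> float:
    """Percentage of visitors who converted."""
    if visitors == 0:
        return 0.0  # no visitors yet, so no meaningful rate
    return 100 * conversions / visitors

# 2,000 visitors and 50 first purchases -> a 2.5% conversion rate.
print(conversion_rate(50, 2_000))  # 2.5
```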
4) Referral and Acquisition
Acquisition, and eventually referral by existing users, is the first contact a visitor has with your product, be it on a landing page, a website, or an app store. Here you want to measure the number of visitors over a period of time, preferably as a rate, since it shows your capacity to get prospective users’ attention, or eyeballs. But you don’t want to fill a leaky bucket: if you aren’t converting visitors into users, you might slow down your acquisition efforts and investigate why your visitors are dropping off. In the end, acquisition is only useful when you can capture value and make money, and a leaky bucket can quickly empty your pockets.
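One way to see why a leaky bucket empties your pockets: the effective cost of one converted user is what you pay per acquired visitor divided by your conversion rate. The figures below are invented to illustrate the point:

```python
# Hypothetical sketch of the "leaky bucket": the effective cost of one
# converted user is the cost per visitor divided by the conversion rate.
def cost_per_converted_user(cost_per_visitor: float,
                            conversion_rate: float) -> float:
    """conversion_rate is a fraction, e.g. 0.02 for 2%."""
    return cost_per_visitor / conversion_rate

# At $0.50 per visitor and a 2% conversion rate, each user costs $25;
# if conversion drops to 1%, the very same user costs $50.
print(cost_per_converted_user(0.50, 0.02))  # 25.0
print(cost_per_converted_user(0.50, 0.01))  # 50.0
```

The leakier the funnel, the more each retained user costs, which is why slowing acquisition to fix conversion can be the cheaper move.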
Acting on your metrics: using metrics to improve your product
Once you’ve defined your metrics, it’s time to use them to improve your product. We’re going to cover this part in more depth in the next articles, but to get you acquainted with the methodology, remember that metrics are most effective when they’re comparable.
Making your metrics comparable
So the first step is to define the ways you can make your metrics comparable:
- Past performance: you can compare your metrics’ current performance to past performance. For example, you could measure your conversion rate from one month to another.
- Goals: you could set a future goal based on your business objectives and compare your metric to it. There are many ways to set a goal. You could drill down a high-level goal such as monthly revenue to the point where you know which conversion rate you’d have to achieve for a desired monthly revenue. You could use an external benchmark, for example from a competitor, to get an idea of what your performance should be. What is important to retain here is that goals have to be SMART (specific, measurable, agreed upon, realistic and time-based).
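The drill-down described above is simple arithmetic. As a sketch, assuming you know your average revenue per purchase and your expected monthly traffic (all numbers invented):

```python
# Hypothetical sketch: drilling a monthly revenue goal down to the
# conversion rate you would need to hit it.
def required_conversion_rate(revenue_goal: float,
                             revenue_per_purchase: float,
                             monthly_visitors: int) -> float:
    """Conversion rate (as a fraction) needed to reach the revenue goal."""
    purchases_needed = revenue_goal / revenue_per_purchase
    return purchases_needed / monthly_visitors

# Goal: $10,000/month at $20 per purchase with 25,000 monthly visitors
# -> 500 purchases needed -> a 2% required conversion rate.
print(required_conversion_rate(10_000, 20, 25_000))  # 0.02
```

The resulting rate is the target you compare your actual conversion rate against each month.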
The second step is to start using your metrics to run experiments on different parts of your product and observe whether or not they impact your metrics. A good way to run such experiments is by conducting A/B tests. An A/B test, or split test, is a controlled experiment using two variants, A and B, to understand which one performs better, that is, which one has a greater impact on the metric you want to improve.
For example, you’ve realized that you have lots of visitors, but they aren’t downloading your app. You know that the nature of your app makes it “sticky”: once a user downloads it, it’s very hard for them to stop using it. You thus formulate a hypothesis: offering a 30-day free trial will incentivize new users to sign up. To understand whether your hypothesis holds, you decide to run an A/B test comparing the version without a free trial (A) to a version with a free trial (B).
- Version A requires that a user fill out credit card details when signing up.
- Version B does not require the user to enter credit card information, but is the same as version A in every other way.
After running the A/B test for one month to collect enough data, you compare the signup rates of both pages and discover that version B converts visitors at the signup stage at three times the rate of version A. The data collected during the test lets you confirm a change to a specific part of your product that has a direct impact on the metric you want to improve.
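The comparison above can be sketched in code. The counts below are invented, and the two-proportion z-test is one common (but not the only) way to check that a difference like this isn’t just noise:

```python
import math

# Hypothetical sketch: compare the signup rates of two variants and
# compute a two-proportion z-statistic for the difference.
def ab_test_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Return (rate_a, rate_b, z) for variants A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return p_a, p_b, (p_b - p_a) / se

# Invented counts: A converts 100 of 5,000 visitors (2%),
# B converts 300 of 5,000 visitors (6%) -- 3x the rate of A.
rate_a, rate_b, z = ab_test_z(100, 5_000, 300, 5_000)
print(round(rate_b / rate_a, 6))  # 3.0
print(z > 1.96)                   # True: significant at the 95% level
```

Checking significance before shipping the winner guards against declaring a victor on a difference that a small sample produced by chance.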