Warning: You’re Losing Money by Not Using the Right Agile Metrics

The Agile metrics you use to measure Agile teams matter. A lot.

What makes a great Agile team? While there are ideas like self-organization, continuous improvement, and ever-increasing technical capability, there’s really no universal way to tell whether an Agile team is ‘great’. For example, if the business wants to release capabilities A, B, and C by June 30 and the Agile team misses that date because the stories blew out, is this a great Agile team? Or how about the team that meets every business delivery date, but whose product doesn’t attract the number of paying customers hoped for? Is that a great Agile team?

In truth, there are as many ways to measure an Agile team’s success as there are Agile teams. So if it’s impossible to have a standard measure, what’s a business to do?

The best answer is to tie the success of the Agile team to the success of the business. If the business’s strategy is to grow the number of customers, then measure the number of new customers the Agile teams’ latest product features attract. Even if there are 20 teams working on one product, each team can shape its new features to attract more customers. Infrastructure teams make the product faster, more robust, and available on more platforms in the back end. Although customers can’t directly touch these changes, they can sense or ‘feel’ the improvements. The Unique Value Proposition might be, “our product is 3x faster than our nearest competitor!”

A very common occurrence is that the scrum master adopts initial measures other scrum teams use, for example, from this site. Although this sounds OK, i.e. “we’ll use velocity”, the measure may not fit the business. I was working in a company where teams often received new ‘business critical’ work from outside the current sprint or project: work to help secure new customers or address customer requests immediately. The team used velocity to measure performance even though, from a project point of view, these requests kept negating its sprint plans. That should have rendered velocity moot. What would have better suited the team and the business was a measure of how quickly the Agile team could respond to these ‘business critical’ items.
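As a rough illustration, here’s a minimal Python sketch of such a responsiveness measure, assuming the team logs when each ‘business critical’ item arrives and when it ships; the field names and timestamps below are made up for the example.

```python
from datetime import datetime
from statistics import mean

# Hypothetical log of 'business critical' items: when each request arrived
# and when the team delivered it (illustrative data only).
business_critical_items = [
    {"received": datetime(2024, 3, 1, 9, 0),  "delivered": datetime(2024, 3, 3, 17, 0)},
    {"received": datetime(2024, 3, 5, 11, 0), "delivered": datetime(2024, 3, 6, 15, 30)},
    {"received": datetime(2024, 3, 10, 8, 0), "delivered": datetime(2024, 3, 14, 12, 0)},
]

def response_time_days(item):
    """Elapsed days from the request arriving to the team delivering it."""
    return (item["delivered"] - item["received"]).total_seconds() / 86400

avg_days = mean(response_time_days(item) for item in business_critical_items)
print(f"Average response time to business-critical items: {avg_days:.1f} days")
```

A trend in this number, sprint over sprint, speaks directly to the responsiveness the business actually needed from that team.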

In most scenarios the business needs to grow to be successful. Andrew Chen writes about scaling user growth as an example. This doesn’t automatically mean the development team needs to double its velocity in the coming year; building the wrong product twice as fast isn’t the answer. What companies need to do for growth is experiment unceasingly, and that means the product development team needs to ramp up to build, deploy, test, and redeploy quickly, i.e. Growth Hacking.

How well is the Agile team doing? The best measures are related to the business strategy and goals. There are several categories of measures that teams or businesses could use, including business, innovation, Agile, customer, and environment. The best way to select metrics is to understand what’s best for your customer and your business.

Your key Agile team performance measures will come from the sample list of Business, Customer, and Innovation measures below. The best measures internal to the Agile team will come from Agile and Environment. If the business strategy or goals change, it’s very likely that your measures for Agile team performance will need to change too.

1. Business – return on investment, growth, and customer base & market

  • Net Promoter Score – measures a customer’s likelihood to refer your product (see the sketch after this list).
  • Referrals – this is a raw count of customers who actually referred your product to someone else.
  • Revenue – the amount of revenue generated by your product after a new idea or update is released.
  • Acquisition – number of new customers to your product after a new idea or update is released.
  • Solving Your Problem Index – Customer’s rating on how the MVP is solving their problem
  • Customer lifetime value – Is this changing due to the introduction of a new feature or product line?
  • Rate of Sales Pipeline Growth – Has the new feature or product line accelerated sales growth?
  • Adoption/Use of New Feature or Update – Are the customers and users using the new feature or update?
  • Customer/User Feedback from Each MVP – How well received was the MVP?
  • Market Size and Market Share – Is the new feature or update making the projected impacts?
  • Retention – Has the change brought back ‘lost’ customers?
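
As a concrete example of one business measure, the sketch below computes Net Promoter Score from 0–10 survey responses: the percentage of promoters (scores 9–10) minus the percentage of detractors (scores 0–6). The response data is invented for illustration.

```python
def net_promoter_score(scores):
    """NPS = % promoters (scores 9-10) minus % detractors (scores 0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Example survey responses gathered after a release (illustrative data only).
responses = [10, 9, 8, 7, 9, 6, 10, 4, 9, 8]
print(f"NPS: {net_promoter_score(responses):+.0f}")  # -> NPS: +30
```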

2. Customer – satisfaction with the product and timeliness of deliveries

  • Count of positive customer testimonials – For each update or new idea, are your customers saying good things?
  • Count of negative customer testimonials – For each update or new idea, are your customers saying unhappy things? You can leverage these to get ideas for upcoming releases.
  • How happy is the customer – Have your customers stopped looking for a solution to their problem to wait for your solution?
  • How unhappy would the customer be – If the release were delayed one month, how would your customers react, and would they look elsewhere?
  • Percentage of customer base interviewed or involved in usability testing – This is the total count of customers and potential customers involved in your ‘customer discovery’ efforts. The percentage involved should be statistically significant so you can make accurate predictions.
  • Number of customers who have the problem you plan to solve – The count of customers you know have a specific problem you’re trying to solve. This can come from product feedback, interviews with customers/users, or other survey techniques.
  • Number of customers who want your solution – Count of customers/users who want your solution to their problem. This can come from product feedback, interviews with customers/users, or other survey techniques.

3. Innovation – technology, adaptability, and markets

  • Count of core initiatives – Target up to 70%. Efforts to make incremental changes to existing products and incremental inroads into new markets. Low to very low risk. Enough return to keep the business going for a time but not enough to sustain the business indefinitely.
  • Count of Adjacent initiatives – Target up to 20%. Leveraging something the company does well into a new space. Shares some characteristics with both Core and Transformational initiatives. Medium risk.
  • Count of Transformational initiatives – Target up to 15%. Create new offerings (or new businesses) to serve new markets and customer needs. High risk, but big returns if successful. The sketch after this list shows one way to track this mix.
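
One lightweight way to watch this mix, sketched below under the assumption that you simply count active initiatives by category, is to compare each category’s share of the portfolio against the targets above.

```python
# Hypothetical counts of active initiatives in each category.
initiatives = {"Core": 14, "Adjacent": 4, "Transformational": 2}

# Target ceilings taken from the list above ("up to" each percentage).
targets = {"Core": 0.70, "Adjacent": 0.20, "Transformational": 0.15}

total = sum(initiatives.values())
for category, count in initiatives.items():
    share = count / total
    flag = "" if share <= targets[category] else "  <-- over target"
    print(f"{category:16s} {share:5.1%} of portfolio (target up to {targets[category]:.0%}){flag}")
```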

4. Agile – practices, methods, and processes

  • Number of Tasks added/removed during iteration – How well the team knows the work and what ‘done’ means for the story – INVEST criteria being followed and stories are small
  • Number of stories forecast vs. actually completed during iteration – How well the team understands the work of the iteration – INVEST criteria being followed and stories are small
  • Average size of stories in iterations – The ability of the team to make the stories as small as possible and still provide some value to customers/users
  • Average score of “Value” per user story – How well the team understands the customer problem and understands how the customer wants it solved (or what success looks like from the customer/user POV)
  • Percent of user stories in iteration covered by automated System/Solution tests – The team’s journey to automate end-to-end tests at the System/Solution level.
  • How independent is each user story in the iteration – How well the team understands how to remove complexity
  • Test First vs. Test Last testing done in iteration – Measure the team’s journey to be outcome driven at the System/Solution level.
  • Number of hours in iteration waiting for something – Measure of cycle time for same sized stories or epics.
  • Partially done work – By counting stories that are in progress at the end of an iteration, we highlight the potential waste of incomplete work. This count is minimized by having small stories and having the development team manage their iteration backlog. A sketch of this measure, alongside forecast vs. completed stories, follows this list.
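
Here’s a minimal sketch of two of these iteration measures, forecast vs. actually completed stories and partially done work, assuming your tracker can export each story’s end-of-iteration status (the field names and data below are hypothetical).

```python
# Hypothetical end-of-iteration export: one record per story in the sprint backlog.
iteration_stories = [
    {"id": "S-101", "points": 3, "status": "done",        "forecast": True},
    {"id": "S-102", "points": 5, "status": "done",        "forecast": True},
    {"id": "S-103", "points": 2, "status": "in_progress", "forecast": True},
    {"id": "S-104", "points": 1, "status": "done",        "forecast": False},  # added mid-iteration
]

forecast = [s for s in iteration_stories if s["forecast"]]
completed_forecast = [s for s in forecast if s["status"] == "done"]
added_mid_iteration = [s for s in iteration_stories if not s["forecast"]]
partially_done = [s for s in iteration_stories if s["status"] == "in_progress"]

print(f"Forecast stories completed: {len(completed_forecast)} of {len(forecast)}")
print(f"Stories added during the iteration: {len(added_mid_iteration)}")
print(f"Partially done work at iteration end: {len(partially_done)} stories")
```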

5. Environment – team working conditions and happiness

  • How happy are team members – This measures the general fun factor of work for the team. A happy team will work smarter and deliver a better product.
  • Number of hours in iteration spent helping team members grow and be better – Measure of an aspect of a self-organizing team: caring and improving the team.
  • Are you adequately served by management – Measure of how the team feels management is there to support them.
  • Frequency of team making product decisions – If the team is making most product decisions, they’re more likely to be enthusiastic about the product.

Author: Robert Boyd

I’m a CSP (Certified Scrum Professional), CSM (Certified ScrumMaster), and CSPO (Certified Scrum Product Owner). For 30 years I’ve been streamlining processes and systems. I’ve introduced agile methodologies to software and product management departments, resulting in a 300 percent increase in feature deliveries.
