Support prioritization, part two

This blog post is based on a workshop that Brian presented at Support Driven Expo in Portland, Oregon, titled "Turning Feedback Into Action." Part one of this series can be found here.

Last week, we talked about how to set up a prioritization matrix, define severity and impact, and determine an issue's support prioritization level. It's an effective way to communicate the importance of problems or feature requests that customers write in about. However, communication is a two-way street, and the internal teams on the receiving end need to know how to interpret this information and act on it uniformly. Otherwise, everyone will put in extra work determining the severity and impact of every problem and assigning their own support prioritization levels, and all that collective work won't change the results.

Once upon a time, I was on a support team that worked closely with a product and engineering team. We had the same problems I described at the beginning of the previous post: unclear communication, missing follow-ups between teams, and seemingly ignored bug reports. My solution at the time was to create and implement a support priority matrix. A support teammate and I spent three weeks defining impact and severity for the product we were supporting. We assigned support priority levels to the resulting matrix and went back and forth tweaking the numbers until we felt it accurately represented the ranking of incoming issues. Then I took it to the Head of Engineering, explained what we had built, and requested that engineering use it to decide how to prioritize work.

The conversation did not go as planned. He said no.

Photo by Kai Pilger on Unsplash

The Head of Engineering did not agree with our definitions and didn't think the resulting matrix represented the priorities of the engineering team or of the company. Rather than take it to the teams and give it a try, he said he wasn't interested, and that was the end of it.

This went poorly for a couple of reasons, the biggest being buy-in. I presented the matrix as a tool to implement but wasn't clear about why we wanted to implement it or how it would benefit other teams. It looked like support was telling other teams how to do their work, and nobody wants to be bossed around by another team. If you plan to implement something like this, you'll need to do better than I did that first time and work with other teams as early as possible.

For the best chance of success, involve other teams in defining your support priority matrix rather than just getting them to buy into it. Include them in the process early. Before you have everything defined or know what the end result might look like, talk to people on the engineering and product teams, the marketing and sales teams, and whatever other teams may be affected by this process in any way. The biggest benefit of this tool, perhaps surprisingly, is not that bugs will be fixed faster and higher-impact features will be shipped more frequently. It is that, maybe for the first time, people across the organization will have a discussion about what is most important to the customers and to the business. Again, everyone at the company has an idea of what is most important, but not everyone has the same idea unless you talk about it. Work with people across the organization to find a shared understanding of what "severity" and "impact" mean, and do it before assigning priority levels to the matrix. While the support team may be the first to call for this project and process, no single team should dictate the results.

Years after the process went south with that Head of Engineering, I had a chance to try it again at another company. I thought hard about what had gone poorly and what had gone well previously, and I followed my own advice above. This time, support engineers worked with application engineers and product managers to define impact and severity across the organization. Once they had a set of definitions and categories that everyone agreed with, the support team used a preliminary matrix to review a few months of customer-reported bugs, categorize them, and assign support priority levels. Then the engineering and product teams sorted through the data, worked with the support team to tweak some of the priority levels, and stepped back to see how many bugs of each priority level came in weekly.
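
As a rough sketch of that review step: once a backlog of historical bugs has been categorized, tallying how many came in at each priority level per week takes only a few lines. Everything here (the record layout, the level names, the sample data) is hypothetical:

```python
from collections import Counter

# Hypothetical sample of categorized historical bugs: (ISO week, SP level).
categorized_bugs = [
    ("2023-W14", "SP2"), ("2023-W14", "SP1"), ("2023-W14", "SP2"),
    ("2023-W15", "SP0"), ("2023-W15", "SP3"), ("2023-W15", "SP2"),
]

# Count how many bugs of each priority level came in each week.
weekly_counts = Counter(categorized_bugs)
for (week, level), count in sorted(weekly_counts.items()):
    print(f"{week} {level}: {count}")
```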

The engineering team not only used these levels to prioritize bug-related work, but also attached internal service-level agreements (SLAs) to the highest levels. The support team could likely never dictate this successfully, but it is one way other teams can use the support priority matrix effectively. For example, an engineering team might agree to resolve all SP0 bugs within two hours and all SP1 bugs within a business day. You don't need to expose these SLAs to the customer, but by communicating importance and expectations between teams, they help hold teams accountable to each other and, ultimately, to the customer. This time, when we implemented the support priority matrix, it was developed by a cross-functional team. Support used it to help engineering and product teams prioritize work, and engineering and product teams used it to set deadline expectations with the support team. It was effective enough that the following quarter, some engineering teams used the priority matrix and issue-resolution SLAs as part of their quarterly goals.
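
To make that concrete, here's a minimal sketch of how a team might encode internal SLAs per priority level. The level names, durations, and helper function are illustrative assumptions, not a prescription:

```python
from datetime import timedelta

# Illustrative internal SLA targets per support priority level.
# These levels and durations are hypothetical examples, not a standard.
INTERNAL_SLAS = {
    "SP0": timedelta(hours=2),   # resolve within two hours
    "SP1": timedelta(hours=8),   # resolve within one business day
    "SP2": timedelta(days=5),    # resolve within one business week
    "SP3": None,                 # no internal SLA; scheduled as capacity allows
}

def sla_breached(priority: str, open_duration: timedelta) -> bool:
    """Return True if a bug at this priority has exceeded its internal SLA."""
    target = INTERNAL_SLAS.get(priority)
    return target is not None and open_duration > target
```

Keeping these targets in one shared place, rather than in each team's head, is part of what makes the accountability between teams stick.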

Photo by Fox Jia on Unsplash

Getting other teams bought into the process and involved in the execution is a huge step towards success. When things start to work within your new process, you'll see resolution times and team frustration go down. However, the work won't be finished. After using the priority matrix for a few months, you should iterate. Re-evaluate everything, including your definitions of impact and severity, the categories you've broken them into, and the priority level assignments within each part of the grid. Depending on your product or service, you may even want to expand the matrix. A four-by-four grid is a good starting point: granular enough to be meaningful, but small enough to be understandable. When you revisit, consider whether expanding to a five-by-five grid is necessary to cover the cases you and your teams see.
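
If it helps to picture the mechanics, the matrix is just a lookup table from a (severity, impact) pair to a support priority level, so expanding from four-by-four to five-by-five means adding one row and one column. The category names and level assignments below are placeholders; yours should come out of the cross-team definitions, not this sketch:

```python
# A hypothetical four-by-four support priority matrix.
# Rows are severity categories, columns are impact categories.
# The assignments here are placeholders, not a recommendation.
SEVERITIES = ["low", "medium", "high", "critical"]
IMPACTS = ["individual", "team", "organization", "all users"]

MATRIX = [
    # individual  team   org    all users
    ["SP3",      "SP3", "SP2", "SP2"],  # low severity
    ["SP3",      "SP2", "SP2", "SP1"],  # medium severity
    ["SP2",      "SP2", "SP1", "SP1"],  # high severity
    ["SP2",      "SP1", "SP0", "SP0"],  # critical severity
]

def support_priority(severity: str, impact: str) -> str:
    """Look up the support priority level for a categorized issue."""
    return MATRIX[SEVERITIES.index(severity)][IMPACTS.index(impact)]

# support_priority("high", "organization") -> "SP1"
```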

Another way to iterate on the matrix is to add more axes. I often hear that customer pain and business impact aren't captured by the original impact and severity axes. Adding them can be useful for enterprise products where the customer is not the end user. One way to add those particular axes is to create a separate matrix for Customer Priority, whose axes are Customer Severity and Customer Value.

Customer Severity is similar to the severity level in the first matrix, but it is meant to capture how the customer's product is affected by changes or bugs in your product, which can be difficult to fold into a stricter definition of severity. For example, a customer's product may depend heavily on a feature of your product that you don't consider critical. In that case, a bug may be rated "high" in severity but "critical" in customer severity. Using this method, a bug can have different customer severity levels depending on who reports it or who it affects.

Customer Value is perhaps the most difficult to define because it tries to capture how important a specific customer is to your business. To be clear, it should not imply or refer to a subjective judgment of any customer; instead, define the impact to your business in terms of dollars, market growth, or another metric. While a customer's value may be a function of their size, this is separate from the impact axis of the first matrix. As an example, in our previous post we talked about one client with 100 users and another with 1,000 users. If the smaller client reports a bug that affects only their account, the impact of the bug is 100 people; if the larger client reports a bug that affects only their account, the impact is 1,000 people. However, if the smaller client has the potential to onboard a million more users while the larger client will likely not grow much, then the smaller client has the larger customer value.

Customer Severity and Customer Value are difficult to define, so I recommend starting without them and only adding a second customer matrix if you feel the original matrix isn't capturing enough data for your teams. The two matrices should not be added together to create a four-axis matrix. It's usually easier to have separate Support Priority (SP) and Customer Priority (CP) levels than to manage and communicate the fluctuations of a four-axis prioritization matrix.
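
One way to keep the two sets of levels separate in practice is to carry both on each report side by side instead of merging them into one number. A minimal sketch, with hypothetical field names:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BugReport:
    """A customer-reported bug carrying both priority levels separately.

    support_priority comes from the severity/impact matrix;
    customer_priority comes from the optional customer severity/value
    matrix, and may be absent if you don't use a second matrix.
    """
    title: str
    support_priority: str                    # e.g. "SP1"
    customer_priority: Optional[str] = None  # e.g. "CP0"

report = BugReport(
    title="Export fails for accounts with custom fields",
    support_priority="SP1",   # from the severity/impact matrix
    customer_priority="CP0",  # critical to a high-value customer's product
)
```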

If your teams implement a priority matrix like this (or totally unlike this), we'd love to hear how you did it. What worked? What didn't? Find us on Twitter and tell us all about it. Have any questions about this or want help implementing a prioritization process at your company? You can also send us a message on our contact page.