Figure 10-13. Ratio of Promoters to Detractors Compared to the Average Grade

Where the average had merely made us "look good," this representation (see Figure 10-13) made the Service Desk look incredibly good! And the funny thing was, I believe this was a more accurate representation of just how good they were.

After two years of battling this argument, I acquiesced and found a different way to represent that data. I still believe the promoter-to-detractor story is a good (and perhaps the best) one to tell. There is an established standard of what is good that can be used as a starting point. Where Reichheld uses that standard to determine potential for growth, it works fine as a benchmark of high quality. That said, the few departments that could conceptualize the promoter-to-detractor measure invariably raised that benchmark well above the standard. One client I worked with wanted a 90-to-1 ratio. As a provider of fitness classes, they felt that highly satisfying their customers was their paramount duty, and they expected that out of one hundred students they would receive 90 promoters, 9 neutrals, and only one detractor (they happily changed their 5-point scale to a 10-point scale).

We ended up with a new measure, a new way of interpreting the data. We showed the percentage "satisfied" (Figure 10-14): the number of 4s and 5s compared to the total number of respondents. It was definitely better than the average. The third-party vendor of the surveys had no problem representing the data (ours and our industry's) in this manner; they actually produced their reports in numerous forms, including this one.

Figure 10-14. Percentage of satisfied customers

At the time of this writing, our organization is testing this view of the data. It still takes some acceptance, and I'm sure I'll hear arguments that not all of the data is shown (4s and 5s are lumped together, and 1s, 2s, and 3s aren't shown at all). Interestingly enough, I think this is more a resistance-to-change issue than a fault in any of the representations of the data. Most of those who didn't like the promoter-to-detractor ratio (and those I anticipate arguing against percentage satisfied) didn't complain that the average lacked granularity. In each of these newer views, more data is provided than was given in the average, as shown in Table 10-7.
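To make the three representations concrete (the average, the promoter-to-detractor ratio, and the percentage satisfied), here is a minimal sketch in Python. The cutoffs and the sample responses are my own assumptions for illustration, not values from the tables: 4s and 5s count as promoters/satisfied, 1s and 2s as detractors, and 3s as neutral. Adjust the split to whatever your survey vendor uses.

```python
# Illustrative sketch: three ways to represent the same 5-point survey data.
# Assumed split (my assumption): promoters/satisfied = 4s and 5s,
# detractors = 1s and 2s, neutrals = 3s.

responses = [5, 4, 5, 3, 5, 4, 2, 5, 4, 5, 1, 4, 5, 5, 3, 4]  # made-up sample data

average = sum(responses) / len(responses)

promoters = sum(1 for r in responses if r >= 4)
detractors = sum(1 for r in responses if r <= 2)
ratio = promoters / detractors if detractors else float("inf")

pct_satisfied = 100 * promoters / len(responses)  # the "4s and 5s" view (Figure 10-14)

print(f"Average grade:           {average:.2f}")
print(f"Promoters to detractors: {promoters} to {detractors} (about {ratio:.0f} to 1)")
print(f"Percentage satisfied:    {pct_satisfied:.0f}%")
```

Run against a real survey export, the same handful of lines gives you all three views side by side, which makes it easier to discuss which story you want the Report Card to tell.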

Weights and Measures.

With a complete set of measures in hand, our next step was to pull them together into a "metric." To be a Report Card, we needed to roll the data up as well as provide a deeper dive into the anomalies. So far our methodology afforded some rigor and some flexibility. Let's look at each in this light.

Rigor.

Each metric used triangulation; each was made up of at least three categories of information (Delivery, Usage, and Customer Satisfaction).

Information was built from as many measures as the service provider saw fit.

Each measure was qualified as exceeding, meeting, or failing to meet expectations.

Each measure was from the customers' points of view and fit under the rules for Service/Product Health.

Flexibility.

Each measure was selected by the service provider (our Service Desk department).

Each data set was built into a measure per the service provider's choice.

Expectations, while representative of the customers' wants and needs, were defined by the service provider and could be adjusted to reflect what the service provider wanted the customers' perception to be. For example, if the customer was happy with an abandoned rate lower than what the Service Desk thought was adequate, the expectations could be set higher.

Another important area of flexibility for the service provider was the use of weights to apportion importance to the measures. This was first used in the Delivery category. Since Delivery, one of the three Effectiveness areas of measure, was made up of multiple measures (Availability, Speed, and Accuracy), the service provider could weight these subcategories differently.

With an organization just beginning to use metrics, weighting these factors was not a trivial task. We made it easier by offering a recommendation: for support services we suggested that speed was the most important to the customer, and for non-support services availability reigned supreme. So for the Service Desk we offered the following weights:

Availability: 35%.

Speed: 50%.

Accuracy: 15%.

These weights could be changed in any manner the service provider chose. The key to this (and the entire Report Card) was ownership of the metric. Since the service provider "owned" the data, measures, information, and the metric, if they chased the data they would be "lying to themselves." Many of the admonitions I offered preceding this chapter were meant to help you understand the need for accuracy, but even more so the need for honesty. As David Allen, the author of Getting Things Done (Viking, 2001), has said, "You can lie to everyone else, but you can't fool yourself." If the service provider is to use the metrics for the right reasons and in the right way, leadership can never abuse or misuse them.

Even though I had the greatest trust in this department, I still stressed the importance of "telling it like it is." The department not only had to be "willing" to hear bad news; they had to want to hear it, if that was the way the story unfolded.

Weighting the factors can be an easy way to chase the data and make the measures tell the story you hoped for rather than the story that is actually there. One thing we do to make this less tempting is to determine the weights before looking at the data. Then, if we need to change them after seeing the results, it's much easier to resist the temptation to change the weights simply to look better.

This is another great use of the annual survey. We can ask the customers which factors of measure are the most important. Simply ask if Time to Resolve is more important than Time to Respond. You can find out whether being put on hold for 30 seconds is more troublesome than having to call back more than once to fix the same problem (accuracy).

Of course, you can weight these factors equally.

Along with weighting the components of Delivery, we can weight the three categories themselves: Delivery, Usage, and Customer Satisfaction. Weights should be clearly communicated to those viewing the Report Card.

Let"s look at how we roll up the performance measures into a Report Card.

Rolling Up Data into a Grade.

It"s time to take the components we"ve discussed-the Answer Key categories for effectiveness, triangulation, expectations, and the translation grid-to create a final "grade." This includes a means for communicating quickly and clearly the customer"s view of your performance, for the staff, managers, and leadership.

You"ll need the translation grid (see Figure 10-15) as before but with neutral coloring so that it is less enticing to consider "exceeds" as inherently good.

Figure 10-15. Translation Grid

The values I'm using do not reflect the information shared earlier; I wanted to make sure it was clear how to roll up grades and aggregate the measures. Table 10-8 shows all of the measures, their expectations, their actual values, and the translation of each value into a "letter grade." These can be programmed into a spreadsheet so that it calculates the grades for you.

In Table 10-8, the "grade" (shown in the Result column) has already been translated to a letter value: if the actual measure exceeded expectations, it earned an E; if it met expectations, an M; and if it failed to meet expectations, an O.

Within each item (information level), the total grade would be the result of an average using the translation grid. As mentioned, you can even use weights within a category. For example, calls abandoned in less than 30 seconds could be given a weight of 85 percent, while the total number of abandoned calls could be weighted at 15 percent. Another example: overall Customer Satisfaction can be given weight (50 percent) equal to the other three satisfaction questions combined (16.6 percent each). For this example we'll go with those two weighting choices, and all other measures will be of equal weight within their own information category. Table 10-9 shows the next step in the process of rolling the grades up toward a final Report Card. Note: an asterisk beside a grade denotes an anomaly (an O or an E) at a lower level.

Let"s look at the two weighted measures. If the Availability measures were of equal weight the total grade would simply be the average of the two grades, 10 and 5, giving a grade of 7.5. If we rounded up, then this would make it an E. But, since we always choose to err on the side of excellence, we don"t round up. You have to fully achieve the grade to get credit for the letter grade. If the weighting were switched (Abandoned Total = 85% and Calls Abandoned in Less Than 30 Seconds was worth 15 percent of the grade), you"d have an overall E since the 10 for abandoneds would give you an 8.5 before you even looked at the abandoned calls in less than 30 seconds.

In the satisfaction ratings, we find that the grade is an E even though there is an M beneath it. If, instead of weighting overall satisfaction at 50 percent, we gave all four questions equal weight, the grade would simply be the average of the four grades, giving us an 8.75. Still an E, but a lower grade.
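The same hedged arithmetic reproduces the satisfaction example. Again, the E = 10, M = 5 translation and the ordering of the questions are my assumptions for illustration only:

```python
# Satisfaction rollup check (assumed translation: E = 10, M = 5, O = 0).
# Overall satisfaction earned an E; one of the other three questions earned an M.
overall, others = 10, [10, 10, 5]

weighted = 0.50 * overall + sum((0.50 / 3) * g for g in others)  # ~9.17 -> E
equal    = (overall + sum(others)) / 4                           # 8.75  -> still an E

print(round(weighted, 2), equal)  # 9.17 8.75
```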

Notice that Speed: Time to Respond came out as an M (Meets Expectations), but I added the asterisk to signify there was an O hidden beneath. The same is done for the Availability total grade because of the E hidden beneath. This helps the viewer of the Report Card quickly note whether she should look deeper into the information. Buried Es and Os are anomalies that need to be identified.

Let"s continue with these results. If we go with the weighting offered for Delivery (Speed at 50%, Availability at 35%, and Accuracy at 15%), we get the next level of grades for delivery, as shown in Table 10-10.

Let"s continue on in this manner in Table 10-11.

We again have decisions to make. Are Delivery, Usage, and Customer Satisfaction of equal value? This question only arises because we are attempting to roll the grades up to a single grade. In my organization, we stopped at this level, choosing to keep these three key information categories separate, even across different services. So if we were to roll up three support services (Service Desk, second-level, and third-level support), we'd show a rollup of Delivery overall, Usage overall, and finally Customer Satisfaction overall. We found the view of a service using this basic triangulation was as far as we needed to go. If you want to roll it up into a final grade (GPA), the only question left is the weighting. For the purposes of this example, we'll keep it simple and give the categories equal weight.

So the final Report Card for the Service Desk, based on the weights for each category of information, resulted in an overall grade of M*. This can easily be interpreted to mean that the service is meeting expectations overall, with some anomalies that should be investigated.
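Under the same assumed translation (E = 10, M = 5, O = 0) and equal category weights, the final grade works out as follows. The snippet is purely illustrative; only the category letter grades come from the example itself.

```python
# Final rollup: equal weights across the three information categories,
# assumed numeric translation E = 10, M = 5, O = 0.
category_grades = {"Delivery": 0, "Usage": 5, "Customer Satisfaction": 10}  # O, M, E

overall = sum(category_grades.values()) / len(category_grades)  # (0 + 5 + 10) / 3 = 5.0
print(overall)  # 5.0 translates back to an M; the buried anomalies earn the asterisk (M*)
```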

If you looked at the grades as my organization does, not rolling the major categories up into a single grade but looking at each major "subject" area separately, you would get the following:

Delivery: O is an Opportunity for Improvement. Time here should be spent investigating the causes of the anomalies.

Usage: M means that we Meet Expectations. No investigation required in this area.

Customer Satisfaction: E means that we Exceed Expectations. Time here should be spent investigating the causes of the anomalies (exceeding expectations is also an anomaly, one worth understanding and possibly replicating).

The final summary Report Card would look like Figure 10-16.

Figure 10-16. Report Card

With each grade, prose should be included once the investigation has concluded. This prose communicates to leadership what the service provider has determined to be the cause of the anomaly and any suggested actions to mitigate, avoid, rectify, or replicate the causes. These changes are to processes (not people) and should be designed to control future results.

If we learn from our past mistakes, we should not continue to repeat them.

Likewise, if we learn from our past successes, we should find ways to make the anomaly commonplace (if that is deemed an equitable choice).

Recap.

The Report Card allows us to aggregate grades at each level of our metric. You can decide at what level to stop compiling to a final grade. You do not have to end with a single grade. This chapter has been a step-by-step example of how the concepts presented to this point can be (and have been) applied to create a service-level metric.

I have reviewed, in this way, the progression from data to measures to information and, finally, to a metric. From measures onward, I showed how to apply expectations to each so as to give it context and meaning. Along with applying expectations, I showed a suggested method of normalizing the information across measurement types and areas. The use of percentages is only one means of achieving consistency. As you use different measures, evaluating various services and products, you will find that this may not remain easy to do; in my experience, the measures used will differ in type.

Another tool of normalization is the scoring method, in which at every turn we seek to err on the side of excellence. This is why an Opportunity for Improvement is treated as a zero and only an Exceeds will balance it out to a Meets. Since we don't round up (again, to ensure we err on the side of excellence), just one Opportunity for Improvement will keep the total (average) grade from ever being an M unless there is an E included.
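A tiny numeric check of this rule, under the same assumed translation (E = 10, M = 5, O = 0):

```python
# With O = 0, M = 5, E = 10 (assumed), one O drags a Meets-only set below the
# M threshold of 5, and only an E can pull the average back up to it.
print((0 + 5) / 2)   # 2.5 -> O: an O averaged with an M still reads as an Opportunity
print((0 + 10) / 2)  # 5.0 -> M: an E balances the O back out to Meets
```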

What we want are Meets Expectations. Anything else is an anomaly and requires investigation.

Recall some of the following ways that impressions can be skewed:

Artistic license (color choices, alignment, etc.)

The scale used to represent the measures

The format used to represent the data (ratio, average, percentage, etc.)

The overall grade gives a "feel" for the health of the subject. Rolling up the grades into a "final grade" is possible, but won't always be desirable. It depends on the audience.

The Report Card shows how to display, compile, and report the results of your metrics. It doesn't go into the tools you may choose to use for gathering the data. The organization I used as an example had multiple automated tools, some human-interactive tools, and fully subjective surveys. Check your organization for existing sources of data; there are usually more places for you to gather data (unobtrusively) than you would expect.

Other tools for gathering data should be found and leveraged. The Higher Education TechQual+ Project is just one example of a (free) survey-based tool that you can use. Even if you are not in an IT organization, the concepts behind the tool can be applied to your services and products.

"It's not what you look at that matters, it's what you see."

-Henry David Thoreau.

Innovation is not coming up with a totally new concept or idea. The greatest innovations come from seeing things that already exist in a new light. It was a great strength of Thomas Edison to see what others missed in their attempts to invent something new.

Conclusion.

The Report Card is another means of creating a scorecard, with the added benefit of allowing you to roll up the measures into a single grade if desired. By identifying the anomalies in the measures and information, you can clearly designate which results are anomalies and spend your time investigating only those that require the effort. This allows you to:

Identify the areas needing further investigation

Spend resources only where needed

Have an overall "feel" for the health of a service or group of services, using a common language for the evaluation

The key isn't in which tools you use to collect, analyze, compile, or report the information and metrics you develop. The key is in finding a way of doing this work so that it is easily understood by your audience. Your metrics have to tell the proper story, in the right way, to the correct audience.

Advanced Metrics.

You might be thinking, "When should I look at using the other quadrants of the Answer Key?" I've tried my best to keep you from delving into the other quadrants before your organization has had time to work with the Product/Service Health metrics, where you'll get the most benefit initially. This doesn't mean that you can't build metrics to answer specific root questions. But if you are tasked with developing a metrics program (or are doing the tasking), I encourage you to introduce these concepts and tools into your organization slowly.

Figure 11-1 once again shows the Answer Key, so that we can reference where you've been (effectiveness) and where you're going in this chapter.

Figure 11-1. The Answer Key Revisited.

Dipping Your Toes into the Other Quadrants.

So, when should you work on the other three quadrants? It's likely that you'll have opportunities in three areas to test the water in the other quadrants before you embark on full metrics programs in each.

1. As Support for Product/Service Health Efforts.

As discussed earlier, when working within the Service/Product Health (effectiveness) quadrant, you are likely to encounter opportunities to develop and use Process Health (efficiency) measures. An example is when one of your effectiveness indicators is awry and you want to improve the situation. Let's say your speed to resolve is falling below expectations. After you do your due diligence (investigate), you find that resolving your customers' issues is taking longer than it should, based on your organization's service-level agreements. You may then take it upon yourself to work with the team responsible for the service. You may want to define the process and then institute measures along the way to see if you can improve it. These measures will more than likely fall under Process Health.

You may recall that I also wrote strongly against delving into efficiency measures before you or your organization is ready. I also warned that management would want to go that route. So why am I telling you that you might have to go there? Efficiency measures, in this case, will be used only for a specific purpose: to explain the causes of the effectiveness anomalies. This will be a very focused and limited use of efficiency measures, and they will be driven by the effectiveness metrics.

2. To Guide Process Improvement Efforts.

Another early opportunity to delve into the other three quadrants may come about when you are undertaking a process improvement effort. If you are using any of the currently in-vogue improvement methods (like Six Sigma), you will be asked to develop measures, not only to show that your efforts were successful but also to help determine where improvement is needed. While these measures can be from the Service/Product Health quadrant, many times they are from a different area of the Answer Key.

And that"s all right.

My admonitions against starting in any of the other three quadrants are based on my reluctance to have you "develop a program" in those quadrants before you're ready. If you are developing metrics to accompany a specific effort, you can definitely use measures from any quadrant.
