The first clue that the metrics designer has to dig deeper for the real root question is that the answer to the given question could be yes or no.

Metrics Designer: "What do you mean by responsive?"

Director: "Are we answering calls in a timely manner?"

Metrics Designer: "What exactly is a "timely manner"? Do you mean how many seconds it takes or how many times the phone rings?"

Director: "I guess within the first three rings?"

The designer made a note: the director guessed calls should be answered within three rings, but the end users had to provide the actual answer.

Metrics Designer: "Okay. Why do we need to know this? What"s driving the curiosity?"

Director: "I"ve had some complaints that we aren"t picking up the phone quickly enough. Customers say they can"t reach us or they have to wait too long to get to speak to a person."

Metrics Designer: "What const.i.tutes "not quick enough"? Or, what is a wait that is "too long"?"

Director: "I don"t know."

The designer made some more notes. The director was willing to admit that he didn't know what the customer meant by "quick enough" and that he didn't know what would please the customer. This admission was helpful.

Metrics Designer: "I suggest we ask some of your customers to determine these parameters. This will help us determine expectations and acceptable ranges. But we also need to know what you want to know. Why do you want to know how responsive the service desk is?"

Director: "Well, the service desk is the face of our organization-when most customers say "Emerald City Services" they think of the service desk."

Metrics Designer: "So, what do you want to know about the face of your organization?"

Director: "How well it"s received by our customers. I want to know if it"s putting forth a good image."

This is a much better starting point for our metric design. With the root question (How well is the service desk representing Emerald City Services?) we can decide on a more meaningful picture, one that encompasses everything that goes into answering the question.

There are other possible results of our inquisition. Of course, we shouldn't think of it as an interrogation. That would not only give the director the wrong impression but also lead us to ask the wrong questions. No, our job is to reach the root question. We have to help our clients determine what their real underlying needs are and what they need or want to know. One tool for doing so is the Five Whys.

The Five Whys is simple in concept. You ask "why" five times, until the client can no longer answer with a deeper need. Of course, you can't ask "why" repeatedly like a child being told they can't play in the rain. You have to ask it in a mature manner. Many times you don't actually use the word "why." As in the earlier example, sometimes you ask using other terms, like "what" and "how" and "what if?"

The process isn"t so predicated on the use of the word "why" as it is grounded in attempting to reach the root purpose or need. Perhaps the worst error is to jump happily at the first "why" in which you feel some confidence that you could answer. We are all problem solvers by nature, and the possibility of latching onto a question with which we can easily provide an answer is very tempting.

Let"s look at another example, this one ill.u.s.trating how the Five Whys can help us get to the root question.

Director: "I"d like to know if our service desk is responsive to our customers."

Metrics Designer: "What do you mean by responsive?"

Director: "Are we answering calls in a timely manner?"

Metrics Designer: "Why do we need to know this? What"s driving the curiosity?"

Director: "My boss is demanding metrics, and I understand from the service desk that this information is readily available from our Trouble Call System."

Metrics Designer: "Yes, it is. But, we might give the wrong impression of what you think is important if we choose this metric. Perhaps we can see if something meaningful to you could also be provided without additional work to the staff. If we can identify different measures that are easy to collect and give you a better picture of the department, would you be willing to use them instead?"

Director: "Sure."

Metrics Designer: "Excellent. Usually we form metrics around goals you are trying to achieve, processes you want to improve, or problems you"re trying to solve."

Director: "But leadership wants service desk metrics."

Metrics Designer: "Okay. Why do they want service desk metrics?"

Director: "I think they"re asking managers to demonstrate progress on their strategic plans."

Metrics Designer: "Do you have a strategic plan for the service desk?"

Director: "Yes, of course."

Metrics Designer: "So, perhaps we should look at the goals within the plan..."

To get to a root question, ask "why" five times-digging until you reach a root need or question.

I was able to narrow the need down to metrics around a set of goals. Not a perfect root question, but much better than what we started with.
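Although the Five Whys is a human conversation, not an algorithm, the shape of the probing can be sketched as a loop. The sketch below is mine, not the author's; the `answers` dictionary stands in for a live client, and the canned dialogue is a hypothetical compression of the exchange above.

```python
def five_whys(initial_request, answers, max_depth=5):
    """Drill from a surface request toward a root need.

    `answers` maps each statement to the deeper need behind it;
    the chain ends when a statement has no deeper answer, or after
    max_depth probes (the classic "five whys").
    """
    chain = [initial_request]
    current = initial_request
    for _ in range(max_depth):
        deeper = answers.get(current)
        if deeper is None:  # the client can no longer answer "why"
            break
        chain.append(deeper)
        current = deeper
    return chain  # the last entry is the best candidate root need

# Hypothetical dialogue, loosely following the chapter's example:
answers = {
    "Is the service desk responsive?": "Leadership is demanding metrics.",
    "Leadership is demanding metrics.": "Managers must show progress on strategic plans.",
    "Managers must show progress on strategic plans.": "We need metrics tied to the plan's goals.",
}
chain = five_whys("Is the service desk responsive?", answers)
print(chain[-1])  # -> "We need metrics tied to the plan's goals."
```

The point of the loop's exit condition is the same as in the dialogue: you stop not at the first answerable "why," but when no deeper need remains.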

The problem is that you may not even be close to a root question. You may be driven by decrees from above, like the director in the scenario. You may be filling a box on a checklist.

When I run into this problem, I seriously consider walking away. I just let them know that although they believe they want a metric, I don't believe they actually need one. If they want to look at a couple of measures or provide some data points to someone asking for them, I or someone else can provide them, but again, they don't need a full-blown metric. If I'm consulting (and not an employee at the organization who feels he must obey), I run away.

Most root questions come from goals, improvement opportunities, or problems you want to solve.

If you don"t have a list of goals for your unit, you can add value by first developing them. This may seem to be outside of the process for designing metrics, but since you must have a good root question to move forward, you don"t have much choice.

If you have a set of goals, then your task becomes much easier. But be careful; the existence of a documented strategic plan does not mean you have usable goals. Unless you have a living strategic plan, one that you are actually following, the strategic plan you have is probably more of an ornament for your shelf than a usable plan. But, let's start with the assumption that you'll have to identify your goals, improvement opportunities, and/or problems.

The best way I"ve found to get to the root questions when starting from a blank slate is to hold a working session with a trained facilitator, your team, and yourself. This shouldn"t take longer than two or three one-hour sessions.

To start, I usually break out the large Post-it pads and markers and we brainstorm goals for the unit. I've also done this one-on-one with managers or with their teams. Even when I start with the manager, by the second meeting we end up pulling in key team members. If I'm one-on-one with a client, I have to encourage them, coach them, and keep them motivated since there is no one else for them to feed off of.

When I"m working with a team, I have to focus them early. I work hard to keep them on target, avoiding trips down any rabbit holes. This requires that the team be fully present. No phones, laptops, or side conversations. This may seem a bit "controlling," but they thank me later-and I get the job done much faster.

You may need to elicit the root need through one or more facilitated sessions.

The itinerary runs as follows: Five minutes of brainstorming. Five minutes is more than enough if I keep everyone focused and avoid any discussion, critiques, or explanations of what I capture on paper. This is of course harder in a group, but since the facilitator is in charge, she can force the team to truly brainstorm. I find it helpful to remind everyone they'll have time to explain, modify, delete, add, and/or critique later.

Five to ten minutes of clarifications. Again, you need a facilitator. I end up doing both roles, facilitator and metric designer, and I like it that way. But if you're doing this on your own, you'll probably need to enlist a facilitator to help you. Once the team runs out of inputs (it could be in less than five minutes or a little more), I hit each item and ensure the meaning is clear. I allow the client(s) to delete items that they feel are "wrong," add more, and/or modify them for clarity. I don't allow them to delete or change anything because they don't think it's possible to achieve (goals) or to measure (root questions).

Five minutes grouping or categorizing the results. This step is optional. If you have identified more than ten "goals," then there may be a benefit in organizing the items around themes. Again, a trained facilitator will be able to think on her feet here. Sometimes I simply ask the team if they see any logical groupings, rather than try to find them myself.

Five minutes classifying each item as a Goal/Objective, Task, Motto/Slogan, or Measure of Success. Although the purpose was to brainstorm goals, one of the benefits of brainstorming is the identification of other related items. Goals and objectives are "achievable" items on the list you've captured. It is very unlikely that you'll have strategic (long-range) goals, but you may find that some are objectives needed for achieving others. I group them together and identify their classification. You'll also find some tasks. These should be rather obvious as things that the team or you want to do. They may be process steps or just material for a "to do" list. If they fall under a goal or objective, group them accordingly. If the tasks don't have a goal, I help them determine if there is a missing goal or if they are simply job tasks. This is not necessary for identifying root questions, but having a well-developed set of goals is useful. Some of the items will be measures of success (MoS). Basically, they are "how" you know you've succeeded at achieving the goal or completing the task. These measures may satisfy your desire for metrics, but they won't be "metrics." More on these later. Just remember that the root question may not need a metric as the answer; sometimes MoS are enough.

Five minutes per goal. So far you've invested about a half hour on identifying goals (and other items); the rest of the effort may have to span more than one meeting. Once we have goals identified, we'll be able to identify MoS for each (some may have already been identified). These will be new items. If all you need is to track progress toward the goal or to know when you've succeeded at achieving it, you can stop. You can also skip down to "how to create and collect measures" instead of developing a metric. If you have larger questions (or larger goals), a metric may be appropriate. Remember, metrics tell a complete story, using information, measures, and data. So, if you have simple, tactical goals, the need may be for measures rather than a metric. If the goal is strategic and "large," the questions will also be bigger, and likely lead to a metric. Many times the goal will generate questions other than whether the team will achieve the goal. How to achieve the goal is one possibility. Another is to determine the "why" for the goal, the reason for it. Many times these also lead to a metric.
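The classification step in the itinerary above can be sketched as a tiny bookkeeping routine. This is my own illustration, not the author's process; the item texts and the four kind labels (goal, task, motto, MoS) follow the session description, but the data structures are invented for the sketch.

```python
from collections import defaultdict

# The four classifications used in the facilitated session.
KINDS = {"goal", "task", "motto", "mos"}

def classify(items):
    """Group brainstormed items under their supporting goal.

    items: list of (text, kind, parent_goal_or_None) tuples.
    Returns (grouped, ungrouped): tasks and MoS filed under a goal,
    plus top-level items (goals, mottos, orphans) on their own.
    """
    grouped = defaultdict(list)
    ungrouped = []
    for text, kind, parent in items:
        assert kind in KINDS, f"unknown kind: {kind}"
        if parent:
            grouped[parent].append((kind, text))
        else:
            ungrouped.append((kind, text))
    return dict(grouped), ungrouped

# Hypothetical output of a brainstorming session:
brainstorm = [
    ("Resolve 90% of calls on first contact", "mos", "Improve customer experience"),
    ("Improve customer experience", "goal", None),
    ("Publish a self-help knowledge base", "task", "Improve customer experience"),
    ("Service with a smile", "motto", None),
]
grouped, ungrouped = classify(brainstorm)
```

As in the session, orphaned tasks (a parent goal that never appears as a goal item) would prompt the question of whether a goal is missing or the item is simply a job task.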

Remember, the result of getting to the root need may not be a metric.

We"ve discussed using five "whys" to get to a root need and using or developing a strategic plan (simplified version) for getting to the root need. However, any method for eliciting requirements should work. The important thing is to get to the underlying need. The root question should address what needs to be achieved, improved, or resolved.

What is important is to remember that you can work from wherever you start back to a root question and then forward again to the metric (if necessary). I told a colleague that I wanted to write a book on metrics.

"Why? Don"t you have enough to do?" She knew I was perpetually busy.

"Yes, but every time I turn around, I run into people who need help with metrics," I answered.

"Why a book?" She was good at The Five Whys.

"It gives me a tool to help teach others. I can tell them to read the book and I"ll be able to reference it."

"So, you want to help others with designing metrics. That"s admirable. What else will you need to do?"

And that started me on a brainstorming journey. I captured ideas ranging from presenting speeches, teaching seminars, and leading webinars to writing articles and proposing curricula for colleges. I ended up with a larger list of things to accomplish than just a book. I also got to the root need, and that helped me focus on why I wanted to write the book. That helps a lot when I feel a little burned out or exhausted. It helps me persevere when things aren't going smoothly. It helps me think about measuring success, not based on finishing the book (although that's a sub-measure I plan on celebrating) but on the overall goal of helping others develop solid and useful metrics programs.

Once you think you have the root question, chances are you'll need to edit it a little. I'm not suggesting that you spend hours making it "sound" right. It's not going to be framed and put over the entrance. No, I mean that you have to edit it for clarity. It has to be exact. The meaning has to be clear. As you'll see shortly, you'll test the question to ensure it is a root, but beforehand it will help immensely if you've defined every component of the question to ensure clarity.

Define the terms-even the ones that are obvious. Clarity is paramount.

Keep in mind, most root questions are very short, so it shouldn't take too much effort to clearly define each word in the question.

As with many things, an example may be simpler. Based on the conversation on why I wanted to write this book, let's assume a possible root question is: How effective is this book at helping readers design metrics? You can ensure clarity by defining the words in the question.

How effective is this book at helping readers design metrics? What do we mean by effective? In this case, since it's my goal, I'll do the definitions. How well does it work? Does it really help readers?

What do I mean by "how effective?" The how portion means which parts of the book are helpful? Which parts aren"t? Also, does it enable someone to develop high-quality metrics? After all, my goal is to make this book a practical tool and guide for developing a metrics program.

How effective is this book at helping readers design metrics? This may seem obvious, and in this case it is. But, you should still check. There may be a greater need for a definition if I had instead asked, "How effective is my system for designing metrics?"

Even obvious definitions, like this one, may lead you to modify the question. If asked, "What do you mean by this book?" I might very well answer, "Oh, actually I want to know if the system is effective, of which the book is the vehicle for sharing." This would lead us to realize that I really wanted to know if my system worked for others, more so than whether this form of communication was effective.

How effective is this book at helping readers design metrics? Does it help? I have to define what "help" means. Can the reader develop metrics after reading it?

Is the reader better at developing metrics after reading it?

Does the reader avoid the mistakes I preach against?

Readers are another obvious component-but we could do some more clarification. Does "reader" mean someone who reads the "whole" book or someone who reads any part of the book?

Is the reader based on the target audience?

How effective is this book at helping readers design metrics? What do I mean by "design"? As you have read, for me designing a metric involves a lot more than the final metric. It includes identifying the root need and then ensuring a metric is the proper way to answer it. So, while "design" may mean development, it has to be taken in the context of the definition of a metric.

What do I mean by metric? Do I mean the metric part of the equation or does it include the whole thing: root question, metric, information, measures, and data? If you'd read the book already, you'd know the answer to this question. The metric cannot be done properly without the root question, and is made up of information, measures, data, and other metrics. Even with that, what I mean in the root question may be a little different from this, because the outcome of following the process may be to not create a metric. In that case, using the root question to provide an answer would be a success, although no metric was designed.

Based on this exercise, if I chose to keep the root question the same, I'd now know much better how to draw the picture. Chances are, though, after analyzing each word in the question, I would rewrite the question. The purpose behind my question was to determine if the book was successful. And since success could result in not designing metrics, I would rewrite my question to be more in tune with what I actually deem success: the effective use of my system. The new root question might be: How effective is my system in helping people who want or need to design metrics?

Testing the Root Question.

If you think you"ve got the root question identified, you"re ready to proceed. Of course, it may be worthwhile to test the question to see if you"ve actually succeeded.

Test 1. Is the "root" question actually asking for information, measures, or data? "I"d like to know the availability of system X." This request begs us to ask, "Why?" There is an underlying need or requirement behind this seemingly straightforward question. When you dig deep enough, you"ll get to the real need, which is simply a request for data. The root question should not be a direct request for data. The following are examples of requests for data: Do we have enough gas to reach our destination? Is the system reliable or do we need a backup? How long will it take to complete the project?

Test 2. Is the answer to the question going to be simple? Is it going to be a measure? Data? If the answer is either "yes" or "no," chances are you're not there yet or the question doesn't require a metric to provide an answer. It may seem too easy, that you wouldn't get questions after all this work that could be answered with a yes or no. But it happens. It may mean only a little rework on the question, but that rework is still necessary. Is our new mobile app going to be a best seller? Should we outsource our IT department? Are our employees satisfied? These may seem like good root questions, but they can all be answered with a simple yes or no.

Test 3. How will the answer be used? If you've identified a valid root question, you will have strong feelings, or a clear idea, of how you will use the answer. The answer should provide discernible benefits. Let's take my question about the effectiveness of this book at helping readers develop metrics. If I learn that it's highly effective at helping readers, what will I do? I may use the information to gain opportunities for speaking engagements based on the book. I may submit the book to be considered for a literary award. I may have to hold a celebration. If the answer is that the book is ineffective, then I may investigate possible means of correcting the situation. I may have to offer handbooks/guidelines on how to use the book. I may have to offer more information via a web site. If the feedback is more neutral, I may look at ways to improve in a later edition.

The key is to have predefined expectations of what you will do with the answers you'll receive. When I ask a client how they'll use the answer, if I get a confused stare or their eyes glaze over, I know we're not there yet.

Test 4. Who will the answer be shared with? Who will see the metric? If the answer is only upper management, then chances are good that you need to go back to the drawing board. If you've reached the root question, many more people should benefit from seeing the answer. One key recipient of the answer should be the team that helped you develop it. If it's only going to be used to appease upper management, chances are you haven't gotten to the root or the answer won't require a metric.

Test 5. Can you draw a picture using it? When you design the metric, you will do it much more as an art than a science. There are lots of courses you can take on statistical analysis. You can perform exciting and fun analysis using complex mathematical tools. But I'm not covering that here. We're talking about how to develop a usable metrics program, a tool for improvement. If you can't draw a picture as the answer for the question, it may not be a root question.
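The five tests can be treated as a simple checklist. The sketch below is my own framing, not the author's; each test is phrased so that "yes" is a pass, and every judgment is a human call that the code merely tallies.

```python
# Each question is worded so that True means the candidate passes that test.
TESTS = [
    "Is it NOT a direct request for information, measures, or data?",
    "Does it require more than a simple yes/no or single-measure answer?",
    "Do you have a clear idea of how the answer will be used?",
    "Will the answer be shared beyond upper management?",
    "Can you draw a picture as the answer?",
]

def passes_all(judgments):
    """judgments: list of five booleans, one per test, in order.

    Returns (passed, failed_tests) so you know which tests to revisit.
    """
    if len(judgments) != len(TESTS):
        raise ValueError("one judgment per test, please")
    failed = [t for t, ok in zip(TESTS, judgments) if not ok]
    return (not failed), failed

# Hypothetical candidate question that fails only the sharing test:
ok, failed = passes_all([True, True, True, False, True])
# ok is False; `failed` names Test 4, so the question needs rework,
# or the answer simply doesn't require a metric.
```

Listing the failed tests, rather than just a pass/fail verdict, mirrors the advice that a failing question may need only a little rework, or no metric at all.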

Not all root questions will pass these tests.

I"m not saying that all root questions must pa.s.s these tests. But, all root questions that require a "metric" to answer them must. If your question doesn"t pa.s.s these tests, you have some choices.

Develop the answer without using data, measures, information, or metrics. Sometimes the answer is a process change. Sometimes the answer is to stop doing something, do it differently, or start doing something new. It doesn't have to result in measuring at all.

Develop the answer using measures (or even just data). This may be a one-time measure. You may not need to collect or report the data more than once.

Work on the question until it passes the five tests, so you can then develop a metric. Why would you want to rework your question simply to get to a metric? You shouldn't. If you feel confident about the result, stop. If the client says you've hit upon the root question, stop. If the question resonates fully, stop. Wherever you are, that's where you'll be. Work from there. Don't force a metric if it's not required.

Your task is not to develop a metric; it's to determine the root question and provide an answer.

Developing a Metric.

It"s an interesting argument: is the process of designing metrics a science or an art? If you read statistics textbooks, you might take the side of science. If you read Transforming Performance Measurement: Rethinking the Way We Measure and Drive Organizational Success by Dean Spitzer (AMACON, 2007), or How To Measure Anything by Douglas Hubbard (Wiley, 2010), you might argue that it"s an art. I propose, like most things in real life (vs. theory), it"s a mixture of both.

One place it"s more art than science is in the design of a metric. I can say this without reservation because to design our metric, you want to actually draw a picture. It"s not fine art. It"s more like the party game where you"re given a word or phrase and you have to draw a picture so your teammates are able to guess what the clue is.

At the first seminar I taught on designing metrics, "Do-It-Yourself Metrics," I broke the students into groups of four or five. After stepping through the exercise for identifying root questions, I told them to draw a picture to provide an answer to a question. The question was, "How do we divide our team's workload to be the most productive?" Figure 2-1 shows the best of the students' answers.

Figure 2-1. Workload division metric

This picture shows how each person (represented by a different cup) has different levels of work. The level of the liquid represents the amount of work "in each person's cup." The line near the top is the highest level the liquid should be poured to, because the froth will cause it to overflow. This line represents the most each person can actually handle, leaving room at the top for the "extra," like illness, lunch, vacation, etc. By looking at the picture, the manager gets an easy-to-understand story of who has too much work, who can take on more, who is more productive, and who needs to improve their skill sets so that they can eventually have a larger cup.

A useful part of drawing the picture was clarity around the question. To ensure that we drew it right, we needed to also define the terms we were using in the picture: productivity, workload, and team.
