

Define the terms-even the ones that are obvious. Here too, clarity is paramount.

We found out that workers A, B, and C made up the team-they did the same type of work in a small unit of the organization. We also learned that the word "team" didn't mean that the group normally worked together. On the contrary, this "team" worked independently on different tasks. This simple realization gave the manager more ideas.

Workload was defined as the tasks given to the workers by the manager. It excluded many other tasks the workers accomplished for other people in the organization, customers, and each other. The only tasks that counted in this picture were the ones with deadlines and accountability to the management chain.

Productivity was defined as how many tasks were completed on time (or by the deadline).

These definitions are essential to developing the "right" metric. We could have drawn a good picture and designed a metric without these clarifications, but we would have risked measuring the wrong things.
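
To make these two definitions concrete, here is a minimal sketch in Python that computes productivity as the share of deadline-bound tasks completed on time. The task records and dates are invented purely for illustration; they are not from the example in the text.

```python
from datetime import date

# Hypothetical task records: only tasks with a deadline and accountability
# to the management chain count as "workload" under our definition.
tasks = [
    {"worker": "A", "deadline": date(2024, 3, 1), "completed": date(2024, 2, 28)},
    {"worker": "B", "deadline": date(2024, 3, 1), "completed": date(2024, 3, 3)},
    {"worker": "C", "deadline": date(2024, 3, 5), "completed": date(2024, 3, 5)},
]

def productivity(task_list):
    """Share of tasks completed on or before their deadline."""
    on_time = sum(1 for t in task_list if t["completed"] <= t["deadline"])
    return on_time / len(task_list) if task_list else 0.0

print(f"Team productivity: {productivity(tasks):.0%}")  # 2 of 3 on time -> 67%
```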

Don't assume the terms used in the question are understood.

The metric would also be useful later, when the manager provided training opportunities for the staff. If the training did what was expected, the cups would increase in size-perhaps from a 32-ounce to a 44-ounce super size.

Does this seem strange? Does it seem too simple?

While I can't argue against things being "strange," very few things are ever "too simple." Einstein once said, "Make it as simple as possible, but no simpler." This is not too simple-like Goldilocks was fond of saying, "it's just right."

Once we have clear definitions for the terms that make up the root question, we will have a much better picture! Remember the importance of a common language? It is equally important that everyone fully understands the language used to create the root question.

I work with clients to modify these drawings until they provide the full answer to their question. This technique has excellent benefits. By using a picture: It's easier to avoid jumping to data. This is a common problem-remember the natural tendency to go directly to the data.

It's easier to think abstractly and avoid being put in a box. Telling someone to "think outside the box" is not always an effective way to get them to do so.

We avoid fears, uncertainty, or doubt about the ability to collect, analyze, and report the necessary information. These common emotions toward metrics restrict your ability to think creatively and thoroughly, and they tend to make you "settle" for less than the ideal answer.

We have a non-threatening tool for capturing the needs. No names, no data or measures. No information that would worry the client. No data at all. Just a picture. Of course, this picture may change drastically by the time you finalize the metric. This is essentially a tool for thinking creatively without being restricted by preconceptions of what a metric (or a particular answer) should be.

One key piece of advice is not to design your metric in isolation. Even if you are your own metric customer, involve others. I am not advocating the use of a consultant. I am advocating the use of someone-anyone-else. You need someone to help you generate ideas and to bounce your ideas off of. You need someone to help you ask "why." You need someone to discuss your picture with (and perhaps to draw it). This is a creative, inquisitive process-and for most of us, it is immensely easier to do this with others. Feel free to use your whole team. But don't do it alone.

A good root question will make the drawing easier.

Having a complete picture drawn (I don't mean a Picasso) not only makes the identification of information, measures, and data easy, but also ensures you have a good chance of getting the right components.

The picture has to be "complete." After I have something on paper (even if it's stick figures), I ask the client, "What's missing?" "Does this fully answer your question?" Chances are, it won't. When I did the conference seminar, the team members had cups-but they were all the same size and there was no "fill-to-here" line. After some discussion and questioning, the group modified the drawing to show the full story.

It's actually fun to keep modifying the picture, playing with it until you feel it is complete. People involved start thinking about what they want and need instead of what they think is possible. This is the real power of drawing the metric.

Identifying the Information, Measures, and Data Needed.

Only after you have a complete picture do you address the components. This picture is an abstract representation of the answer(s) to our root question. It's like an artist's rendition for the design of a cathedral-the kind used in marketing the idea to financial backers. When you present the idea to potential donors, you don't need to provide them with blueprints; you need to pitch the concept.

Next come the specific design elements that ensure the building will be feasible. As the architect, you can provide the artist's conception and do so while knowing from experience whether the concept is sound. Your next step is to determine what will go into the specific design-the types of structures, wiring, plumbing, and load-bearing walls. Then you will have to determine the materials you need to make it a reality-in our case, what do we need to fill in the metric?

Let's look at the workload example. How do we divide our team's workload to be the most productive? Remember, the picture is of drink cups-various sizes from 20-ounce, to 32-ounce, to a super-sized 44-ounce. Each cup has a mark that designates the "fill" level-and if we fill above this line, the froth will overflow the cup. Using the picture, we need to determine the following: How do we measure our team's level of productivity?

How do we currently allocate (divide) the work?

What are other ways to allocate the work?

Of the three pieces of information listed, only the first seems to need measures. The other two are process definitions. Since our question is driven by a goal (to improve the team's productivity), the process for designing the metric will produce other useful elements toward the goal's achievement.

Information can be made up of other information, measures, and data. It isn't important to delineate each component-what's important is to work from the complex to the simple without rushing. Don't jump to the data!

An example of how you can move from a question to measures and then to data follows.

How do we measure our team's level of productivity?

    How much can each worker do?
        Worker A?
        Worker B?
        Worker C?

    How much does each worker do?
        By worker (same breakout as the previous measure)

    How much does each worker have in his or her cup?
        By worker

    How long does it take to perform a task?
        By worker
        By type of task
        By task
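
To make the breakdown above tangible, here is a small sketch in Python showing how these measures might be recorded per worker rather than as a single team-wide standard. The capacity and load numbers are invented for illustration only.

```python
# Hypothetical per-worker measures: capacity ("can do"), current load
# ("does do"), and how full each worker's "cup" is.
workers = {
    "A": {"can_do_per_week": 10, "assigned_per_week": 9},
    "B": {"can_do_per_week": 14, "assigned_per_week": 8},
    "C": {"can_do_per_week": 12, "assigned_per_week": 13},
}

for name, w in workers.items():
    fill = w["assigned_per_week"] / w["can_do_per_week"]
    status = "overflowing" if fill > 1 else "has room"
    print(f"Worker {name}: cup is {fill:.0%} full ({status})")
```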

I logged a sub-bullet for each worker to stress what seems counterintuitive to many people-most times there is no "standard" for everyone. When developing measures, I find it fascinating how many clients want to set a number that they think will work for everyone.

Machinery, even when manufactured to painstakingly precise standards, doesn't function identically. Why do we think that humans-the most complex living organisms known, with beautiful variety-would fit a standardized behavior pattern?

Of course it would be easier if we had a standard-as in the amount of work that can be done by a programmer and the amount of work each programmer does. But this is unlikely.

You may also be curious why we have the first and second measures-how much a worker can do and how much he or she accomplishes. Since the goal is to increase the productivity of the team, the answer may not be in reallocating the load-it may be in finding ways to get people to work to their potential. A simpler reason is that we don't know if each worker is being given as much as they can do-or too much.

Looking back at Figure 2-1, we may need to decide if the flavor of drink matters. Do we need to know the type of work each worker has to do? Does the complexity of the work matter? Does the customer matter? Does the purpose of the work matter? Does the quality of the work matter? Should we only be measuring the manager's assigned work? If we exclude other work, do we run the risk of improving productivity in one area at the cost of others?

These questions are being asked at the right time-unlike if we had started with the data. If we started with a vague idea (instead of a root question) and jumped to the data, we'd be asking these questions after collecting reams of data, perhaps analyzing them and creating charts and graphs. Only when we showed the fruits of our labor to the client would we find out if we were on the right track.

I want to help you avoid wasting your time and resources. I want to convince you to build your metrics from a position of knowledge.

Collecting Measures and Data.

Now that we've identified the information needed (and the measures that make up that information), we need to collect the data. This is a lot easier with the question, metric picture (answer), and information already designed. The trick here is not to leave out details.

It's easy to skip over things or leave parts out because we assume they're obvious. Building on the workload example, let's look at some of the data we'd identify.

First we'll need task breakdowns so we know what the "work" entails-what comprises the tasks-so we can measure which tasks each worker "can" do. With this breakdown, we also need classifications for the types of tasks/work. When trying to explain concepts, I find it helpful to use concrete examples. The more abstract the concept, the more concrete the example should be.

Task 1: Provide second-level support.

Task 1a: Analyze issue for cause.

Task 1b: Determine solution set.

Task 1c: Select best solution.

This example would be categorized as "support." Other categories of a task may include innovation, process improvement, project development, or maintenance.
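
One way to capture this kind of breakdown and classification is a simple nested structure. The sketch below is only one possible representation, in Python; the field names are my own, and the category labels are taken from the ones named above.

```python
# A hypothetical representation of a task breakdown with its category.
# Other categories could include innovation, process improvement,
# project development, or maintenance.
task_breakdown = {
    "id": "Task 1",
    "name": "Provide second-level support",
    "category": "support",
    "subtasks": [
        {"id": "Task 1a", "name": "Analyze issue for cause"},
        {"id": "Task 1b", "name": "Determine solution set"},
        {"id": "Task 1c", "name": "Select best solution"},
    ],
}
```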

We'll also need a measure of how long it takes each worker to perform each task, as seen in Table 2-1.

If we have measures for the work components, we should be able to roll this data "up" to determine how long it takes to do larger units of work.
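
A sketch of what that roll-up might look like, assuming we have per-worker durations for each subtask. The hours below are invented stand-ins for the kind of figures Table 2-1 would hold.

```python
# Hypothetical hours per subtask, by worker (stand-in for Table 2-1).
hours = {
    "A": {"Task 1a": 1.0, "Task 1b": 2.0, "Task 1c": 0.5},
    "B": {"Task 1a": 1.5, "Task 1b": 1.0, "Task 1c": 0.5},
}

# Rolling the component measures "up": the time a worker needs for the
# larger unit of work (Task 1) is the sum of its subtasks.
for worker, subtasks in hours.items():
    print(f"Worker {worker}: Task 1 takes about {sum(subtasks.values())} hours")
```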

Next, we'll need measures of what is assigned currently to each worker.

Worker A is working on support while workers B and C are working on maintenance.

Since we need to know what each worker is capable of ("can do"), we will need to know the skill set of each worker, with specific identification of what they "can't" do. Many times we find that the measure of X can be determined in part (or fully) by the measure of its inverse.

Worker A is not capable of doing maintenance work. That's why she isn't assigned to maintenance and does the support-level work instead.
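
As a small illustration of measuring something through its inverse, the sketch below derives what each worker "can't" do from the skills the work requires and the skills the worker has. The skill names and assignments are hypothetical, chosen only to mirror the example.

```python
# Hypothetical skills required by each category of work, and each
# worker's skill set. What a worker "can't" do is the inverse of
# what their skills cover.
required_skills = {
    "support": {"troubleshooting", "customer communication"},
    "maintenance": {"scripting", "system administration"},
}
worker_skills = {
    "A": {"troubleshooting", "customer communication"},
    "B": {"troubleshooting", "customer communication", "scripting", "system administration"},
    "C": {"troubleshooting", "customer communication", "scripting", "system administration"},
}

for name, skills in worker_skills.items():
    cant_do = [cat for cat, req in required_skills.items() if not req <= skills]
    print(f"Worker {name} can't do: {cant_do or 'nothing'}")
```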

Again, it's a lot easier once we work from the top down. Depending on the answers, we would perform investigations to ensure that the assumptions we arrive at are correct. Then we can make changes (improvements) based on these results.

Worker A wants to do more maintenance-type tasks, but doesn't feel confident in her abilities to do so. The manager chose to develop a comprehensive training program for Worker A.

Workers B and C showed they had the skills necessary to provide support, and were willing to do so. The manager divided the support work more evenly across the team.

These types of adjustments (and new solutions) could be made throughout, depending on the answers derived from the metrics.
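
A tiny sketch of the kind of reallocation check that supports these adjustments, reusing the hypothetical capacity and load figures from earlier. All numbers are invented; the point is only the comparison of "can do" against "does do."

```python
# Hypothetical reallocation check: move work from overflowing cups to
# workers with room, without exceeding anyone's capacity.
capacity = {"A": 10, "B": 14, "C": 12}   # what each worker CAN do per week
assigned = {"A": 9, "B": 8, "C": 13}     # what each worker currently does

for name in capacity:
    slack = capacity[name] - assigned[name]
    if slack < 0:
        print(f"Worker {name} is over capacity by {-slack} task(s); reallocate.")
    else:
        print(f"Worker {name} has room for {slack} more task(s).")
```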

It was not necessary to be "perfect" in the identification of all measures and data. If you are missing something, it should become evident-and stand out-when you try to build the information and, finally, the metric. If you have data or measures that you don't need, this, too, will become quickly evident when you put it all together for the metric.

You're not trying to be perfect out of the gate, but you definitely want to be as effective as possible. You'd like to be proactive and work from a strong plan. This happens when you use the root question and metric as your starting point.

It is truly amazing to see how a picture-not charts and graphs, but a creative drawing depicting the answer-works. It helps focus your efforts and keeps you from chasing data.

The metric "picture" provides focus and direction, and helps us avoid chasing data.

How to Collect Data.

Once we've designed what the metric will look like, and have an idea of what information, measures, and data we'll need to fill it out, we need to discuss how to gather the needed parts. I'm not going to give you definitive steps as much as provide guidelines for collecting data. These "rules of thumb" will help you gather the data in as accurate a manner as possible.

Later, we'll expand on some of the factors that make the accuracy of the data uncertain. This is less a result of the mechanisms used and more a consequence of the amount of trust the data providers have in you and management.

Use Automated Data When Possible.

When I see a "Keep Out, No Trespassing" sign, I think of metrics. A no-trespassing sign is designed to keep people out of places where they don't belong. Many times it's related to safety. In the case of collecting data, you also want to keep people out.

Why? The less human interaction with the data, the better. The less interaction, the more accurate the data will be, and the higher the level of confidence everyone can have in its accuracy. Whenever I can collect the data through automated means, I do so. For example, to go back to the example in Chapter 1, rather than have someone count the number of ski machine or stair stepper users, I'd prefer to have some automated means of gathering this data. If each user has to log information into the machine (weight, age, etc.) to use the programming features, the machine itself may be able to provide user data.

The biggest risk with using automated data may be its abundance and variety. If you find the exercise machines can provide the data you are looking for (because you worked from the question to the metric, down to the information and finally the measures/data), great! But normally you will also find a lot of other data not related to the metric. Any automated system that provides your data will invariably also provide a lot of data you aren't looking for.

For example, you'll have data on the demographics I already listed (age and weight). You'll also have data on the length of time users are on the machines, as well as the exercise program(s) selected, the users' average speed, and the total "distance" covered in the workout. The machine may also give information on average pulse rate. But if none of this data serves the purpose of answering your root question, none of it is useful.
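
If the exercise-machine scenario were automated end to end, the discipline would look something like the sketch below: keep only the fields your root question needs and ignore the rest. The record and field names are hypothetical, not taken from any real machine.

```python
# A hypothetical record as an exercise machine might report it.
machine_record = {
    "user_id": 42, "age": 35, "weight_kg": 80, "minutes": 30,
    "program": "hill climb", "avg_speed": 7.2, "distance_km": 3.5,
    "avg_pulse": 128,
}

# Our root question only needs to count machine usage, so we keep just
# enough to identify a user session and discard the rest.
FIELDS_NEEDED = {"user_id"}
usable = {k: v for k, v in machine_record.items() if k in FIELDS_NEEDED}
print(usable)  # everything else is data we aren't looking for
```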

In our workload example, it will be difficult to gather data about the work without having human interaction. Most work accounting systems are heavily dependent on the workers capturing their effort, by task and category of work.

Beware!

So what happens when your client finds out about all of this untapped data?

He'll want to find a use for it! It's human nature to want to get your money's worth. And since you are already providing a metric, the client may also want you to find a place for some of this "interesting" data in the metric you're building. This risk is manageable and may be worth the benefit of having highly accurate data.

The risk of using automated data is that management will want to use data that has no relation to your root question, just because this extra data is available.

You should also be careful of over-trusting automated data. Sometimes the data only seems to be devoid of human intervention. What if the client wants to use the weight and age data collected by the ski machine? The weight may have been taken by the machine and be devoid of human interaction (aside from the human standing on it), but the age is human-provided data, since the user of the machine has to input it.

Employ Software and Hardware.

Collecting data using software or hardware is the most common form of automated data collection. I don't necessarily mean software or hardware developed for the purpose of collecting data (like a vehicle traffic counter). I mean something more like the ski machine-equipment designed to provide a service with the added benefit of providing data on the system. Data collected automatically provides a higher level of accuracy, but runs the risk of offering too much data to choose from. Much of the data I use on a daily basis comes from software and hardware-including data on usage and speed.

Conduct Surveys.
