So your innocent and proper Time to Resolve measure, if unexplained, could create morale problems due to fear, uncertainty, and doubt.

If fear is born of ignorance, then asking for data without sharing the purpose makes you the midwife.

The boss says: "I only want the data so I know what's going on."

The worker hears: "I want data so I can see where you're screwing up."

The boss says: "I need the data so I can make better decisions."

The worker hears: "I need data so I can decide who to fire, cut work hours from, or penalize."

The boss asks: "What's the problem with reporting on your performance? I should know about it, shouldn't I?"

The worker wonders: "Yes, so why don't you know about it? Why ask me for data about it? Why not just ask me? Why not spend some time with me?"

If you have spent any considerable amount of time in an organization with multiple layers of management, the preceding conversation may sound extremely familiar. Unfortunately, it is not unique. I believe it (or something close to it) occurs in every workplace, every working day of the year.

We need to combat this common problem. To create a useful metric, you have to know, in advance of collecting the data, how the results (the answers) will be used. This knowledge is essential for designing the metric properly and identifying the correct information, measures, and data. It is also essential if you want accurate data whenever humans are involved in providing it.

There is a simple enough test you can perform. Ask your friend, relative, coworker, or boss a simple, personal question. Better yet, ask your significant other a simple question like, "How much do you weigh?" Don't do it yet. First read about the expected outcomes, before you get injured.

When you ask someone a personal question without offering any explanation as to the reason, one of the following five reactions will likely occur:

1. You are physically accosted. (At least, that's what happened to me when I asked my wife that question.)

2. You get data that accurately reflects what the data provider believes you want to hear.

3. You get data that accurately reflects what the data provider wants you to hear.

4. You get accurate data.

5. You get a question in return: "Why do you want to know?"

Most likely, people will in turn ask you, "Why do you want to know?" It's a natural reaction. Along with this expected response, you may also notice respondents becoming defensive. Watch their body language. They may not answer at all, and if they do, they probably won't do so happily or willingly.

Part two of this experiment would be for you to provide a seemingly valid purpose for your question, which would not be, "Oh, I'm just curious," or "I just want to gain some insight into your weight." Though, many times when we ask management why they want certain data, we get an equally ambiguous answer: "Oh, I just want to know what's going on," or "I just need insight into how we're doing." A lack of clarity will create inaccuracy in the data. In software programming, we use the term GIGO: garbage in, garbage out. The same holds for emotionally charged discussions with others. Don't be fooled: asking for data will create emotional tension. If you provide garbage reasons for asking a question, you'll get garbage answers.

It won"t help to simply a.s.sure the people that you"re interviewing that ev-eryone is being asked the same questions. But, for example, if you first tell them that you"re doing a study for school and you need to gather the weight of ten random people, they may answer your question.

Instead of an immediate answer, you may get one or more of the following responses:

"Will you use my name?"

"Who will see it?"

"What are you going to do with it?"

All of these questions are trying to get at the same thing: how are you going to use the answer that I give you? Until you answer this question to the satisfaction of the respondent, any answer you receive will be highly suspect.

Even if you explain fully how you will use it, the accuracy will still be suspect, depending on how the respondent feels personally about the information. This may seem logical to you. I've heard the arguments before. You may argue that you aren't looking for personal information about your workers, that you're just trying to develop metrics around staffing so you can justify another position!

Back to our experiment. Ask someone, "How much do you weigh?"

If by some small chance you get an answer to this question, write it down. Also write down the level of confidence you have in the accuracy of the answer. Are you 90 percent confident that it's accurate?

If you didn"t get an answer at first, then explain the purpose of your question (for example, that you are collecting data for a cla.s.s project). If you get an answer, write it down. What"s your confidence in its accuracy now?

As a third attempt, share how you intend to use the answer. It doesn't really matter what you say. You can go with, "I'll aggregate the data and show the average weight of the ten people I ask. Then we'll discuss if that weight is a healthy norm for people." See if you get an answer yet.

Don"t forget to observe body language the whole time. See if the respondent is less defensive as you provide more information. Check your level of confidence in the answer.

The explanation of how the metric and its components will be used should be documented in the metric development plan. Don't get hung up on the need to document everything or to make it pretty. The value of the development plan lies in the clarity of its purpose. Of course, plans also provide consistency, prolonged guidance, and direction for others who may use them. By creating the development plan, you will have thoroughly thought out the metric and will be able to communicate its worth to any who ask.

We"ve seen that the questions we ask can result in a lack of answers or inaccurate answers unless we clearly define our intentions for the information gathered. Another key to getting better answers (or one at all if the respondent is still reluctant), is to communicate how the results won"t be used.

Explain How Information Won't Be Used.

When you"re questioning a person about his weight, tell him how you won"t use the information. Sincerely a.s.sure him that no matter what, you will not use his information in a way that he"d find offensive. This could include using it in a published study or as a case study for a cla.s.s. You may not know what his particular fears are. By explaining how you won"t use the information, you provide an opportunity for him to share what he would find inappropriate (threatening or fearful). If you want accurate data, you have to be able to a.s.sure the people involved in providing the data how you will and how you won"t use the data. And how you won"t use it may be more important to the person providing the data to you than how you will use it.

This promise takes some diligence on your part. You have to remember your promises, which can be tricky when you get a very innocent request for information. You have to make sure that honoring the request stays within the agreement you made with the data provider.

The most common and simple agreement I make is to not provide data to others without the source's permission. If you provide the data, it's your data. You should get to decide who sees it.

This should be captured in the development plan under "How it won't be used." Probably the major reason to put the plan in writing is that documented agreements allow you to keep your word with less difficulty.

Defining how the metrics won"t be used helps prevent fear, uncertainty, and doubt.

Identify Who Will Want to Use the Metrics.

While you may believe you are the only customer who wants to view or use the metric, chances are there are many customers of your metrics. A simple test is to list all of the people you plan on sharing your information with. This list will probably include your boss, your workers, and those who use your service.

Anyone who will use your metric is a customer of it. You should only show it to customers.

Everyone that you plan to share your metrics with becomes a customer of that metric. If they are not customers, then there is no reason to share the information with them. You may be a proponent of openness and want to post your metrics on a public web site, but the information doesn't belong to you. It is the property of those providing the data. It is not public information; it was designed to help the organization answer a root question. Why share it with the world? And today we know that anything shared publicly is out there forever.

The provider of the data should be the primary metric customer.

The possible customers are as follows:

- Those who provide the data that goes into creating the metric.

- Those you choose to share your metrics with.

- Those who ask you for the metric and can clearly explain how they will use it.

It is important to clearly identify the customers of your metrics because they will have a say in how you present the metric, its validity, and how it will be used. If you are to keep to the promise of how it will and won"t be used, you have to know who will use it and who won"t.

These customers should be documented in your development plan, with a note on the type of customer they are. Are they providing the data? Are they front-line supervisors? Are they executive management? The type of customer will help define what level of information they receive and the communications necessary around the metric's use.
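
To make this concrete, here is a minimal sketch of what such a plan record might look like. The field names, customer types, and values are hypothetical illustrations, not a prescribed format:

```python
# A minimal, hypothetical metric development plan record.
# Every field name and value here is illustrative, not prescribed.
metric_plan = {
    "name": "Time to Resolve",
    "root_question": "Are we resolving customer issues quickly enough?",
    "how_it_will_be_used": "Find process bottlenecks in issue handling.",
    "how_it_wont_be_used": [
        "Individual performance reviews",
        "Sharing outside the team without the providers' permission",
    ],
    "customers": [
        {"who": "Help desk staff",        "type": "data provider"},
        {"who": "Front-line supervisors", "type": "data provider and consumer"},
        {"who": "Executive management",   "type": "consumer (summary level)"},
    ],
}
```

Even a rough record like this forces you to answer, in writing, who gets the data and under what agreement.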

Who will and won"t use the metric is as important as how it will and won"t be used.

Before I go to the next component, let's take a sanity check. I know quite well that sometimes you can't tell your boss "no." I know that your boss may be demanding data, measures, or information and may not be sympathetic to your need for assurances that he won't misuse or abuse the information you provide. And if the accounts of numerous authors, coworkers, and friends are to be believed, bad managers far outnumber good ones.

I am not advocating that you fall on your sword over an innocuous information request. I am proposing that, if you are put in the unenviable position of middle person between a "bad" manager and the workers, you do everything within your power to ensure that the information you provide is not misused.

Schedule for Reporting, Analyzing, and Collecting.

The gathering of the data, measures, and information you will use to build the metric requires a plan. Figure 3-3 shows the timing for developing this part of the plan.

Figure 3-3. Schedules.

Most metrics are time-based. You'll be looking at annual, monthly, or weekly reports of most metrics. Some are event-driven and require that you report them periodically. As part of your metric development plan, you will have to schedule at least the following three facets of your metric:

Schedule for reporting. Look at the schedule from the end backward to the beginning. Start with what you need. Take into account the customers you've identified to help determine when you will need the metrics. How the metrics will be used will also help determine when you'll need to report them.

Schedule for analysis. Based on the need, you can work backward to determine when you'll need to analyze the information to finalize the metric. This is the simplest part of the scheduling trifecta, since it is purely dependent upon how long it will take you to get the job done. Of course, the other variables are the amount of data and the complexity of the analysis. But, ultimately, you'll schedule the analysis far enough in advance to get it done and review your results. I highly recommend you have at least one other pair of eyes review your analysis. Depending on the complexity of your data, you may also need a quality check of the raw data used in the analysis.

Schedule for collection. When will you collect the data? Based on when you will have to report the data, determine when you will need to analyze it. Then, based on that, figure out when you will need to collect it. Often, the schedule for collecting the data will depend on how you collect it. If collection is automated, you may be able to gather the data whenever you want. If it depends on human input, you may have to wait for periodic updates. If your data is survey-based, you'll have to wait until you administer the surveys, and then allow additional time for people to complete them.

Since you started at the end, you know when you need the data in hand and can work backward from that date. Depending on the collection method you've chosen, you can plan out when you need to start the collection process and schedule accordingly.
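
As a minimal sketch of that backward arithmetic (the dates and durations below are made-up assumptions, not recommendations):

```python
from datetime import date, timedelta

report_due = date(2025, 3, 31)        # when the customers need the metric
analysis_time = timedelta(days=5)     # assumed: analyze, review, quality-check
collection_time = timedelta(days=10)  # assumed: survey window plus reminders

# Work backward from the reporting date.
analysis_start = report_due - analysis_time
collection_start = analysis_start - collection_time

print(f"Start collecting by {collection_start}.")
print(f"Start the analysis by {analysis_start}.")
print(f"Report on {report_due}.")
```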

Nothing new here: if you want to achieve success, you must plan to succeed. Don't do it when you "get the chance." Plan it. Schedule it. If it's worth doing, it's worth planning to do it right.

And if it"s worth doing right, it"s worth making sure you can do it right more than once. But remember, the development plan isn"t just about repeatability, it"s about getting it right the first time by forcing yourself to think it all out.

Analysis.

Documenting analysis happens when you think it does... during the analysis phase.

Figure 3-4. Analysis.

This may be the most obvious section of the metric development plan so far. After data collection, analysis is the next thing most people think of when I mention metrics. All of the statistics classes I've taken lead to the same end: how to analyze the data you've carefully gathered. The analysis documentation in the plan must include all metric data rules, edits, formulas, and algorithms; each should be clearly spelled out for future reference.
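
For example, one way to spell a formula out unambiguously is as executable code. The sketch below invents a Time to Resolve rule and a single data edit purely for illustration:

```python
from datetime import datetime

def time_to_resolve_hours(opened: datetime, resolved: datetime) -> float:
    """Hypothetical documented formula: Time to Resolve, in hours.

    Rule: resolved timestamp minus opened timestamp.
    Edit: a record resolved before it was opened is rejected outright
    rather than silently clamped to zero.
    """
    if resolved < opened:
        raise ValueError("resolved timestamp precedes opened timestamp")
    return (resolved - opened).total_seconds() / 3600.0
```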

What may be in contention is the infallibility of the analysis tools. There are those who believe that if you have accurate data (a few don't even care if it's accurate), you can predict, explain, or improve anything through statistical analysis. I'm not in that camp.

I have great respect for the benefits of analysis and, of course, I rely on it to determine the answers my metrics provide. But for me, the design of the metric (from the root question, to the abstract picture, to the complete story) is more important than the analysis of the data. That may seem odd. If we fail to analyze properly, we will probably end up with the wrong conclusions and, thereby, the wrong answers. But if we haven't designed the metric properly to begin with, we'll have no chance of getting the right answers, regardless of the quality of our analysis.

And if we have a good foundation (the right components), we should end up with a useful metric. If the analysis is off center, chances are we'll notice it in the reporting and review of the metric.

Without a strong foundation, the quality of the analysis is irrelevant.

While the analysis is secondary to the foundation, it is still important to capture it. The analysis techniques (formulas and processes) are the second-most volatile part of the metric (the most volatile is the graphical representation). When the metric is reported and used, I expect it to be changed. Even if I've laid a strong foundation through my design, the final product will still need to be tweaked.

Consider it part of the negotiations with management. If you've worked with the leadership to determine the root questions, and you managed not to wear out your welcome, then when you deliver your metric you are fulfilling your part of the bargain. You are going to give the leader what she wants: useful answers to help inform her decisions. Although the question is hers, she may still feel the need to tweak the answer. It is rare that a manager accepts a metric as is. They almost always feel the need to modify the final picture. Whether this is because of their bigger-picture view of the organization or simply the need to feel that they contributed, I don't know, and it doesn't matter. You should be open to recommendations for changing the graphical representation.

If the graphical presentation changes, you may very well have to tweak the analysis that fed the metric. If the leader wants to see the percentage of customers who were satisfied instead of the average customer satisfaction rating, you will have to change your analysis.

The raw data (the number of respondents, the ratings from each, the date of each response, and what each rating means) is still good, but the way you present the analysis, and the analysis itself, need to change.
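
Here is a short sketch of that change, using made-up ratings and an assumed "satisfied" threshold of 4 on a 5-point scale:

```python
# The same raw ratings feed two different analyses; only the
# computation (and therefore the picture) changes.
ratings = [5, 4, 2, 5, 3, 4, 1, 5]  # made-up 1-to-5 satisfaction ratings
SATISFIED = 4                       # assumed: a rating of 4 or 5 counts as satisfied

average_rating = sum(ratings) / len(ratings)
percent_satisfied = 100 * sum(r >= SATISFIED for r in ratings) / len(ratings)

print(f"Average rating:    {average_rating:.2f}")      # 3.62
print(f"Percent satisfied: {percent_satisfied:.1f}%")  # 62.5%
```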

It helps to have the analysis documented in the development plan, again to help you think it through, but also for replication. Of all the parts of the plan, this component most needs to be documented to allow repeatability. You have to ensure that you analyze the data the same way each time and that any changes are captured, since the analysis directly affects the final metric displayed.

A Picture for the Rest of Us.

You"ve drawn a picture of your metric. This picture was an abstract representation of the answer to the root question. Another major component of a well thought-out metric is another picture-one your customers can easily decipher. This picture is normally a chart, graph, or table. Plan to include one in your metric development plan.

Figure 3-5. Visual Depiction.

It can easily be more than one picture. If you need a dashboard made up of twelve charts, graphs, and tables, then so be it. If you've done a good job with the root question and the abstract version of your metric, determining how you'll graphically represent the metric should be an easy step. The really good news is that you can't go wrong with this component. If you pick a stacked bar chart and later realize it should have been trend lines, you can change it. No harm done.

Not only can you change the way you represent your metric if you find a different structure would tell your story better, but you may need to have multiple representations anyway. This will depend on who the customer is, how each customer will use the metric, and how you will share it. For each group, the manner in which you present the metric may vary.
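
As a sketch of how cheaply the same data can be drawn more than one way (the matplotlib library and the numbers below are my assumptions, not requirements):

```python
import matplotlib.pyplot as plt

# Made-up monthly Time to Resolve averages, rendered two ways.
months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
hours = [42, 38, 45, 31, 29, 26]

fig, (bar_ax, line_ax) = plt.subplots(1, 2, figsize=(10, 4))
bar_ax.bar(months, hours)                # bars for side-by-side comparison
bar_ax.set_title("As a bar chart")
line_ax.plot(months, hours, marker="o")  # a line to show the trend
line_ax.set_title("As a trend line")
for ax in (bar_ax, line_ax):
    ax.set_ylabel("Average hours to resolve")
plt.tight_layout()
plt.show()
```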

This component should be fun. Let your creativity shine through. Find ways to explain visually so that you need less prose. A picture truly can tell a thousand-word story. No matter how good the picture is, you'll want to add prose to ensure the viewer gets it right, but that prose should be as brief as possible. We want the picture to tell the story, clearly. Don't over-complicate the picture.

You may, in fact, have more data than necessary to tell your story. You may find yourself reluctant to leave out information, but sometimes less really is more, especially if the extra information could confuse the audience. You're not required to put data into your metric just because you've collected it.

Also, experiment with different ways to depict the metric. You might even test ideas for the visuals with your metric customers.

Narrative Description.

I love it when someone asks, "Do I have to spell it out for you?" My answer is frequently, "Thanks! That would be nice."
