Whither the yellow brick road?

Why do the same names show up time after time in things like the Distributor of the Year program?

What do the Ryans, Pardos, Stones and Maclamores, Minors, Bass, Leffels, Betts, Burkes, Settles, Franklins or Ogburns know that we don’t? (I know I missed some, but you get the idea).

It strikes me that a lot of our industry’s efforts may be misguided in the search for a formula for growth and success. Should we add service, remain parts-only, sell on the Internet? Whither the yellow brick road?

Performance measurement has been getting a lot of press lately. After all, Peter Drucker preached the invulnerability of ‘process,’ and performance measurement has become one of the pillars of organizational success. We all know how challenging it can be to get right, but often we’re not really sure why it is so tough.

I believe that W. Edwards Deming had this in mind when he said that 97 percent of what matters in an organization can’t be measured. He added that the unintended consequence of conventional measurement was tampering, or manipulation without genuine understanding. He seemed to recognize that many of our deepest frustrations come from an “inherent sense of the natural way of life, lost in the process.”

The Obvious Is Not Always Apparent

In looking at a number of case studies of distribution superstars, I see four deadly sins they successfully avoided:

1. They never rely just on financial statements, or look only at this month, last month and year to date. Profit and loss, revenue and expenses are measures of important things to a business, but this information is too little and too late. Too little in the sense that other results matter too, such as customer satisfaction, customer loyalty and customer advocacy. Too late in the sense that by the time you see bad results, the damage is already done.

Wouldn’t it be better to know that profit was likely to fall before it actually did fall, and in time to prevent it from falling?

Most financial performance reports summarize your financial results in four values: 1) actual this month; 2) actual last month; 3) percentage variance between them; and 4) year to date.

Even when measuring and monitoring non-financial results, we may still use this format. It encourages reaction to percentage variances (differences between this month and last month) that suggest performance has declined, such as any variation greater than 5 or 10 percent (a threshold that is usually set arbitrarily).

Who honestly expects the percentage variance to always show improvement? And if it doesn’t, does that really mean things have gotten bad and need fixing? What about the natural and unavoidable variation that affects everything, the fact that no two things are ever exactly alike? Relying on percentage variations runs a great risk of reacting to problems that aren’t really there, or of missing problems that are. Shouldn’t we concentrate on reports that reliably identify real problems that need attention, instead of wasting time and effort chasing every single variation?
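A quick calculation shows the difference. What follows is a minimal sketch (with made-up monthly revenue figures, not data from any distributor) contrasting an arbitrary percentage-variance threshold with the natural process limits of an XmR control chart, the tool Deming’s school would reach for. The threshold flags four months as “problems,” while the control chart finds no real signal at all:

```python
# A minimal sketch (hypothetical figures): the usual percentage-variance
# report versus the natural process limits of an XmR control chart.

revenue = [102, 98, 105, 99, 101, 97, 104, 100, 96, 103, 99, 102]

# Conventional report: flag any month-over-month swing beyond an
# arbitrarily chosen threshold (here 5 percent).
THRESHOLD = 0.05
for prev, curr in zip(revenue, revenue[1:]):
    variance = (curr - prev) / prev
    if abs(variance) > THRESHOLD:
        print(f"{prev} -> {curr}: flagged ({variance:+.1%})")

# XmR chart: derive natural process limits from the average moving
# range; only points outside the limits signal a real change.
mean = sum(revenue) / len(revenue)
moving_ranges = [abs(b - a) for a, b in zip(revenue, revenue[1:])]
avg_mr = sum(moving_ranges) / len(moving_ranges)
upper = mean + 2.66 * avg_mr  # 2.66 is the standard XmR constant
lower = mean - 2.66 * avg_mr
signals = [x for x in revenue if x > upper or x < lower]
print(f"limits {lower:.1f} to {upper:.1f}; real signals: {signals}")
```

Every flagged month in the first report is just routine variation; reacting to it is the tampering Deming warned about.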

2. They don’t use performance measures to reward and punish people. Counterintuitive? Failing to support a culture of learning, by not tolerating mistakes and focusing on failure, is the essence of short-term thinking. In groups of more than five or six people, the results are undeniably a team’s product, not an individual’s. When people are judged by performance measures, they will do what they can to reduce their risk of embarrassment, a missed promotion or discipline.

It is pretty much human nature to be tempted to modify or distort the data, or at least to report the measures in a way that shows a more favorable result. When that happens, no one learns what really drives performance, and no one knows how best to invest resources to get the biggest improvements.

3. They avoid using brainstorming to select key measures, yet they never exclude staff from performance analysis and improvement. Brainstorming inevitably produces too much information and therefore too many measures. It rarely encourages a strong enough focus on the specific goals to be measured. Often, everyone’s understanding of the goal is not sufficiently tested, and the bigger picture is not taken into account (such as unintended consequences and relationships to other goals and objectives).

Looking only at available data means that important and valuable new data will never be identified and collected, and organizational improvement stays constrained by the ‘stuff we know.’ Adopting industry-accepted measures is like adopting the industry’s goals and ignoring the unique strategic direction that sets your business apart from the pack.

One of the main reasons staff get cynical about collecting performance data is that they never see any value come from it. When people aren’t part of designing the measures, they find it nearly impossible to feel ownership of the process that brings those measures to life. And when people get no feedback about how the measures are used, they can do little more than conclude they wasted their time and energy.

4. They never set goals without ways to measure and monitor them, and they never collect too much useless data while missing the relevant data. Too many organizations haven’t made the link between the knowledge they need and the data they actually collect. They collect data because it has always been collected, because other distributors collect the same data, because it is easy to collect, or because someone once needed it for a one-off analysis (so they might as well keep collecting it in case it is needed again).

Well-designed performance measures are an essential part of streamlining the scope of data collection, because they link the knowledge your organization needs with the data it ought to be collecting.

Measurement vs. Plans vs. Activity

Business planning is well established in most high-performance distributors, which means they generally have a set of goals or objectives. In the real winners, these cascade through the different functional levels of the group.

What is interesting, though, is that the majority of these goals or objectives are not measured well. Where measures have been nominated for them, they usually read something like this: “Implement a customer relationship management system by June 2014” (for a goal of improving customer loyalty).

This is not a measure at all; it is an activity. Measures are ongoing feedback on the degree to which something is happening. If this goal were measured well, the metric would be evidence of how much customer loyalty the distributor actually had, such as tracking repeat business from customers or the expansion of product coverage within categories.
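To make that concrete, here is a minimal sketch (with hypothetical customers, invoice months and one of many possible loyalty definitions): each month, compute the share of active customers who also bought the month before, giving ongoing feedback rather than a one-off activity:

```python
# A minimal sketch (hypothetical customers and invoice months) of an
# ongoing loyalty measure: the share of this month's active customers
# who also bought the month before.
from collections import defaultdict

# (customer, month) pairs standing in for invoice history.
invoices = [
    ("acme", 1), ("acme", 2), ("acme", 3),
    ("beta", 1), ("beta", 3),
    ("gamma", 1),
]

months_by_customer = defaultdict(set)
for customer, month in invoices:
    months_by_customer[customer].add(month)

def repeat_rate(month):
    """Share of customers active in `month` who also bought last month."""
    active = [c for c, months in months_by_customer.items() if month in months]
    if not active:
        return 0.0
    repeats = [c for c in active if (month - 1) in months_by_customer[c]]
    return len(repeats) / len(active)

for m in (2, 3):
    print(f"month {m}: repeat rate {repeat_rate(m):.0%}")
```

A falling repeat rate would warn of slipping loyalty long before the damage showed up in the financial statements.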

Design in the feedback loops from the start. After all, how else will you know whether your goals (the changes you want to make in your branches) are really happening, and that you are not wasting your valuable effort and money?

One last thought. Peter Senge, in a book titled “The Dance of Change,” observed that “Nature has no ends; it builds, continually, upon the interplay of the means of evolution and biology. By contrast, in work based on measurement, the ends justify any perversion of the means, because information about the means, the ways in which things get done, and the ways in which people and work evolve, do not show up in the measurements.”

Bill Wade is a partner at Wade & Partners and a heavy-duty aftermarket veteran. He is the author of Aftermarket Innovations. He can be reached at [email protected].
