To “Meet” or “Exceed” Expectations? The answer may surprise you…

Reflections on Friday’s “Rant”

In my last post, I went on a bit of a “rant” against my wireless provider, something I rarely do in public on this blog, and almost never by name. But the breakdowns in everything from how the CSR handled the specific transaction to the underlying design of their processes and enabling technologies generated such a wealth of fodder on “how NOT to run a CS function” that I really felt compelled to let loose.

Today, there are so many examples of “poor” customer service littered all over the blogosphere and twitter-verse that most of you probably get tired of reading about them (I know I do). But sometimes “venting” a bit on paper helps me get over a bad experience, and since today’s writing tablet is my blog, I figured “what the heck”… As it turns out, I did sleep pretty well after I got that rant out of my system, so apparently, the “therapy” worked for me.

That notwithstanding, I was fairly certain that this post would go largely unnoticed, and that I would awake on Saturday morning with a new attitude, ready to spend my usual “blogging hour” generating some fresh new topics for the coming week (which were bound to be more upbeat and forward-looking). At least, that was the intent.

As with most things, however, my expectations were once again incorrect. What I thought was a stupid little “throw away” post on my Friday afternoon CS experience generated about twice the readership of any of my posts in the last several months. While I’d like to think it was “earned” based on good preparation, research, and authorship, the fact is that this one had zero preparation, was written on pure emotion and adrenaline, and was probably one of the most “long winded” posts I’d written in a while.

Learning from a bad experience…”Delight the Customer”!!!

Reading through the comments, and reflecting a bit on the post, I realized that I struck quite a nerve in people who are passionate about Customer Service, and who (like all of us) have had their share of negative experiences. The responses I got ranged from “resignation” (this sucks, but unfortunately, “it is what it is”), to comparative reflection on companies who do in fact “get it”, to a sense of anticipation that that type of excellence will once again return to this industry. In fact, reading and thinking about some of the positive examples of CS in our society (both from today and the “good old days”) was a little like “comfort food” for me in healing the pain I endured on Friday. What was a negative post created positive energy. And that was good.

However, amidst all of those gyrations, I couldn’t help but reflect on words that kept popping up as I read the article and reader comments, which mostly revolved around the notion of “delighting the customer” and “exceeding expectations“. These words seem to show up often just after we juxtapose a bad customer experience with a really good example of what it should look like. Phrases like “CS should look to (fill in the blank) company, and how they Delight the Customer (instead of really mucking it up like they did)” are very common in these situations.

If we could only have providers that “Delight the Customer”!!!…Those words make us feel hopeful that we someday will return to the good old days and the types of companies who really “got it”.

In the late 80’s and 90’s, we were trained to think differently about Customer Service, and to follow in the footsteps of what I’ll call the “CS legends” of that era. Those of us who started our careers back then remember all too well those iconic images of companies “going the extra mile”, often with some kind of dramatic, back-breaking demonstration of Customer Service “heroics”. Most of us probably remember the old FedEx example from the mid 90’s where an employee hired a helicopter to get a package to the recipient (I can’t remember if it was an organ donation, or a 10 year old’s package from Santa, but I digress :)). Whether it’s that anecdote, or one from the current era, that type of “over delivery” has become something of an accepted standard for what “best practice” should look like, and the basis around which some of us continue to shape our expectations.

While I don’t excuse experiences like the one I had on Friday, I must admit that I did begin to reflect on, and actually question, what “the standard” should be. What should my expectation have been? What should their goal for delivery against that expectation have been? And if their goal was to “delight the customer”, what should that look like in everything from the process to the behavior of the rep himself?

A New Standard: “Delighting the Customer” and “Exceeding Expectations”?

Some (counterintuitive) perspectives from the “old school”…

I recall working with an older (and wiser) colleague of mine about 15 years ago (he’s even older and wiser now!), who told me that the goal for Customer Service should be to MEET, not exceed, the customer’s expectation. And as a relatively young and unseasoned professional, my reaction was something like bull #$@*!

Heck, I probably recited that same FedEx story, along with every other example that was floating around in the B-school literature and CS journals in those days. Back then, I would have rationalized my response by telling myself that this guy was “an engineer” after all. (No offense to you engineers out there, but in the industry I was in at the time, engineers had developed a reputation among the “Customer Ops side” of the business as “old school” thinkers and “barriers to change”; an obvious error in judgment by those in CS, but reality nonetheless. Why? Because that industry, which was going through unprecedented change and feverish levels of competition, had developed two competing cultures: engineers on one side, who were literally “keeping the lights on”, and the Customer side of the organization (Sales, PR, Customer Service, Marketing, etc.), who were often flaunting their MBA’s, B-school pedigrees, and FedEx case studies around the C-suite, with considerable levels of success. “Pragmatists and doers” versus “ivory tower thinkers”. Always a recipe for disrespect of alternate views, and perhaps a subject for a future post.)

At the time, I remember thinking to myself, “this guy really has his head in the sand” (or somewhere less desirable!!). His words were so foreign to me, and they sounded so ‘ass backwards’. After all, in addition to all of the new “feel good” CS legends and case studies, surely there were the old adages of “the customer is king” and “the customer is always right” that he should have been tuned into. So how could anyone think that “exceeding expectations” could be ANYTHING BUT “universal truth“?

Well, we’ve both moved on from there. We’re both older and wiser on many issues, and I do enjoy seeing him occasionally and sharing a good cigar. While we’ve never really talked much more about that specific exchange, working together in the years after showed me enough about what he really meant.

What he was getting at was this: we, as service providers, spend so much time trying to beat the standard that we often miss it entirely. He was also saying that when we try to envision what it will take to truly “delight the customer”, we often get it wrong. That is, we often don’t take the time to learn what will and won’t delight the customer. And if we get it wrong, it becomes a slippery slope.

To “Meet” or “Exceed”?

While you may agree or disagree with his perspective, or its application in the real world, there is something to reflect on here.

But what happened on Friday had nothing to do with failing to “delight me”, or a failure to “over-deliver” on anything. What it was, was an abysmal failure to meet even the most basic expectations. And now that I look back on it, many of the things the company may have “thought” it was doing to “delight me” (new kiosks, a new ‘sign in process’, flashy technology, etc.) were actually viewed by me as “background noise” to the transaction I was there to take care of.

Fact is, the simple act of meeting basic expectations can, and often does, drive BIG success- both in terms of magnitude and sustainability.

Think about McDonald’s. The same (or very close) experience: every product, every store, every time! They thrive on CONSISTENCY of the core product, and very little, in my view, on “exceeding expectations” and “the delight factor” (at least in the context of the “legends” I referenced earlier). Most people who frequent McDonald’s expect what they expect, and get it consistently.

Southwest Airlines is another good example. Come on!!–An airline that actually made money when everyone else was losing their tails…and they don’t even have first class cabins or in-flight meals? That’s right. Because they MEET the expectations that they set. No surprises (which in my view is a bigger key to Customer Satisfaction than over-delivery).

Of course, I fully understand that some of those who preach the principle of “delighting the customer” are really saying the equivalent of “over deliver on the promise you make (whatever that promise is), and then give that little extra touch” (or what we used to call in New Orleans where I grew up, “lagniappe”, which in the Creole dialect means “a little bit extra”).

In fact, I believe that is exactly what both Southwest and McDonald’s do. Not that they set the bar low, but they set it commensurate with the market they serve and target, and then deliver exactly to that standard. Then, where opportunities present themselves, they surround that experience with small touches of the “extras” in the way of smiles, humor, and courtesy. It’s all relative to the standard you set. And consistently delivering against your standard is a sure way to profitability.

Over delivery sometimes backfires…

Unfortunately, there are many practitioners, trainers and consultants who still interpret the “delight factor” as the type of dramatic heroics exhibited by the old legends of the game. The problem with this aspiration, no matter how noble, is that it often takes your eye off the ball, so to speak, and distracts attention from meeting the core expectation. And that’s the main lesson I took away from my old colleague.

While “exceeding expectations” may score you some points, it can also be a slippery slope for a few reasons. First, you may not know what that elusive “delight factor” or threshold is for a specific customer or demographic. It’s hard for even the best companies to get right, even with decent market intelligence, and you usually don’t have a lot of “practice shots” to test your hypotheses without experiencing some fallout. More importantly, over-delivery on something that doesn’t matter to customers AT BEST loses you a few bucks, and AT WORST serves as the type of background noise mentioned above that will only frustrate a customer more. And with the state of our Market Research and survey effectiveness these days (see the comments on the above referenced post), you have a better chance of getting it wrong than right.

As an example, I am reminded of two separate instances where my flight left 15 minutes early, clearly “delighting” a few, but not me and some other passengers who were stuck in security both times. And yes, I was an hour early in both instances. Thankfully, it was before I started blogging, so y’all didn’t have to endure that rant. It was ugly, I can assure you.

Over delivery can also cost you dearly. Not always, but every investment in a customer is likely to have a point of diminishing returns. And in this economic climate, you need to make tough choices on how you will differentiate, compete and win. Often, competing on core product and core delivery is a winner.

Once again, Wisdom Wins…

Getting back to my old colleague for a minute. The fact that he was an engineer (a profession I have since learned to respect greatly) did give some insight (albeit a few years later) as to why he believed what he did. Of course, some of it was based on his experience with customers directly over the years. But some of it was based on his own history.

You see, if engineers get it wrong, delivering outside of spec on either side, it’s usually “lights out” (figuratively and sometimes literally). I would suspect the same with accountants, airline pilots, and any other profession where meeting the expectation is the first and often only objective. It’s a philosophy that shapes them, and realizing that helps us all understand their words and perspectives better. The reality is that none of them would probably take issue with the “delight factor”, but they will also never put it as priority 1. And I, for one, don’t want them to.

Understand the expectation, set the bar, and deliver on it. If, after all of that, you’ve got a little “lagniappe” left to offer, have at it…

-b

Author: Bob Champagne is Managing Partner of onVector Consulting Group, a privately held international management consulting organization specializing in the design and deployment of Performance Management tools, systems, and solutions. Bob has over 25 years of Performance Management experience and has consulted with hundreds of companies across numerous industries and geographies. Bob can be contacted at bob.champagne@onvectorconsulting.com

A Customer Service Rant- Rare but necessary…

Well, Friday has come and gone, and the weekend is now here. I’ve written a bit more than usual this week, enabled mostly by a lighter travel schedule than usual.

I’d be remiss if I didn’t say thanks to all of you bloggers and tweeters out there. I always learn more from your feedback than I do from my own writing. I am convinced, however, that you’ve got to work both sides of the equation: ‘give to get’, so to speak. I can only hope my posts this week made a “small dent” in what I’m sure will be a perpetual debt balance I hold in the “bank of innovation and improvement ideas” fueled by the blogosphere and Twitter-verse.

All in all, it was a good week. But today was most interesting, from the standpoint of a few Customer Service experiences that left a lot to be desired in my eyes. I’ll leave you with a few parting thoughts from one of those interactions, which I’m sure will end up being fodder for some of next week’s posts.

Earlier this afternoon, my wife and I had to run three errands, all of which I estimated would take an hour or two, tops. One of those involved what I believed would be a quick run to the ATT store to re-evaluate my mobile WIFI data card and plan, and to evaluate some new options I’d been thinking about. While the other two errands were less than perfect from a customer satisfaction perspective (I’ll spare you the details on those), the ATT experience was absolutely priceless in terms of providing new fodder on the topics of business improvement, and specifically “how not to run a customer service storefront”.

In today’s environment, many businesses elect to leave their “storefront” open for a variety of reasons, despite the myriad of online and mobile options available to their customers. It’s no easy decision for them to do this, and the pressures to go the other way and abandon them altogether are more than compelling. So when they are left open, be it for product sales or specialized service, I would imagine the company would want to extract maximum value from that decision. In fact, if I were the CCO of that company, my strategy for any local offices that were maintained (left open) would look something like this:

  • Create the most welcoming and helpful environment for customers to shop and buy my products
  • Make sure that any service I elect to provide in that office to existing customers (which, because it is provided in the local office itself, will be visible to my new shoppers/future customers) is first rate and without blemish
  • And where possible, begin to show existing customers (those who are there for “service on their existing account”) other channels that are available to help them (THE CUSTOMER) make their lives a little easier, and perhaps even deliver a BETTER level of service than they had before (which would be a true win-win: make the customer’s life easier and lower the company’s cost at the same time!)

On this particular day, and at this particular store, ATT (and I rarely name a company when I rant on a CS experience!) failed miserably on all three counts. I was there to take care of several things that I was having trouble taking care of online. I’ll spare you the details on the entire experience, but there was one aspect of the interaction that was truly mind-blowing for me.

As a backdrop to the story, I am a very heavy “data user” given the degree to which I travel. I have always had an unlimited data plan, which I was “grandfathered” into a few years back when ATT converted over to a plan that was less expensive, but capped the user at 5 gigs of data. While my usage is significant, I felt I was still undershooting the capped plan even in my heaviest months. So while the lower cost was appealing, I was still worried about the probability of overshooting the cap, and what it would cost me if I did.

Specifically, my questions were how much data I had used in the past several months, whether or not I was operating below the 5 gig threshold, and, in the months where I may have gone over, what the overage charge would have been. I tried to answer those questions online but was unable to get a clear enough picture, so I figured I’d just stop in and get some one-on-one assistance in dealing with the problem. Some may say that this is a simple “self service” transaction, and maybe it is, but as someone who is fairly familiar with online and mobile channels, I found this one to be more difficult than usual and figured I would have more success in person. I had considered calling them by phone, but I had dealt with this store before and expected this to be relatively painless.

While at the counter, I began to describe my issue to the rep, an issue I would have expected them to have faced hundreds of times previously. After several quasi-blank stares, I explained the problem again, this time illustrating on a piece of paper (complete with illustrative bar charts) exactly what I was trying to evaluate. Still no luck. And that’s where things went south.

Progressively watching the rep deal with this mathematical dilemma was like watching a robot get short-circuited in front of me, not to mention the other shoppers (prospective customers) who were in the store watching this unfold:

  • First, we had to endure him searching several different screens for the usage history (10 minutes that felt like an hour) just to find two months’ worth of data. He told me it would have been easier for me to do it online because the customer has access to more info. Nice try. I’d already tried to locate the usage history on the site and it was like navigating a maze. That’s why I was there to begin with.
  • After concluding that waiting for him to find another 2 months of history would have taken another 10 minutes, I decided to rely on the one month of overage that occurred in December (which was about 1 gig over the 5 gig threshold) to begin the process of ascertaining what the overage “charge” would have been. Again, like a deer in the headlights…
  • After watching another blank stare for a while, I just laid out the complex math problem for him as simply as I could (cost per KB * the KB overage; see the quick sanity check after this list). Calculator in hand, he does the calc 3 or 4 times, and tells me (God’s honest truth) $20,000. I tell him that’s impossible (like I really should have had to!) because if I was on the 5 gig plan, I would have only paid $45, and while I did go 20% over, I seriously doubt the overage would have been $20 grand. That’s one heck of an overage fee, eh? I’m a bit impatient at this point, as he keeps recalculating and coming up with the same $20,000 answer, over and over again…
  • Now for the climax of the story… he kind of gives up, shrugs his shoulders and says (you can’t make this stuff up!!!), “Well sir, I find it hard to believe too, but all I can tell you is that most people don’t look at their bills and just pay it. Sir, I’ve not had a question like this before.”
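
For what it’s worth, the arithmetic he was wrestling with is a one-liner. Here’s a minimal sketch of it in Python, using made-up rates (I don’t know ATT’s actual overage pricing, so treat every number as hypothetical), showing both a plausible answer and one way a simple unit slip could produce something like $20,000:

```python
# Sanity check of the overage math. All rates here are hypothetical,
# purely for illustration -- not ATT's actual pricing.

GIG_IN_KB = 1_048_576              # 1 gig expressed in kilobytes
overage_kb = 1 * GIG_IN_KB         # roughly 1 gig over the 5 gig cap

rate_per_kb = 0.00005              # hypothetical: $0.00005 per KB (about $50 per gig)
print(f"Plausible overage: ${overage_kb * rate_per_kb:,.2f}")      # about $52

# One way to land near $20,000: apply a per-MB price as if it were per-KB
misapplied_rate = 0.02             # hypothetical $0.02 per MB, used per KB by mistake
print(f"Unit mix-up:       ${overage_kb * misapplied_rate:,.2f}")  # about $20,972
```

However he actually got there, the point stands: a quick order-of-magnitude check against the $45 plan price would have flagged the answer immediately.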

Now, had this been an agent with only a few days on the job, I may have cut him some slack. But this guy had been there more than a few years. I was rather speechless, as were the 3 people around us listening to and watching this painful exchange. In a state of amazement (the emotion of frustration had left me by this point), I decided to leave and take care of this by phone, or perhaps by spending another hour or so back on the website trying to figure it out myself. But my confidence in this being a simple exercise was shot. All I was sure of was that taking care of this myself would be easier than enduring any more of the “in store” interaction.

And therein lies the “lessons learned” from my Friday afternoon at the ATT Store:

  • If the company wanted to encourage self service transactions in lieu of “in store” transactions— mission accomplished
  • My confidence in the website and mobile transactions being easier for the customer —0
  • Confidence of the other shoppers that ATT’s “after the sale service” will be a good experience— 0
  • Likelihood of trusting another bill from ATT without being accompanied by a forensic accountant—Less than before I walked in the door.

Seriously though, I actually do believe that ATT is the right vendor for me at the moment, given the alternatives and my rather long and uneventful relationship with the company thus far. In fact, there are some unique things about their product that I can’t get from their competitors. So I will likely remain a customer despite the experience on this particular Friday afternoon. For now, I’m going to chalk all this up to the guy just having a bad day, the CSR equivalent of a “senior moment” or perhaps a minor stroke!

But I believe the real story here is about the state of Customer Service in general, and how the industry is executing the transition to more self service technology. I genuinely don’t believe this is a problem unique to ATT.

There is nothing wrong with trying to shift customers to lower cost channels, be it for payment or general inquiry. But we appear to have swung so far to that end of the spectrum that the staff left to deal with specialized problems have forgotten what Customer Service is, and more importantly, why it exists in the first place.

In the end, I think we will ultimately get all of this into equilibrium, but it is incumbent on CS leadership to make the transition to new technology, new channels and new processes a much more deliberate one. And that will depend on their ability to design processes that factor in the customer experience at a level equal to, or greater than, the cost savings involved. Not to say that cost savings aren’t paramount, but they can’t be a trade-off at the expense of the customer, all else being equal.

Win-win solutions that enhance customer service, while still producing a much lower cost structure for the company, are out there. I am confident in saying that. It just won’t happen inside of cultures like the one I experienced today. Let’s just hope he was having a bad day.

CC: ATT ???

-b


Visualizing Waste

I received some good feedback via comments, twitter, and email on yesterday’s post regarding what I referred to as “Metric Hoarding” (KPI overload). Some of that feedback was from clients facing that very challenge today, identifying well with the issue of “KPI overload”. Many of you offered thoughts from your own experiences, which is always helpful in refining and building upon my own ideas and solutions. I guess that’s the whole purpose of social and collaborative media. Pretty cool stuff.

But one reader/twitter colleague (Redge – twitter @versalytics, and author of his own blog “Lean Execution”) used a pretty compelling analogy in his feedback to me. In amplifying my point about “KPI overload” and the need for some deliberate “pruning” of your metrics database, he used the analogy of “people who leave their windshield wipers running after the rain has stopped“. He used this metaphorically, of course, to highlight a scenario where a process continues long after the result has been accomplished.

And that was the main point I was making in yesterday’s post: that when we keep reporting on things long after the report or measure has become unnecessary, we produce waste! And waste impacts everything from productivity to profitability. It destroys organizations and their cultures from within, and can propagate like a cancer if left unresolved.

I think that’s why analogies like this are so useful. They are so simple to understand, yet so powerful in getting leaders and managers to “look in the mirror”, do some deep reflection, and begin to see how their own processes may be driving waste within their companies.

As an aside …

If you’ve ever worked around a real Lean practitioner, you’ll quickly realize that there is no shortage of these metaphors. So much so, that I’m sure there exists a reference book somewhere that has consolidated every lean metaphor associated with waste into one single volume. In addition to all of the great value that the Lean discipline has delivered to industry in recent years through its tools, methods, analytic frameworks and facilitative culture of problem solving, the thing I’ve learned the most from its practitioners is their ability to simplify complex problems and increase the likelihood of a good solution.

In my view, visualizing the problem is critical in identifying, understanding, and ultimately solving it. Most often, we use visualization in a positive way (helping us see a bolder aspiration, clearer pathways, and bigger success). Golfers, for example, always try to visualize their shot (shape, trajectory, landing spot, etc.) during their pre-shot routine. And I can honestly say that it does help, mostly because it clears negative thoughts and fear from the mind right before you have to execute. Using positive visual cues does work.

In much the same way as a positive metaphor helps us identify with success, “problem oriented” ones can work just as effectively, helping us see problems more clearly and begin to understand their drivers. We’ve all seen or heard these types of analogies from time to time:

  • The driver who keeps getting a flat in his front left tire, and over time masters his “productivity” in changing it (faster and smarter at changing the tire)…all the while failing to ask why the tire keeps blowing out in the same spot every time.
  • The man who keeps falling in a hole on his way to work, and focuses his energy on how to climb out faster…rather than simply changing his route!
  • Why car washes have people towel drying your car long after the mechanical dryer has been installed.

All of these analogies paint a clear picture of the problem, while also making the problem appear less daunting to solve. They “clear the fog” (so to speak…:) ) and help get us more quickly to designing and deploying a better solution.

So what are some other good visual cues that can help identify more sources of waste within our companies and our lives?

-b


Too Many KPI’s?- Tips for Metrics Hoarders…

One of the questions I get asked often by my clients is “just how many KPI’s and metrics are enough for effective Performance Management to occur?”

I think most of them realize that there is no hard and fast rule, and probably no “right answer”. Some may just be trying to find out if they are “in the right ballpark”. But I think the majority of those asking this question are asking it rhetorically. That is, they believe “in their gut” that their measurement system has gotten a bit unwieldy, and is starting to create breakdowns, confusion and loss of whatever momentum they once had. And based on my experience, they’re usually right.

Are we talking KPI’s or Metrics?

Now some of the Performance Management “purists” out there might say that we first must know what we’re talking about when we say the words ‘too many measures’. Are we talking about KPI’s or performance metrics? Are we talking “business metrics” or “operating metrics“? Are we talking about “data elements” (inputs), or are we talking about the calculated metrics that show up on our dashboards and scorecards and utilize these data in their algorithms? Or are we simply talking about the operating data and variances that we routinely report and track in our budgeting and forecasting environments?

For today’s purpose, I’m going to stay clear of those distinctions. Mainly because having too many measures can cause problems regardless of the type of measure, but also because we’ve had more than enough posts this week (including mine) that have discussed and debated what I would call “the semantics” or “the lexicon” of EPM (Dashboards versus Scorecards, Enterprise vs. Portfolio Performance Management, Business Performance Management vs. the historical HR view of Performance Management (which interestingly suggests that HR is trying to manage something OTHER than “business results”- huh?)). I even saw a post yesterday on “Application” Performance Management, which suggests to me that we are dangerously close to every business function or process needing its own definition of Performance Management (along with the associated buzzwords and community of followers) for it to be effective. I’m sure there will be plenty of columns and blogs that address these, and the myriad of new disciplines that emerge from continued innovation and new technology in the PM arena. No need to waste any more time (other than what I just did!) on that here.

Instead, I’ll try to answer the question more directly based on my own experiences. I’ll also use those experiences to reveal the implications of going too far beyond what I believe is the optimal number of KPI’s that should be used to manage the enterprise.

Some “rules of thumb” from my experience…

If I were pressed to answer the question “how many KPI’s are enough?” directly, I would of course “hedge” a little by opening with the “well, it depends” caveat. But I would still be comfortable laying out some broad rules of thumb that I think reflect the needs of an “average company”.

For example, let’s assume the business has a clear and compelling vision and mission, a handful of established business goals, and subscribes to the ‘balanced scorecard’ notion of 4 or 5 perspectives (categories or stakeholder groupings, if you will) within which goals, objectives, strategies and KPI’s are ultimately managed (i.e. Customer, Financial, Employee, Operating, etc.). See levels 1-3 of the below graphic.

Within that scenario, I would say that the business should have a few key objectives within each of those groupings (left side of the Level 4 boxes below). For example, in the Employee area, one objective might be to “Maintain a Safe Workplace”, while another may be to have “Motivated and Engaged Employees”. My experience is that there are usually 2-5 (max) objectives for each major perspective or grouping. For each objective, you would then have anywhere from 1-3 KPI’s (right side of the Level 4 boxes below) to measure its success.

In my experience, that’s as far as I go with what I call Key Performance Indicators at the corporate level. Do your own math, but my experience is a total of 20 to 30 KPI’s max. Larger companies with multiple business units (especially if highly diversified) may have significantly more than that, while smaller organizations with more limited focus may have fewer. But again, 20-30 is a good rule of thumb.
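
To make the “do your own math” part concrete, here’s a minimal back-of-the-envelope sketch in Python. The specific counts are illustrative only, drawn from the ranges above rather than from any particular company:

```python
# Back-of-the-envelope KPI count using the rule-of-thumb ranges above.
# The counts are illustrative, not a prescription.

perspectives = 4                         # e.g. Customer, Financial, Employee, Operating
objectives_per_perspective = (2, 3, 5)   # low / typical / max per perspective
kpis_per_objective = (1, 2, 3)           # low / typical / max per objective

for label, obj, kpi in zip(("low", "typical", "max"),
                           objectives_per_perspective,
                           kpis_per_objective):
    print(f"{label:>7}: {perspectives * obj * kpi} corporate-level KPI's")

# low: 8, typical: 24, max: 60 -- the "typical" case lands squarely
# in the 20-30 range described above.
```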

A Sample EPM Architecture

That notwithstanding, this is always “context dependent”. For example, if a company decides to replicate this infrastructure in each business unit, the numbers would increase proportionally. But the number of “corporate KPI’s“, those managed at the Enterprise level, would still lie in the 20-30 range.

If you’re jumping out of your seat right now, you may be one of those who believes, at your core, that KPI’s should be a very small set of things that are “supercritical” to business success (as in 5 or 10, not 20 or 30). I’m not going to take issue there, because I happen to buy into that principle. In fact, I often will extract 5-10 indicators that may be truly “key” or essential to the business’s success. But I tend to view these as strategic KPI’s or goals, which distinguishes them from the full suite I referred to earlier. The full suite (universe) of KPI’s for the business still falls in the 20-30 range.

Now, assuming many of the KPI’s are “calculated” off of other data or metrics, and assuming that the company would want to view each by a number of different dimensions (time period, segment, geography, etc.), it’s easy to see that the number of data elements (whether they are viewed as metrics, indicators, or source data) can jump well into the thousands or even tens of thousands.
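
A quick illustration of that explosion, using purely hypothetical dimension counts:

```python
# How a modest KPI set balloons into data points once it's sliced by dimensions.
# All counts are hypothetical, for illustration only.

kpis = 25            # a corporate KPI set in the 20-30 range
segments = 6         # e.g. customer segments
geographies = 10     # e.g. regions or districts
months = 24          # two years of monthly history

print(f"{kpis * segments * geographies * months:,} underlying data points")  # 36,000
```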

It’s all part of the “Roadmap”

Ok- maybe I went into the jargon and distinctions more than I wanted to, but I wanted to give you a sense of why I believe the magic number of KPI’s (KEY performance indicators) is where I placed it…in the 20-30 range. And to do that, I guess I did need to define and explain the framework a little. Hopefully, the chart above gave you a sense of how that architecture fits together.

But even if you buy into the structure I laid out above, there will always be variants. So don’t feel completely bound by the structure or words that I am using here, or the “rule of thumb” benchmark I’ve laid out. The exact number will always be unique to a business environment, but that number should be part of an overall roadmap of how you want to manage your EPM program.

The reason for the type and number of KPI’s you select should be to create balanced focus and direction, while aligning to your desired end point. Every business needs a small but balanced set of management perspectives (i.e. more than one) to manage within, but not so many that focus gets diluted and distracted. The same goes for the objectives you set, and the KPI’s that support them. There need to be enough to reflect business focus and priority, but not so many that they create clutter and confusion. For me, that equates to a KPI count in the 20-30 range.

And while drill downs and analytics will add exponentially to the number of metrics and data points accessed, they still reside in a structure that revolves around those 20-30 KPI’s. There are also some psychological reasons why we break our objectives and measures into convenient little chunks of 3-5 within each perspective, and sometimes smaller chunks within those, but that is a secondary factor that only reinforces the rule of thumb I laid out.

Consequences of “metrics overload”…

As I said earlier, my experience is that when someone asks me a question about the number of KPI’s they should have, it’s usually because there are in fact too many, and that quantity itself has created a dilution of focus. Or maybe it’s because the structure within which they reside is losing clarity and alignment. Usually it’s a combination of the two, and one drives the other.

But the underlying cause of all of this is usually lack of preparation on the front end. Perhaps the organization jumped straight into a technology fix for what really is a process and cultural challenge. In these cases, the organization may have procured a tool that only works if you populate it with data. So they rapidly populate their dashboards and scorecards with as many metrics as they can think of, not considering the critical connections, relationships and architectural dynamics at play.

Another reason is lack of ownership of the process. Sometimes it starts well, but without continued governance, each business unit runs with its own vision of what “their” scorecard should look like. Sometimes a clear and cogent integrated architecture miraculously appears out of the ashes, but more often than not, you end up with several different views of what success looks like. Metrics end up being misaligned, or worse, conflicting across multiple business units in the same organization.

Whatever the reason, the problems of “runaway metrics” often manifest because there really was no EPM “Program” to begin with (no unifying framework to build upon), just a set of tools to report and analyze data, and, if you’re lucky, a few guidelines for how to use “the tool”. In those environments, it’s not uncommon for us to find medium to large companies tracking hundreds or even thousands of things that middle and upper management refer to as KPI’s, without any real semblance of an EPM program or platform in place. And no matter what definition or distinction you want to use, that’s way too many.

And it only gets worse…

Complicating all of this are the day-to-day realities of management reassignments, the natural comings and goings associated with staff turnover, and sometimes major changes in leadership that can (and often should) initiate a change in course, along with the emergence of new navigational beacons and waypoints (i.e. new KPI’s). But rather than changing the structure of their EPM platform and replacing one set of waypoints with the new ones, companies simply layer the new set on top of the old.

At some point, these companies become what I call “hoarders of metrics”, and before long, an otherwise harmless but impotent process begins to look like utter chaos.

While the word “layers” often has a negative connotation, layers are often useful in establishing an architecture for a solution, and by design can actually create strength within the system. The architecture I describe above, and the type of “line of sight” thinking I described in yesterday’s post, are examples of how layering can be used to strengthen your EPM program.

But when the layering is done without a deliberate structure and blueprint, those layers (new metrics on top of old, bad metrics on top of good) can, and often do, cause the system to collapse under its own weight.

Getting it back under control

Here are a few guidelines and “healthy” practices for getting your measurement framework leaner, better aligned, and back into focus:

  • Know where you are today – Without getting caught up in all the lingo, do a simple inventory of your process vis-a-vis the framework I laid out above. Ask yourself how many objectives you have within each key domain. How many top level metrics support these objectives (things that senior management routinely uses to monitor and talk about)? Are there metrics we track that are redundant, or that don’t support any of our objectives? Are some objectives missing the metrics needed to track their achievement? What does that picture look like for you in terms of structure and number of KPI’s, and is some “pruning” warranted?
  • Commit to an Architecture – We all acknowledge the advice “measure twice, cut once”. That’s even more important here, as the organization can only withstand a failure or two in trying to get a performance management system in place without experiencing major “cultural fallout”. Continuing to “experiment” with measurement and KPI’s without an architecture and blueprint to guide that process (even if it’s a crude one) is setting the stage for many of the above challenges and breakdowns.
  • Understand the role of the KPI (versus other parts of the structure) – I’m not talking about getting hell-bent on semantics, but I do think there is some value in teaching the organization the difference between a KEY performance indicator and the myriad of other data and statistical fodder that may be used within the overall system. Use the rule of thumb I laid out earlier to guide and test whether this distinction is sinking in.
  • Build down, not around – If I had to pick a direction to build your EPM architecture, I would say start at the top and work down. Once you are at the KPI level, you should be able to start allocating accountabilities for their achievement, and then, if your culture supports it, those accountabilities can be dispersed in a measurable way to your staff via individual and work team metrics, to which appraisals and reward structures can then be linked. I differentiate this from the notion of building “around”, by which I mean taking the concept of EPM (a prototype) and repeatedly testing it out in new areas and business functions, without any clear roadmap or enterprise structure in place. While that can certainly kick start measurement activity and get things moving, it can also propagate some bad thinking if the underlying architecture and core practices at the enterprise level are not in place.
  • Establish a vetting process for new metrics – It’s important to recognize why people develop metrics in the first place. We’d like to think that it’s all from a noble ambition to help the company improve, out of a pure hunger for data. But it goes way beyond that. People develop metrics for everything from defending their turf to getting their point heard. Data is now the currency through which corporate truth is established (which is a good thing), so don’t be surprised when the number of metrics begins expanding exponentially. At that point, though, you want to make sure your core system does not get infected with junk; to prevent that, make sure you have a process or checklist to vet any addition of new metrics into the corporate framework (a minimal sketch of what such a checklist might look like follows this list).
  • Set aside time for “pruning” – Every strategic planning process should have a step in which the KPI’s and underlying performance architecture for the business are reviewed. Measures that are no longer relevant, or no longer adding value, should be dropped. Unclear linkages upward and downward should be evaluated and strengthened. New business objectives should come into play along with their companion KPI’s, but more often than not, they should end up replacing a measure that has gone away or diminished in importance.
  • Don’t be afraid to “cycle down” a KPI – Sometimes, pruning won’t involve eliminating a measure entirely. For example, if your ambition was to improve or optimize a measure, and you’ve now achieved the optimal point, it may be time to simply go into maintenance mode and start reporting on a lower frequency (weekly to monthly, monthly to annually, and so forth). Think about how many of your measures don’t change one hill of beans throughout the year, yet continue to take up valuable “real estate” on your dashboards and the often scarce mind-space of your executive team.
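
To make the vetting idea above a bit more tangible, here is a minimal sketch of what such a checklist might look like in code. The questions and field names are hypothetical examples, not a standard; adapt them to whatever your own EPM framework requires:

```python
# A minimal, hypothetical vetting checklist for proposed new metrics.
# The questions are illustrative; adapt them to your own EPM framework.

from dataclasses import dataclass

@dataclass
class MetricProposal:
    name: str
    owner: str                   # who is accountable for the number
    supports_objective: str      # which corporate objective it ties to ("" if none)
    duplicates_existing: bool    # does an existing metric already cover this?
    data_source_defined: bool    # do we know where the data comes from?
    review_frequency: str        # e.g. "monthly", "quarterly"

def vet(p: MetricProposal) -> list[str]:
    """Return the reasons a proposal should be rejected (empty list = accept)."""
    issues = []
    if not p.supports_objective:
        issues.append("does not support any stated objective")
    if p.duplicates_existing:
        issues.append("duplicates an existing metric")
    if not p.data_source_defined:
        issues.append("data source is undefined")
    if not p.owner:
        issues.append("no accountable owner")
    return issues

# Example usage with a made-up proposal
proposal = MetricProposal("Avg. handle time", "CS Ops", "Motivated and Engaged Employees",
                          duplicates_existing=False, data_source_defined=True,
                          review_frequency="monthly")
print(vet(proposal) or "accepted into the corporate framework")
```

The point isn’t the tooling; even a shared spreadsheet with these same questions will do, as long as every proposed metric has to pass through it before landing on a corporate dashboard.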

Anyone who does yard work or gardening knows that “pruning”, while it involves cutting back, is really designed to produce a healthier and more vibrant plant, shrub or tree. And rather than producing growth outward (taller and wider), it instead encourages the growth to be “fuller” and often “healthier” in future growing seasons. Next year’s growth fills in those “holes” that may have been unsightly, and encourages a more deliberate and robust growing pattern. Hence, your long term plans for the garden and landscape start manifesting and coming to life the way they were initially envisioned.

That same kind of annual pruning and renewal process can be just as effective in establishing a healthy growing pattern for your EPM initiative, a pattern that can otherwise get interrupted by the confusion, distraction and conflict caused by an unwieldy and burdensome performance measurement process. And, as with everything from gardening to weight gain to maintenance of our autos, it’s always easier to manage it along the way rather than waiting until there is a problem.

And with that, I’ll wish you all happy “KPI pruning”!!

-b


Line of Sight: The essential ingredient in “world class” Performance Management…

What is “Line of Sight”, and why does it matter?

For me, the words “line of sight” conjure up a lot of mental images, ranging from a fighter jet “locking on” to a hostile enemy aircraft, to a rifle shot zeroing in on a desired target, to a satellite vector controlling the GPS receiver in your vehicle.

Whatever image is produced for you when you hear these words, it is likely to be a good metaphorical reference that will be helpful in designing or refining your Performance Management system.

For me, “line of sight” thinking is one of the most important principles in the whole Performance Management discipline. And it is the absence of that thinking that is creating many of the challenges and failures within our organizations.

When I describe this concept to my clients who are responsible for EPM within their organizations, most will admit that it is the very thing that is lacking for them. They may have the best metrics, systems, and reporting structures known to man, but without that “line of sight” connectedness, it may all be for naught.

For example, how many times do we hear employees and managers resist change because they don’t think management even has a defined strategy? Or that the things they are measured on really produce value to the bottom line? Or worse yet, that accountabilities (where they exist) are simply ideas dreamed up by middle management with no connection to what the executives really want, or what is needed by the business? Broken linkages can occur at any of these levels, and most anywhere in between.

We spend millions on key elements of our Performance Management programs without ever tying those parts together. Any EPM program consists of lots of components and moving parts, all of which cost money and time to build (IT tools, HR tools, Dashboards, etc.). But think of spending all of that money and time, yet failing to establish the critical nodes or tie-ins that make the system work cohesively. Or having barriers to those systems that prevent them from functioning effectively.

Think of those infamous GPS signals that drop out or “recalculate” at the very moment you are at a critical juncture in your journey. Or, as I read in a good post a few days ago, the frustration you feel when you get into your rental car and can’t get a GPS connection until you’re already on the highway going the wrong way (simply because the unit doesn’t work in the rental car garage because of the concrete structure). When that GPS linkage breaks, it is of no value to anyone. For me, perhaps the most frustrating thing is when I am on the golf course, and my “personal (GPS) caddy” suddenly has a “senior moment” (failed connection) right before a critical shot! Talk about slowing the “pace of play”!!!

So where are these missing linkages?

So what are those “missing links” in our Performance Management programs that can destroy these critical linkages? Here are a few that come to mind for me:

  • Absence of a clear compelling vision for the business, and/or failing to communicate it effectively
  • Failing to tie your mission and objectives with your vision in a clear and cohesive way
  • Having a laundry list of KPI’s that are seemingly random, inconsistent, or otherwise “detached” from the objectives they support
  • The presence of KPI’s that lack clarity as to what they are, or what comprises them (I’m thinking of those convenient “indexes” that roll up several measures via an algorithm, and ultimately get translated into an acronym that only a select few managers can even pronounce, not to mention understand!!)
  • Failure to understand or communicate where the underlying data even comes from (produces doubt and undermines the “perception” of data confidence even if the data are valid and reliable!)
  • Lack of connection between each KPI, and the initiatives that support their improvement (targeted improvement projects, new systems or technology deployments, large CapEx programs or projects, etc.)
  • Failure to link individual performance metrics, appraisals, and employee development efforts to your underlying KPI’s
  • Absence of a back-end “value capture” process that ensures completed initiatives produce their expected ROI (i.e. a real, measurable and visible change in a KPI, AND the associated impact on the bottom line (e.g. EBITDA, Market Share, Revenue Growth, etc.))
  • Inability to effectively link reward structures to all of the above.

These are only a few that are “top of mind” for me at the moment. But the list goes on and on. I encourage you to reflect on where these breakdowns occur in YOUR organization. Only then can you deploy some critical fixes, and apply some of the essential glue that is needed between the fractured linkages.

The missing piece of the puzzle?

What causes “line of sight” breakdowns ?

Of course, the failure to establish these linkages can occur for several reasons, some of which are not apparent on the surface.

  • First, certain parts of the process may in fact be missing altogether. For example, while we all think we may have a “clear vision” for the business, most of us do not. Clarity is one thing. But making the vision an aspiration that is both compelling and engaging is much more essential to your downstream success. Without it, those follow-on connections become far more difficult to establish, and can be like trying to bind wood to air.
  • Another reason is that many different organizations and processes often have responsibility for different pieces of the EPM puzzle. You can read about the impact of this, and how to begin linking these processes together, in another recent post. But suffice it to say, we’re talking about everything from IT to HR, and Corporate Strategy to Finance, and many processes in between. Establishing accountability for the entire EPM process is the critical first step in repairing this type of integration breakdown.
  • Finally, and on a related note, a very significant problem resides in the way the organization treats the management and execution of projects (everything from small improvement projects to the largest capital projects that exist within the enterprise). This is part of a much bigger topic which I call “commitment management”, which I have written about previously, and which at its core is really all about the organization’s culture and how it manages commitments and “promises to deliver” (and what happens when it does and doesn’t deliver). For many organizations, the linkage between “project management” and “performance management” is one that is rarely even thought about in this context. And as unfortunate as it may be (albeit unintentional, I’m sure), consultants and IT vendors are in large measure responsible for this lack of clarity through the myriad of solutions (and their variants) that they continue to flood the market with (EPM, BPM, PPM, BI, etc.). I mean, come on…really?

What can I do about it?

Just as I indicated when discussing where the breakdowns reside, the list of causes can be much broader and more complex than simply those referenced above. And many of them can be just as debilitating. But starting this dialogue, and initiating the thinking around these breakdowns and causes is a clear first step.

So I encourage you and your team to do a little bit of serious thinking on this topic and create your own inventory of where and why these disconnects occur in your business. We at onVector have done a lot of this work and developed what we call “Performance Integration Maps” that help our clients identify, visualize and address these types of gaps in their Performance Management programs. But whether you use a consultant to assist in this process, or do it yourself, the important thing is to make it a priority and take the first step.

Resolving the issues can also be a challenging endeavor, and it can sometimes take months or even years to get the culture aligned in a way that supports it. But the value can be enormous to the achievement of your organizational objectives, KPI’s and the associated bottom line impact. And when that type of culture and thinking is in place, it becomes an institutionalized “way of life” for the business.

Imagine for a moment the magnitude of investment, and the time, it took for you to get the various component parts of your performance management process in place. For many of us, “line of sight” can be the key ingredient in allowing you to harvest the impact and ROI that you initially desired. And to move forward without it is to allow your team to continue flying blind.

-b
