Garbage In-Garbage Out…

One of the age-old problems we encounter as performance managers is data reliability. While it should, intuitively, be the most important aspect of performance management, it is given far lower priority than its more "sexy" relatives.

ERPs, data warehouses, analysis engines, web reports…the list goes on. Each and every one of these important PM dimensions gets its fair share of mind space and investment capital. But as the old adage goes, "garbage in, garbage out" (GIGO). We all know that data quality is a necessary prerequisite for any of these tools to work as designed. So why does so little time and attention go into cleaning up this side of the street?

Tell me you can't identify with this picture. You're sitting in a senior management presentation of last quarter's sales results. Perhaps you're even the presenter. You get to a critical part of the presentation, which shows a glaring break in a trend that has been steadily improving for months. It signals the obvious: something bad has happened and we need to address it now! Conversation turns to the sales force, the lead qualification process, the marketing department, the competition… 45 minutes later, there is no real clarity, except for lots of "to do's" and follow-up commitments.

Fast-forward two weeks (and several man-hours of investigation) later. The Sales VP is pummeling one of his sales managers to "step up" performance, and wants new strategies. A new commission structure is discussed, which brings in the need to get HR and IT involved. A few days later, while working on implementing some of the new strategies, a new story begins to unfold. An IT analyst, deep in the bowels of the organization, astutely recognizes THE big missing piece of the puzzle. You see, last month the manager of the Eastern Region changed the way he wants "sales closes" reported (the way deals are essentially recorded), from a definition based on "client authorizations" to one based on "having the contract in hand", a very useful distinction, particularly from a cash flow and accounting perspective. The only problem is that the change was applied locally, not corporate-wide, resulting in the apparent data anomaly.

It sounds a bit too simple for a modern corporation well into the technology age. But unfortunately, this kind of story is all too common. We all understand the principle of GIGO, yet it continues to chew up corporate resources unnecessarily.

Overcoming the GIGO problem should be our number one priority: before systems, before reports, before analysis, before debate, and before conclusions are drawn. Before anything else, data quality is THE #1 priority.

Here are a few tactics for getting a solid “data quality” foundation in place:

1. Understand the “cost of waste”-

We measure everything else, so why not measure the cost of poor data quality? Take a few of your most recent GIGO experiences and quantify what the organization wasted on unnecessary analysis, debate, and dialog around seemingly valid conclusions gone awry. This doesn't have to be complex. Do it on the back of an envelope if you have to, but include everything that goes into it, including all the levels of management and staff that get involved. Then communicate it to your entire PM team and make it part of your team's mantra: data quality matters!
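
As a back-of-envelope illustration, here is a minimal sketch of that arithmetic in Python (every figure below is a hypothetical placeholder, not data from any real incident):

    # Back-of-envelope "cost of waste" for one GIGO incident.
    # All hours and rates are hypothetical placeholders.
    incident = {
        "senior management": {"hours": 6,  "rate": 250},  # review meeting and follow-ups
        "middle management": {"hours": 20, "rate": 150},  # investigation and re-planning
        "analysts":          {"hours": 40, "rate": 90},   # re-pulling and reconciling data
    }
    total = sum(v["hours"] * v["rate"] for v in incident.values())
    print(f"Estimated cost of this one incident: ${total:,.0f}")  # -> $8,100

Multiply that by the number of incidents per year, and the number starts getting executive attention.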

2. Become the DQ (Data Quality) CZAR in your company-

Most performance managers got where they are by exposing that "diamond in the rough". We got where we are by using data to be an advocate for change. It's hard to imagine getting executive attention and recognition for something as "boring" as getting the data "right". But that is what needs to happen. The increased visibility of post-Enron audit departments, SOX initiatives, and other risk management strategies has already started this trend. Performance managers must follow. You need to embrace DQ as something you and your department "stand for".

3. Create Data Visibility-

In some respects, this has already begun, but we have to do more. Our IT environments can disseminate information to every management level and location within minutes of publishing. But let's go one step further. Let's "open the book" earlier in the process, so that those who can spot data issues get to participate in the game sooner. People have different roles when it comes to performance management: some are consumers, and some are providers. It's just as important to create visibility for the input factors as it is to publish those sexy performance charts. You'll get the input of that fourth-level IT analyst I discussed above much earlier in the process.

4. Utilize External Benchmarks Where Possible-

Benchmarks are often used within organizations to set targets, justify new projects, defend management actions, and discover new best practices. These are all good and noble reasons to benchmark. One of the most overlooked benefits of benchmarking, however, is the role it plays (or should play) in your DQ process. I can't tell you how many meetings I've been in where the presence of an external benchmark highlighted a key problem in data collection. Seeing your data compared against a seemingly erroneous metric can expose major breakdowns in the data that would otherwise have gone undetected. Using comparisons to highlight reporting anomalies is a very valuable use of external benchmarks.
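
A crude screen is often enough to surface a collection problem. Here is a minimal Python sketch (the metric names, values, and the 30% tolerance are all invented for illustration):

    # Flag internal metrics that deviate sharply from an external benchmark.
    # Metric names, values, and the tolerance are illustrative assumptions.
    internal  = {"cost_per_invoice": 4.10, "calls_per_rep_day": 38, "close_rate": 0.31}
    benchmark = {"cost_per_invoice": 3.90, "calls_per_rep_day": 55, "close_rate": 0.29}
    TOLERANCE = 0.30  # anything more than 30% off the benchmark gets a second look

    for name, ours in internal.items():
        ref = benchmark[name]
        deviation = abs(ours - ref) / ref
        if deviation > TOLERANCE:
            print(f"CHECK THE DATA: {name} is {deviation:.0%} off benchmark "
                  f"(ours={ours}, benchmark={ref})")

The point is not that the benchmark is "right"; a large gap is simply a prompt to check definitions and collection methods before debating performance.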

5. Establish a DQ process-

It would be nice if all data were collected in an automated manner, where definitions could be hard-coded and "what to include" would never be in question. But in most companies, that is simply not the case. Our research has shown that over 50% of the data used in the performance management process is still collected manually, yet very few companies have a defined and auditable process for doing so. This does not have to be complicated; some very useful tools are emerging that help collect, validate, approve, and publish required data, just as there are for data reporting and scorecarding. Having a process, and a system to ensure that process is followed, are both critical elements of data collection, and hence make for very good investments.
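
The skeleton of such a process can be very lightweight. A minimal Python sketch follows (the field names and validation rules are assumptions, chosen to echo the Eastern Region story, not a prescription):

    # Minimal collect -> validate -> approve -> publish flow for manually collected data.
    # Field names and rules are illustrative assumptions.
    def validate(record):
        errors = []
        if record.get("region") not in {"East", "West", "Central"}:
            errors.append("unknown region")
        if not isinstance(record.get("sales_closes"), int) or record["sales_closes"] < 0:
            errors.append("sales_closes must be a non-negative integer")
        if record.get("definition") != "contract_in_hand":
            errors.append("non-standard close definition")  # the Eastern Region problem
        return errors

    for record in [
        {"region": "East", "sales_closes": 120, "definition": "client_authorization"},
        {"region": "West", "sales_closes": 135, "definition": "contract_in_hand"},
    ]:
        errors = validate(record)
        if errors:
            print(f"REJECTED {record['region']}: {', '.join(errors)}")
        else:
            print(f"APPROVED {record['region']}: {record['sales_closes']} closes")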

6. Don’t forget the Culture –

As I said above, most data will, for the time being, be collected manually, without fancy IT infrastructure. People will still be at the heart of that process. Invest time in helping them see the importance of the information they are collecting, how that information will be used, and what process will be followed to do so. Many organizations spend tens of millions on a systems solution to what is largely a people and culture problem. Investing in training and coaching can deliver as high a payback as those mega systems investments.

* * * * * * * * * * * * * * * * * * * * * * * *

So as you navigate your internal data collection efforts, try to keep these tips in mind. Sometimes it's the simple "blocking and tackling" that makes the difference between the winners and those in second place.

 

-b

Author: Bob Champagne is Managing Partner of onVector Consulting Group, a privately held international management consulting organization specializing in the design and deployment of Performance Management tools, systems, and solutions. Bob has over 25 years of Performance Management experience and has consulted with hundreds of companies across numerous industries and geographies. Bob can be contacted at bob.champagne@onvectorconsulting.com


Managing Those Elusive Overheads

One of the biggest challenges faced by operations management is how to reduce costs and improve service levels, especially when such a large portion of those costs is perceived to be "outside" of their control.

Despite recent attempts to control corporate overheads, it's still very common for corporations to load operating management with an "automatic" allocation for overhead costs such as employee benefits, IT, legal, facilities management, accounting…the list goes on. Our studies show that most of these costs are still allocated back to management as a direct "loader", or percentage markup, on staff employed in the operating business units. Not only does this unfairly disadvantage operating managers, who have little perceived influence over these costs, but it also has a "masking" effect, as the costs mysteriously get buried in the loading factor itself. Operating units struggle from year to year trying to capture the next 1, 2, or 5% of efficiency gains, while over 50% of their costs are, in effect, off limits.

But some organizations clearly understand the challenges and have begun to make real strides in this area of corporate overheads. For some, it has involved ugly corporate battles, political in-fighting, and the "muscling in" of allocation changes. For others, the challenge has been a bit easier, thanks to a focus on what really matters: visibility of overheads, and a direct path toward managing them.

Here’s a quick list of areas you can focus on to improve the way overheads are managed:

Transparency– The first and most important driver for successfully managing overheads is making them visible to the enterprise. All too often, overheads from shared services functions are not visible to anyone outside the shared services organizations themselves. In fact, the word "overhead" has an almost mystical connotation: something that just shows up like a cloud over your head.

One of my clients once said, "The most important thing leadership can do is to expose the 'glass house'." Overheads need to be taken out of the "black box" and put into the "fish bowl." Once the costs are clearly visible, both operating and corporate management can begin making rational assessments about how best to control them.

Accountability– This is arguably one of the trickier overhead challenges, since managing overheads involves accountability at multiple levels. To simplify the challenge, most companies define accountability at the shared service level (the VP of IT, or the VP of Legal, for example) and leave it at that.

More successful organizations, on the other hand, split this accountability into its manageable components. For example, management of shared services functions can be accountable for policy, process, and the manner in which work gets performed. But there is a second layer that deals with “how much of a particular service” gets provided- and it’s that component that must be managed by operations, if we are to hold them accountable for real profit and loss (discussed below).

Doing this right requires some hard work on the front end to appropriately define the "drivers" of overhead costs that are truly within line management's control. A simple example is corporate IT, in which the IT department defines overall hardware standards and security protocols, while the variable costs associated with local support are based on actual usage and consumption of IT resources. That's an overly simplified example, but it still illustrates how the process can work. Most overhead costs have a controllable driver. Defining those drivers, and distributing accountability for each, will go a long way in showing how and where these costs can be managed.
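
To make the contrast concrete, here is a minimal Python sketch (all figures hypothetical) of the same IT cost pool allocated first as a headcount loader and then by a usage driver, such as supported devices:

    # One $2.0M IT cost pool, two allocation methods. Figures are hypothetical.
    it_pool = 2_000_000
    units = {
        "Field Ops":   {"headcount": 400, "devices": 250},  # few devices per head
        "Engineering": {"headcount": 100, "devices": 300},  # device-heavy
    }
    total_heads   = sum(u["headcount"] for u in units.values())
    total_devices = sum(u["devices"] for u in units.values())

    for name, u in units.items():
        loader = it_pool * u["headcount"] / total_heads   # percentage markup on staff
        driver = it_pool * u["devices"] / total_devices   # based on actual consumption
        print(f"{name}: loader=${loader:,.0f}  driver-based=${driver:,.0f}")

Under the loader, Field Ops quietly subsidizes Engineering's consumption; under the driver, each unit sees a number it can actually move.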

"P&L" Mindset– There's been a lot of debate about whether shared services functions can truly operate like real profit centers. The profit center "purists" will argue that internal services should behave just like "best in class" outsourcers, and if they can't compete, they should get out of the way. The more traditional view is that once a service is inside the corporate wall, it becomes somewhat insulated from everyday price and service level competition, the reasoning being that "opening these services up to competition" would be too chaotic and would ignore the sunk costs associated with starting up or winding down one of these functions.

A more hybrid solution that I like is to treat the first few years of a shared service function like a “business partnership” with defined parameters and conditions that must be met for the contract to continue. It takes a little bit of the edge, or outsourcing “threat”, off the table, and allows the operating unit and shared service function to collectively work on solving the problems at hand.

Still, shared services functions must look toward an “end state” where they begin to appear more and more like their competitors in the external marketplace and less like corporate entitlements. In the end, they must view their services as “universally contestable” with operating management as their #1 customer. For many organizations, particularly the larger ones, that’s a big change in culture.

Pricing– Save for the conservationists and "demand-siders", most modern-day economists will tell you that the price tag is the way to control consumption of almost anything, from drugs to air travel. And it's no different in the game of managing corporate overheads.

Once you've got the accountabilities squared away and you've determined the "cost drivers" that operating management can control, the price tag is the next big factor to focus on. One of the most important pieces of the service contract you have with operations management is the monthly invoice, assuming it's real and complete. It needs to reflect the service provider's true cost, not just the direct or variable costs of serving operations; otherwise, it's a meaningless number. In the end, the pricing mechanism needs to be something that can be compared and benchmarked against leading suppliers of a particular service. For that to be possible, the price needs to reflect the true cost of doing business.
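
In code terms, the difference between a benchmarkable price and a meaningless one is a couple of lines. A sketch, with hypothetical cost categories and figures:

    # A unit price is only benchmarkable if it carries the provider's full cost.
    # All figures are hypothetical.
    direct_labor     = 1_200_000  # staff directly serving operations
    variable_costs   =   300_000  # materials and licenses consumed
    overheads        =   500_000  # management, facilities, depreciation
    units_of_service =    40_000  # e.g., help-desk tickets resolved per year

    full_cost_price = (direct_labor + variable_costs + overheads) / units_of_service
    direct_only     = (direct_labor + variable_costs) / units_of_service
    print(f"Full-cost price per unit:   ${full_cost_price:.2f}")  # comparable to outside suppliers
    print(f"Direct-only price per unit: ${direct_only:.2f}")      # understates the true cost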

Value Contribution– So far, we’ve only focused on the cost side of the equation. Now, let’s look at service levels.

For the more arcane areas of corporate overheads, where a pricing-for-service approach is more difficult, it is usually worth the time to understand the area's value contribution to your business unit. Finding the one or two key value contributors becomes the task at hand. For example, in US-based companies, the Tax Department is generally staffed with high-end professionals and is often the keeper of a substantial tax attorney budget. When treated from a pure cost perspective, a common rumbling among operating management becomes: why am I paying so much for my tax return?

A better question would be: what value am I getting for my money? In this case, taking advantage of key US tax code provisions can be expensive, but the cash flow impact (in terms of lower effective tax rates) can be a significant benefit to the operating unit. Clearly delineating and quantifying the value, combined with presenting an accurate picture of the cost to achieve that value (the overhead charges from the Tax Department), can bring a whole new level of awareness to these types of overheads.

Of course, for this to work, you need to ensure that parity exists between the function benefiting from the value generated and the function bearing the costs. So before you allocate costs, make sure you match the budget responsibility with the function that ultimately reaps the benefits you define.

Service level agreements– This is the contract that manages the relationship between you and your internal service provider. It contains everything from pricing, to service level standards, to when and how outsourcing solutions can and would be employed. There must be a process in place to negotiate the standards, bind the parties, and review progress at regular intervals. While this can be a rather time-consuming process (especially the first time out of the gate), it is essential in setting the stage for a more commercial relationship between the parties.
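
Even a simple structured record forces the parties to negotiate the right items. Here is a sketch of the shape such an agreement might take in Python (the fields are illustrative assumptions, not a template from any real SLA):

    # Illustrative shape of an internal SLA; fields are assumptions, not a standard.
    from dataclasses import dataclass

    @dataclass
    class ServiceLevelAgreement:
        provider: str
        customer: str
        unit_price: float            # full-cost price per unit of service
        service_standards: dict      # metric -> committed level
        review_interval_months: int  # progress reviewed at regular intervals
        outsourcing_trigger: str     # when external sourcing may be invoked

    sla = ServiceLevelAgreement(
        provider="Corporate IT",
        customer="Field Operations",
        unit_price=50.00,
        service_standards={"ticket_resolution_hours": 8, "uptime_pct": 99.5},
        review_interval_months=6,
        outsourcing_trigger="two consecutive reviews below standard",
    )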

Leadership– As with any significant initiative, competent and visible leadership is key. A good executive sponsor is essential in getting through the inter-functional friction and the natural cultural challenges that will likely emerge during the process. Leadership must treat controlling overheads as a significant priority, make the enormity of the problem visible to both sides, and effectively set the "rules of engagement" for addressing the challenges at hand. Without good leadership, the road toward efficient, value-adding overheads becomes much more difficult to navigate.

——————–
So there you have it…my cut at the top ingredients in managing corporate overheads and shared service functions. The road is not an easy one, but if you build in the right mechanisms from the start, you will avoid some of the common pitfalls that your organization is bound to face in its pursuit of a more efficient overhead structure.

-b

Author: Bob Champagne is Managing Partner of onVector Consulting Group, a privately held international management consulting organization specializing in the design and deployment of Performance Management tools, systems, and solutions. Bob has over 25 years of Performance Management experience and has consulted with hundreds of companies across numerous industries and geographies. Bob can be contacted at bob.champagne@onvectorconsulting.com


2005 PM Weekly Archives

Over the past few weeks, some of you have inquired about whether we can publish an index of past articles in one consolidated document or file.

That's something we've been considering for some time. However, in case you are not aware of it, we do currently have a site serving as a consolidated index of all past articles. The index can be accessed at:

http://www.totalperformancemanagement.com/pmweekly-index.htm

From there, you can access any article simply by scrolling through the titles.

During 2006, our plan is to enable a variety of search features and topic links, as well as an annual hardcopy version. We hope you continue to enjoy the column, and as always welcome any feedback you may have throughout the year.

Happy New Year, and best wishes for ’06!!!

-b

Author: Bob Champagne is Managing Partner of onVector Consulting Group, a privately held international management consulting organization specializing in the design and deployment of Performance Management tools, systems, and solutions. Bob has over 25 years of Performance Management experience and has consulted with hundreds of companies across numerous industries and geographies. Bob can be contacted at bob.champagne@onvectorconsulting.com

 

“He’s Makin’ a List, and Checkin’ it Twice…”

Oh how many times have we heard that little jingle in the last few weeks?

I've got to tell you, that little ditty has come in soooo handy with my 8- and 10-year-old sons over the years, I don't know what I'd do without it. There's something about Christmastime, that chubby old bean counter, and his insistence on good behavior that tends to keep those little guys on the straight and narrow over the holidays. If only we could keep it going all year long.

Since we’re all in the holiday spirit, I thought I’d use this week’s column to reinforce Santa’s message of accountability, at least for us grown up performance managers out there. I should warn you, however, that I’ve kept this one a little “light” and “fun” since most of us tend to be distracted this time of year with more important things like family and loved ones. Nevertheless, it should drive home some key points we’ve been making all year.

For starters, it’s worth acknowledging that Santa has clearly mastered the art of generating good behavior. And as I’ve watched him year in and year out, he appears to only get better at it with time. I can only conclude that he is a great student of the PM discipline, constantly learning from others, and applying these best practices to his Northern Operation. Santa has clearly learned from the best, and so should we.

A few weeks ago, I referenced a speech by David Walker, the Comptroller General of the US and head of the General Accounting Office (the GAO, which incidentally is being renamed the Government Accountability Office, a much more appropriate name for this important government function). His overriding message was that performance is maximized when:

1) there are clear incentives for doing the right things,
2) there is transparency of information so that employees know when they are doing the right things, and
3) there are clear accountabilities and consequences when people do the wrong thing.

While Saint Nick only shows his face once a year, he exemplifies these three key principles quite well. At Christmas time, kids prepare their lists: the incentives, if you will, for what is likely to happen if Santa concludes they have done the right things most of the time. Santa also has an extensive network of helpers, including billions of parents who help translate these expectations and let those little ones know when they veer off course (i.e., transparency of performance information). Furthermore, there are those constant reminders we parents give in the way of "time outs" and punishments if our little guys don't get back on track quickly. And while I've never experienced it firsthand, there are those horror stories we've all heard about the stockings full of coal.

One of these days I'll have to arrange a "best practice" site visit to the North Pole to see this stuff firsthand. How does he keep track of all of those performance reports? "Checking it twice" has got to be a huge undertaking, but somehow it all gets done right, since I haven't heard of any North Pole Enrons, WorldComs, or Tycos lately. And there is certainly no shortage of rewards for good performance: the plethora of toys and games that magically show up every Christmas Eve. Yup, this is definitely a business model worth exploring.

So as we prepare our organizations, systems, and processes for 2006, let’s take a page out of Santa’s playbook and focus on these three key elements of performance management. I’m sure if we do, 2006 will bring us a much stronger PM process, better and more consistent performance results, and the good fortune that often comes with it.

I wish all of you the best this holiday season, and remember to keep an eye on that chubby old guy from the North. He’s likely to teach us some more great lessons in the days to come.

-b

Author: Bob Champagne is Managing Partner of onVector Consulting Group, a privately held international management consulting organization specializing in the design and deployment of Performance Management tools, systems, and solutions. Bob has over 25 years of Performance Management experience and has consulted with hundreds of companies across numerous industries and geographies. Bob can be contacted at bob.champagne@onvectorconsulting.com

 

Managing Performance in an Outsourced Environment (Are our Corporate IT systems up to the challenge?)

Let's face it, competitive outsourcing is here to stay. We don't have to agree with it politically, emotionally, or theoretically…it's just a fact of life in today's business environment…which raises the question of whether our performance management processes and systems are up to the task.

For all that has been written about the practice of outsourcing (and there's no shortage of writings in this space), precious little has been said about if and how our PM processes and systems will need to change in a heavily outsourced environment. Perhaps this is because many companies still see an outsourcing relationship as just another vendor to be managed: a key vendor or strategic partner perhaps, but a vendor relationship nonetheless. But is it really that simple? To answer this question, it's worth looking at a couple of key aspects of performance management that have shaped this landscape in recent years.

On one hand, there is the reality of outsourcing, and the overwhelming complexity of dealing with an overextended network of information flows, many of which will ultimately exist outside of your corporate information portfolio. On the other hand, we've had the significant growth of ERP and other corporate-wide reporting systems, an IT "wave" that is replacing our legacy mainframes with the latest and greatest in enterprise reporting technology. The operative word here is "enterprise", and what that word really means for the future of performance management.

While the wave of ERP systems has brought some well-needed perspective and improvement to our performance reporting environment, it has also created a level of "structure" that may be difficult to maintain in tomorrow's business environment. The reality is that hundreds of millions of dollars have been spent on this transformation, an investment that could soon end up in our museum of IT history if we are not careful. Outsourcing poses the biggest risk in this arena, as it will quickly challenge the very structure that these latest and greatest corporate applications set out to achieve.

Let's look at a typical outsourcing context. Take a function like facilities management…stuff like corporate security, catering, janitorial services, equipment maintenance, and the like: a function that was once one of the many departments making up our internal organization. Only now, this function has become heavily outsourced because of the scale and unit cost efficiencies achieved by shifting these services to a best-in-breed provider (an obvious end state for all "non-core" functions like this).

On the surface, the outsourcing of a function like this appears to be a significant "win-win". That is, until the company tries to roll the management of this function into the corporate IT fold. What was once a simple task of rolling up accounting and HR data from internal systems is now a task that may involve up to 10 different vendors. If the complexity of capturing costs from this many points of service doesn't kill you, the process of understanding and normalizing the differences in data reporting and accounting practices certainly will.
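
The mechanical part of that normalization problem looks something like the Python sketch below (the vendor names, field names, and common schema are all invented for illustration):

    # Map heterogeneous vendor cost feeds into one common schema.
    # Vendor names, field names, and categories are invented for illustration.
    def normalize(vendor, record):
        if vendor == "janitorial_co":
            return {"service": "janitorial", "period": record["billing_month"],
                    "cost": record["amt_usd"]}
        if vendor == "security_co":
            # This vendor bills in cents and uses a different date key.
            return {"service": "security", "period": record["month"],
                    "cost": record["amount_cents"] / 100}
        raise ValueError(f"no mapping defined for vendor {vendor!r}")

    feeds = [
        ("janitorial_co", {"billing_month": "2005-11", "amt_usd": 42_500.00}),
        ("security_co",   {"month": "2005-11", "amount_cents": 9_875_000}),
    ]
    total = sum(normalize(v, r)["cost"] for v, r in feeds)
    print(f"Facilities management, 2005-11: ${total:,.2f}")  # -> $141,250.00

Now multiply that mapping by ten vendors, each revising its file format on its own schedule.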

And that's not the worst of it. The "zinger" in all of this is that you've just spent $80 million as a company to develop your "integrated" reporting framework, which, at a minimum, will have to be re-tooled to integrate with the myriad of relationships that are now reflected inside a single outsourced process. That assumes, of course, that all of these vendors and partners "play ball" your way: an unlikely reality, to say the least.

If you're an IT director responsible for implementing one of these integrated reporting systems, this is the proverbial train wreck waiting to happen. But don't jump off that bridge quite yet, because there is a silver lining: that is, if you are willing to challenge the conventional way information is managed. The answer lies in embracing what some refer to as an "inside out", rather than "top down", information management framework.

So what do we mean by an "inside out" information framework? Let's start from a different place. Imagine a world where an enterprise is really a large collection of many businesses, all of which can be viewed as independent competitive entities, assembled in a way that is strategically connected to the vision, mission, and objectives of the corporation.

That's right…everything from the security guards on the first floor to the investor relations department on the thirty-fifth. Instead of each of these businesses being given a budget, they are given a clear set of KPIs, a list of competitors, and a performance contract with clear incentives and accountabilities. They determine (with some coaching if necessary) what information they need to manage their business and achieve their outcomes. They may be given some tools of the trade to manage this information, but the information is theirs to manage.

Conversely, at the portfolio level, leadership defines the outcomes that each of these businesses is to achieve. The portfolio level can be a very small team of individuals, each of whom is accountable for defining what they need, how much of it they need, and the competitive price they're willing to pay. They have their own dashboards and KPIs to manage, but they are far more focused on outcomes than on operational indicators (the "hows" of how the business is managed, rather than the "whats" of what it must achieve). The operational side of the business (the "hows") is managed in a highly decentralized manner, often by the providers of these services themselves, who are in many cases external vendors and suppliers. Performance management becomes a highly decentralized portfolio management game: a world where integration of the provider network becomes far more important than achieving that perfect "top to bottom" architecture and warehouse of corporate information.
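
In code terms, the portfolio view cares only about the outcome contract, not the operational detail underneath it. A minimal Python sketch (the business names, KPIs, and targets are hypothetical):

    # Portfolio-level view: each business is managed against outcome KPIs only.
    # Business names, KPIs, and targets are hypothetical.
    portfolio = {
        "corporate_security": {"incidents_per_quarter": (7, 5)},   # (actual, ceiling)
        "investor_relations": {"analyst_coverage":      (12, 10)}, # (actual, floor)
    }
    direction = {"incidents_per_quarter": "max", "analyst_coverage": "min"}

    for business, kpis in portfolio.items():
        for kpi, (actual, target) in kpis.items():
            ok = actual <= target if direction[kpi] == "max" else actual >= target
            status = "on contract" if ok else "REVIEW"
            print(f"{business}.{kpi}: actual={actual} target={target} -> {status}")

How security incidents are kept below the ceiling is the provider's business; the portfolio only reviews the outcome.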

There are lots of ways to describe a model like this. Some refer to it as an "asset management" orientation, where assets are managed separately from the services that construct, maintain, and service them. Others call it a management philosophy of "universal contestability". Still others call it a framework for simply rationalizing and outsourcing services. But whatever you choose to call it, it poses a dramatically different challenge for us, one that, if not met head-on, sets up our huge IT investments for failure.

So what specifically needs changing?

For starters, information needs in the outsourcing context are markedly different, and need to be identified as such. Today, the information needed to guide the outcomes and run these competitive businesses may not even exist in our legacy systems, and in turn is not likely to end up in the ERPs themselves. To continue with the facilities management example, try comparing a performance report (assuming there is one) of an internal corporate security department with that of, say, Pinkerton (a competitive provider of security services nationwide). They are dramatically different in both design and content.

Next comes the challenge of managing one of these entities when and if it becomes outsourced. How much of that information will be needed from the vendor? How much will come from your systems? How will you blend the two when necessary, under the likely scenario that the data-sharing protocols are different?

This challenge of integration is far more important than the challenge of aggregation, which is the foundation of most of our corporate systems. We are fixated to some degree on terms like the "cascading scorecard", which by definition sets us up to manage each of these functions down to the work-face level, rather than as a logical network of relationships between the corporation and its nodal-style network of strategic suppliers and providers.

By applying a more decentralized, portfolio-managed construct to our information needs, we begin to paint a more accurate picture of how our organizations will function in the future, enabling our ERPs to function effectively at the result or outcome level.

As you implement your PM reporting systems, think small and grow outward. Develop systems to meet the needs of each discrete business, individually at first. That doesn't mean you can't use the same software or measurement frameworks, or ultimately replicate and link to other business processes and functions over time. It doesn't mean you can't connect these businesses strategically.

In the performance management world, smaller is better, at least to start with. It's easy to build on successes and link things together over time, as long as you keep the framework flexible and adaptable. Avoid the tendency to build the perfect system, one that looks great on paper but won't come close to surviving the challenges posed to it over time. The complexity you eliminate will go a long way toward delivering superior information at a fraction of today's cost.

-b

Author: Bob Champagne is Managing Partner of onVector Consulting Group, a privately held international management consulting organization specializing in the design and deployment of Performance Management tools, systems, and solutions. Bob has over 25 years of Performance Management experience and has consulted with hundreds of companies across numerous industries and geographies. Bob can be contacted at bob.champagne@onvectorconsulting.com