A Simpler (and Faster) Path to Cx

Whether you’re a racing fan or not, chances are you’ve seen the video below as it made its social media rounds last year. As a junkie for organizational agility and speed, I never get tired of watching it. And if you haven’t seen it, by all means take a look, as it speaks volumes about today’s topic.

Those of us who spend time in and around Customer Service organizations see our fair share of big investments—new IT deployments, reengineering of back-office processes, upgrading our contact centers … the list goes on. With the introduction of each new Customer Experience (Cx) program, the size of the investment portfolio grows even further through projects like enterprise-wide journey mapping, training initiatives, new service channels, and improvements to research and data analytics platforms. Big projects are a reality for CCOs and their leadership teams, and justifiably so. Maintaining customer support infrastructure is undoubtedly key to our long-term success.

But are we putting too much emphasis on our customer infrastructure at the expense of the smaller and more actionable practices that could generate more immediate results?

When Smaller is Better…

When asked to describe their Customer Experience initiatives, many CCOs point to the “small stuff” as being key to the results they’ve achieved. In a world where everyone talks about “Big”—big data, big projects, big commitments—it’s these small, seemingly insignificant practices with not-so-small impacts that are becoming the poster children of their efforts.

I’m talking about practices that don’t require an “act of Congress” to implement—the ones that are just good common sense and take next to nothing to put in place, except a little foresight and follow-through. Simple and easy. Yet an overwhelming number of organizations still focus on the big solution being implemented and, in doing so, miss the opportunities to make a difference today.

Consider call/contact centers for a moment, where “big stuff” always takes center stage. How often do we hear “When our new CIS is in place,” “As soon as we implement speech analytics,” “Once we get that new IVR,” etc. But the reality is that most of what we need to make incremental—and sometimes big—changes is already there for those with the creative energy to act on it.

Fix-it Fridays

A past client (we’ll call her Sarah), one I still regard as a brilliant Customer Service manager, was excellent at demonstrating this concept, i.e., using what she had at her disposal today, combined with a real action bias, to catalyze big change. One of my favorites was a practice she called “Fix-it Fridays.”

During the week, she would mine a few recorded calls for good examples of customer interactions that were “less than optimally handled.” This could mean the rep simply misunderstood the customer’s issue and employed an ineffective solution, or that a good solution was just poorly delivered and/or executed.


Each Friday afternoon, she would get small groups of reps together (voluntarily, but usually enticed with a bit of free food or cake) to brainstorm better ways of handling these customer situations. They would listen to a sample call together and discuss how the rep handled the interaction. Then they (not the supervisor, QA manager, or trainer, but the front line reps themselves) would talk about how they would approach the call differently. Challenge and debate were encouraged. But it was also a safe and rewarding experience that left everyone, including the rep in the “case study,” feeling better equipped to deliver on their Cx commitment. As this manager used to say, “it’s a little like looking in the mirror when you apply your own service standards to the responses we deliver day in and day out.”

Many organizations use some variation of this in their centers. Nearly everyone has a QA/monitoring process in place (although many place their focus on procedural and policy compliance rather than emerging Cx values and standards). Most have decent follow-up mechanisms for supervisor coaching when problems occur. And (most of the time) when broad themes emerge, they work them into their ongoing training.  But all of this takes time. And, increasingly, such efforts rely on technology and infrastructure to mine interactions, which often means more time and complexity.

Sarah’s approach was focused on “time to market.” It didn’t discount the value of the existing process or the opportunities new technology can offer. Rather, it simply looked for ways to act more quickly. Perhaps more importantly, she used her weekly forums as a way to teach staff how their Cx standards were really being applied, by immersing them directly and by letting the team explore those standards in real time. The focus wasn’t on developing new policies or approving new scripts. It was about learning and applying good Cx.

Your reaction to this may be that you achieve these results through your QA process and ongoing coaching. But before you discount Sarah’s practice as run-of-the-mill, ask yourself:

  • How long does it take employees in your organization to act on a solution once it’s identified?
  • Do you encourage bad practices to be changed on the spot, sometimes on the basis of good instinct or common sense, or do all changes have to go through your business improvement processes and protocols?
  • Once a new approach is identified, how quickly is it shared and institutionalized?
  • Do your managers and staff feel empowered to take risks and deploy changes quickly?
  • Are small “experiments” allowed, knowing that most can be “unwound” if they prove to be less effective than anticipated?


Examples like this abound throughout our customer service organizations—process fixes, touchpoint improvements, intelligence gathering techniques, and many more. And there is no doubt that the projects and initiatives we have in place to deal with these challenges will lead us to a more consistent and sustainable application of our Cx strategy. But without an equally ambitious focus on the smaller solutions, and a bias from the organization to support them, those quick wins simply won’t happen.

Commit today to making the small stuff an equal priority within your company. Ask for it, reward it, and manage to it. The wins may seem small at first, but stack up enough of them and you’ll discover stronger momentum and a faster ROI on your Cx investment.

Bob Champagne is Managing Partner at onVector Consulting. Bob has over 25 years of experience designing and delivering performance management and governance solutions at the enterprise and business-unit levels of the organization. Bob can be contacted at bob.champagne@onvectorconsulting.com or through LinkedIn at http://www.linkedin.com/in/bobchampagne
onVector’s Line of Sight solution suite has been utilized by its client organizations to establish the critical linkages between strategies, initiatives, and KPIs, enabling better alignment, higher levels of performance, and a faster path to ROI. onVector’s Line of Sight methodology has been adapted to facilitate the unique management and governance needs of many strategic initiatives across the organization, including Customer Experience.

 To learn more about Cx Solutions available through onVector, including:

  • Cx Readiness Assessments
  • Cx Program Startups
  • Cx Alignment & Standards Development
  • Rapid Touchpoint Renewal
  • Cx Management & Governance Solutions

visit us at http://onvectorconsulting.com/cxsolutions

 

 

Governing Cx through Line-of-Sight

An end-to-end approach for managing customer experience strategy and delivering on its promises...

Over the past 24 months, Customer Experience initiatives (Cx programs, as they have come to be called) have climbed to the top of the radar screens of most leadership teams. Organizations are abuzz with projects to identify “touchpoints,” map “customer journeys,” and strengthen their customer-facing business processes. Alongside these initiatives are even larger investments in acquiring the data and analytics required to feed and sustain these service improvement strategies.


2014: The Year of Touchpoint Renewal


CxTAM Industry View
Copyright 2013, onVector Consulting

At its core, Cx is about the pursuit of delivering an exceptional customer experience across every touchpoint, every time.

That’s a pretty ambitious goal, and one that I’ve begun to refer to with my clients as the “Cx Holy Grail”.

EVERY touchpoint, EVERY time? Think about it. Every time we buy a product, activate a product, use a product, get support, renew our service, suspend or terminate our contract (and the list goes on), we must deliver an exceptional experience. Some would even say that our viewing of advertisements, interactions with social media, and even our passive conversations with others about our experiences qualify as touchpoints that need to be “managed.” And they wouldn’t be far off.

So where do we focus first? Which touchpoints? Which parts of those touchpoints? What can wait? What can’t?

One way to simplify the madness is to have a common set of unifying standards that every part of the organization can identify with, routinely. While statements like “Exceptional Cx: Every Touchpoint, Every Time” make for good mantras and vision statements, our Cx program will be short-lived unless those statements can be translated into a clear set of observable, measurable, and actionable factors. Without these, you’re flying blind, with no way of knowing when something is broken, where improvement is needed, or how to fix it. That’s a core principle in managing any strategy, and one that is glaringly missing from most Cx programs today. Our Touchpoint Assessment Model (TAM), and the 12 attributes that comprise it, was constructed to address that gap and help our clients better focus and navigate their Cx improvements.

TAM in a Nutshell

While the model is based on quite a bit of research, client experience, and some pretty creative crowdsourcing, its structure and architecture are quite simple: three key areas of focus comprising 12 unique and discernible attributes.

The first four attributes deal primarily with the product or content being served up in the transaction. The second four deal with the process through which the interaction occurs. And the final four relate to the style and delivery of the transaction. Each of the 12 attributes is worthy of separate discussion and exploration, which I’ll cover in subsequent posts. But for now, here are the highlights.
An Exceptional Customer Experience requires that the:

Content or Product is:

  • Relevant to the specific transaction, persona, and context at play
  • Useful in serving its intended purpose
  • Reliable and consistent in its delivery
  • Value accretive (we’ll explain this more later, but suffice it to say, it’s the differentiable “smart value” stuff that gets noticed)

Delivered through Processes and Mediums that are:

  • Crazy simple
  • Responsive to the required or desired outcome of the transaction
  • Efficient and free of waste (“my time” and “yours”)
  • Transparent when they need to be

With an accompanying Style and Tone that is:

  • Inviting and engaging
  • Real and authentic
  • Appropriate to the context and customer circumstance
  • Helpful and resourceful

Within each of these 12 attributes are corresponding definitions, metrics and practices that paint the full picture of what is required to achieve what we would call “best practice”. It’s a model that has been constructed over 36 months of research and client experiences, along with a healthy dose of reader input and perspective. Is it perfect? Of course not. But it does provide a good set of distinctions that help break down where our issues lie and what can be done to begin turning things around in the right direction. What’s profiled in the chart above is how our clients have graded themselves in a recent survey of current Cx program focus. Do we agree with all of these assessments? Probably not. But it does show that most believe there is considerable room for improvement. And after all, that’s the point of all of this.
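To make the model a bit more concrete, here is a minimal sketch in Python of how a single touchpoint might be scored against the three areas and twelve attributes. The attribute keys, the 1-to-5 rating scale, and the simple averaging are illustrative assumptions for this post, not onVector’s actual scoring methodology.

    # Minimal sketch: score a touchpoint against the TAM's three areas.
    # Attribute names and the 1-5 scale are illustrative assumptions only.
    TAM_ATTRIBUTES = {
        "content": ["relevant", "useful", "reliable", "value_accretive"],
        "process": ["simple", "responsive", "efficient", "transparent"],
        "style":   ["inviting", "authentic", "appropriate", "helpful"],
    }

    def score_touchpoint(ratings):
        """Average 1-5 ratings into a score per area, plus an overall score."""
        summary = {}
        for area, attributes in TAM_ATTRIBUTES.items():
            scores = [ratings[a] for a in attributes if a in ratings]
            summary[area] = sum(scores) / len(scores) if scores else None
        rated = [v for v in summary.values() if v is not None]
        summary["overall"] = sum(rated) / len(rated)
        return summary

    # Example: one reviewer's ratings for a support-call touchpoint
    ratings = {
        "relevant": 4, "useful": 4, "reliable": 3, "value_accretive": 2,
        "simple": 3, "responsive": 4, "efficient": 2, "transparent": 3,
        "inviting": 4, "authentic": 5, "appropriate": 4, "helpful": 4,
    }
    print(score_touchpoint(ratings))

Scored across a sample of interactions, the same simple structure can roll up into the kind of self-graded industry view shown in the chart above.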

Throughout 2014, I’ll be posting periodically on different aspects of the model, as well as case studies on how our clients are using the framework within their Cx programs and governance processes to drive sustainable change. And as I have in the past, I’ll “pepper” things a bit with my own personal experiences which, as many of you know, are viewed through a pretty critical Cx lens. Taken together, I believe this input will provide our readers with a useful perspective from which to measure and strengthen their Cx program.

To all of my clients and colleagues, thanks for a great 2013. I look forward to our continued collaboration in 2014 and the learning and sharing that goes with it.

For more information on the CxTAM, and how it can help accelerate and strengthen your touchpoint renewal efforts, visit our onVector Cx webpage, or contact us at Cx@onVectorConsulting.com

-b

Bob Champagne is Managing Partner, Customer Experience Solutions, at onVector Consulting Group. Bob has over 25 years of experience in Cx and Customer Operations, with emphasis on the global energy and utilities sector. Bob has consulted with hundreds of companies across numerous industries and geographies. Bob can be contacted at bob.champagne@onvectorconsulting.com or through LinkedIn at http://www.linkedin.com/in/bobchampagne

Crowdsourcing Through A Crisis

The collapse of a conventional service model, and the rise of a new one…

One of the exciting things about my line of work is that we see lots of new ideas and ways of doing business. Not all of them make sense at the time. Some just have to evolve. But over time, we witness step changes where bleeding edge becomes leading edge. And what causes that to occur is that a brand new “use case” emerges for a solution or technology that has been waiting for its time.

Silence in the dark…

During Hurricane Sandy, many of us in the Northeast were starved for information. Millions of customers were without power, gas stations were out of service, grocery stores were closed, and just about every part of life as we knew it shut down. For many, it was a “mini Katrina,” making us reflect on what life must have been like in New Orleans in 2005. For others, it was the real thing.

It’s hard to get really upset about things when you look at them through such a lens. In large measure, folks affected by Hurricane Sandy were pretty patient, at least in the early going. But over time, that patience waned and the demand for information escalated—information that never really materialized. For the most part, customers understood they weren’t going to get specifics, but they wanted something. What they got was nothing.

The sum total of the narrative was:

  • “This is an unprecedented event”
  • “It’s not our fault”
  • “We’re all doing the best we can”
  • “It will be a minimum of 7-10 days before things return to normal”

On day one, that may have been an appropriate response (or at least somewhat understandable), but with each additional day, that response created more and more frustration.

A core competency becomes irrelevant…

Ironically, utility companies pride themselves on service excellence, and for the most part do a fairly good job of it. Most have made dramatic strides in terms of customer experience. Sure, there is a lot more they can do, but for the most part, the nature, speed, and quality of service have all improved. Most have invested significantly in upgrading core service and delivery channels (call center availability, CRM technology, metering and billing systems), and have expanded the range of service options (web, mobile, kiosks, etc.).

And as you might expect (since it is one of their biggest drivers of satisfaction), most have gotten better in terms of outage notification, communications, and restoration. Most major utilities have online outage maps available that provide location, current status, and restoration times. Infrastructure has gotten better with expanded use of distribution automation and switching technologies. The logistics of managing restoration efforts have improved. And customer communications (both proactive and reactive) have expanded. After all, it is (and should be) a core competency.

But eight weeks ago, none of this really mattered. Sandy was an unprecedented event. The foundation for most of the improvements referenced above was compromised by the storm. You can only provide information you have, and the damage assessments had only just begun. Even if you possess the technical and informational resources required to provide updated outage maps, doing so assumes that customers can get online to view them. Most outage notifications require a phone call. Most phones today are VoIP and require power to function. Ironically, the call centers were generally available and functional (the three times I called, the call was answered in less than twenty seconds), but with none of the other parts of the process working, the reps just became another “talking head” for what we already knew (or didn’t know): the storm was an unprecedented event. It was not their fault. They had no information. And it was a waste of your (and, by inference, their) time to contact them for more information before 7-10 days had passed.

Electric power is an interesting kind of product. You don’t think about it much. In fact, studies have shown that the typical customer only thinks about their utility 6-9 minutes a year. You could say it’s like “air”—you only think about it when you can’t get it.  And when it’s unavailable, it certainly occupies a lot more of your mind-space. Most complaints and pockets of significant dissatisfaction can be directly ascribed to extended outage situations. And those impressions last a lifetime.

So while utilities have invested heavily in service improvements, one could argue that their systems have been designed for everything OTHER THAN that which poses the most significant risk to satisfaction and loyalty. But again, it’s a bit of a Catch-22. Designing a system for a low-probability but catastrophic event that will most likely render the system itself useless seems somewhat circular and, ultimately, futile. Or does it?

Customers take matters into their own hands…

In circumstances like this, some customers intellectually understand the position the utility is in. They may not like it, but once it becomes clear that they are asking for information the company simply doesn’t have or can’t provide, they come to understand that continued efforts to obtain it are pointless.

During the storm, many customers turned to online news, discussion boards, online forums, and social media for their information, with varying degrees of success. What many wanted to know was whether work had begun in their area, whether the source of the problem had been identified, and a better (if not definitive) sense of restoration time. But for the most part, they wanted to know that repairs in their area had been initiated. Or, in some cases, that their power HAD been restored (many were staying with relatives or friends out of town, and this would be their only way to learn that things were back up and running). Last year during a similar event, my wife learned we were restored via Facebook, two days before the utility reported it on the web.

 A solution staring you in the face?

But here’s where it gets interesting. During any such wide-scale outage, there are hundreds of thousands of eyeballs scouring these channels, many capable of providing information the utility doesn’t have access to—localized damage, poles down, safety issues, and plenty more. But the vast majority of this information goes unharvested.

In fact, with Sandy it was just the opposite. My local utility was literally pumping out messages, at times more than three or four per hour. But the sum total of the content was—yes, you guessed it… that it was an unprecedented event; that it was not their fault; that they had no information; and, yes, that unnerving seven-to-ten-day restoration prediction. There was the occasional posting of “ice and water” locations (met with sarcasm, since temps were often well below freezing). There were some attempts by customers to engage with whoever was providing these messages, but they were always met with some variation of “we don’t know.”

Then I noticed something interesting. A few customers began hashtagging tweets with specific information about their location. Soon others were replying in kind—simple things like “we have two trucks on our street.” Then someone else would chime in with an address that had been restored.

One of the utilities in our area (not ours, mind you) even attempted to “coordinate” some of the dialogue between customers—things like “thanks for your question about such-and-such a town; we don’t know the status, but three customers are reporting activity in your area.” That type of coordination was very rare and short-lived, but someone had begun to see an opportunity and was willing to act on it. For the most part, though, the majority of the dialogue was between and among customers, and the utilities were largely absent from the conversation, save for the repeated banter of … “7-10 days.”

What if?…

OK—extrapolate with me a bit here…

What if the posts and tweets of customers were slightly more structured—like we saw with customers hash-tagging their locations?

What if utility reps played a role in facilitating the conversations between customers and other information providers and consumers?

What if we had systems that could synthesize large volumes of unstructured data that was already out there and actually add value to it?

What if utility workers and restoration crews could post directly to appropriate forums and boards as the work was being done?

In short, what if the customer was actually a participant in providing customer service?
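To make the “slightly more structured” idea tangible, here is a minimal sketch in Python of how hashtagged customer posts might be aggregated into a simple per-location status board. The tag convention (#outage, #crew, #restored, plus a #loc: tag) is invented for illustration; it is not any utility’s actual protocol, and a real system would need to handle much messier, free-form input.

    # Sketch: turn hashtagged customer posts into a per-location status board.
    # The tag convention below is a hypothetical one, purely for illustration.
    import re
    from collections import defaultdict

    STATUS_TAGS = {
        "#outage": "outage reported",
        "#crew": "crew on site",
        "#restored": "restored",
    }

    def parse_report(message):
        """Return (location, status) if the post carries both tags, else None."""
        loc = re.search(r"#loc:(\w+)", message, re.IGNORECASE)
        text = message.lower()
        status = next((label for tag, label in STATUS_TAGS.items() if tag in text), None)
        if loc and status:
            return loc.group(1).lower(), status
        return None

    def build_status_board(messages):
        """Count reports per location and status so consensus is easy to spot."""
        board = defaultdict(lambda: defaultdict(int))
        for msg in messages:
            parsed = parse_report(msg)
            if parsed:
                location, status = parsed
                board[location][status] += 1
        return board

    posts = [
        "Two trucks just pulled up! #crew #loc:Maplewood",
        "Still dark on Oak Ave #outage #loc:Maplewood",
        "Lights are back on #restored #loc:Summit",
    ]
    for location, counts in build_status_board(posts).items():
        print(location, dict(counts))

Even something this crude would let a rep, or a customer staying with relatives out of town, see that multiple reports from the same area mention crews on site—exactly the kind of signal that went unharvested during Sandy.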

Emerging Models…

Farfetched? Perhaps.
But there are models that are beginning to look exactly like this.

giffgaff, a UK mobile service provider, has migrated almost completely to customer-provided customer service. No call centers, no lobbies, just a portal for customers to report issues and actually play a role in responding to and solving problems. There are reward mechanisms where customers can earn points toward minutes or other merchandise for contributing valuable content and solutions. The model isn’t perfect, and some customers would say it’s just another way to save money and push self-service, but if you look a bit more closely, you’ll see that this is a radically different approach.

There are a variety of services (SeeClickFix and Get Satisfaction, to name a couple) that have begun to apply the concept of “crowdsourcing” (a more formal name for aggregating and extracting value from available streams of social content in the provision of services) to the provision of infrastructure and other utility services. In both cases, these companies provide a structured app for customers to report issues, and for companies to respond, while tracking progress along the way—everything from fixing potholes to streetlights. These services, however, are not completely tied to the company providing the underlying service. It does behoove the service provider to monitor and participate in the resolution process, since the complaints, issues, and resolutions have so much visibility.

Take-aways…

I believe we are at an interesting juncture in customer service—one that can not only improve the process and costs of providing customer solutions, but radically change the delivery model.

To me, the take-aways are threefold:

  1. Become a more active participant in your customer communities. Don’t just settle for perfunctory uses of Twitter and Facebook as communication channels. Add value to the dialogue that is already taking place. Instead of trying to create new followers or get “Liked,” try to join in on active discussions.
  2. Change how we think about service infrastructure and technologies. I would say 90% of our systems are based on using information collected from customers by our companies FOR our companies. Instead, think about how we might harness information from the broad range of unstructured data already out there (information already provided by our customers FOR our customers) in the provision of better and more relevant customer experiences.
  3. Focus on the one or two areas where dissatisfaction and loyalty are most at risk. Most companies design their service infrastructures for the average environment, when the larger risk is posed by the anomalous circumstance (the three-week outage, the blizzard that closes ten airports, etc.). That will likely change the tools, technologies and even the customer portals that are used for providing service, as well as who actually provides it.

Some could argue that engaging in this sort of thinking poses a significant competitive threat (by removing you from the process and giving competitors a window into your customer relationships). I would argue just the opposite—that customer communities exist all around us, and many of us are blind to the value they can provide in terms of better, cheaper, and more relevant service.

Instead of viewing these communities as a threat, look for ways to engage with them and add value.

-b

 


Hitting Your Numbers in 2013

As we said goodbye to 2012 last Monday night, many of us were already thinking about the year ahead. For some, thinking about the future and setting goals for the year ahead is just a natural part of their “wiring”—an annual renewal process, if you will. But for many, it’s a way to declare a fresh start—basking in the glory of the things we achieved last year, saying good riddance to the things we didn’t, and making those proverbial “resolutions” about the things we want to improve and our forward-looking goals and targets.

Doing the same thing…and expecting a different result

As we all know, no matter what our new year’s declaration of improvement may be, whether it’s breaking a bad habit, adopting a good one, or just improving on something that’s important to us, many would concede that their success rates are fairly modest, with only a scarce few of these resolutions ever making it past the first couple of weeks.

But despite the fact that most achieve far less than what they set out to, we nonetheless go mind-numbingly through the same process year after year. You could say that the end of the year, and the state of mind that accompanies it (induced or otherwise), makes us a bit Pollyannaish about the future, which, in turn, causes us to overreach somewhat.

Reasonable behavior for a typical human, granted. But is it as reasonable to expect the same apparently irrational behavior pattern from a corporation, whose goals are presumably established in a more thoughtful (and usually more sober) manner? Is it surprising that these goals often realize the same miserable success rates?

Underachievement breeds underachievement

On a flight home last week, I sat next to an individual who works as a planner/scheduler in a petrochemical plant, in charge of maintenance practices. For him, one of the key measures of success is simply the percentage of PMs and CMs (preventive and corrective maintenance work orders) that are completed as scheduled. Most of us who don’t work in that industry would assume the goal to be fairly high, say north of 90%. But as it turns out, the industry average appears to be in the 80% range, and at this particular facility, they were struggling to hit 40%!

I see this a lot with my clients, across multiple business processes. In fact, I’d say it’s more of an epidemic than a random set of occurrences. Call centers that plan for particular service levels but end up in a huge “recovery” mode in the middle of the year based on changes to a handful of base assumptions. Sales targets that need to be dramatically adjusted based on lower-than-expected conversion rates. Employee churn that seemingly appears out of nowhere. Not to mention runaway costs and budget overruns in capital projects and initiatives.

Yes, of course, these are business realities that will always occur. Many are unpredictable but can be reasonably well contained with good contingency planning and risk management practices, or by adjusting the portfolio to have an overperforming area compensate for an underperforming one. Either way, we have accepted the fact that there will always be some level of error or slippage in our planning. The key, of course, is to minimize it.

Strengthening your performance plan

It all starts with understanding how poor target setting occurs. Here are a few of the most common breakdowns:

  1. Failure to specify and declare accountability—Many mid- to upper-level managers have a tendency to set goals at only a high level, consistent with what they must accomplish for compensation metrics and bonus payouts. For example, we might set productivity and quality goals for a regional operating group, or a customer contact center, or a production facility, but not “cascade” the measures to the discrete parts of the operation. That causes two problems: 1) accountability remains with the senior manager/executive and never flows down to the level where it can be most directly affected, and 2) the goals themselves are often misinformed, or at least not crafted with the best insight available.  The result—all sorts of end-of-year juggling and balancing to make the sum of the parts hit the target number, which only works as long as there is enough slack to make up for one or more component shortfalls.  It also creates difficulty in terms of understanding and diagnosing downstream problems and trends.
  2. Weak basis/grounding for forecasts—One of the biggest frustrations I hear from executives concerns their organizations’ ability to produce valid and reliable forecasts. Without a good forecast, it is virtually impossible to set useful and achievable targets. Part of good forecasting is understanding the component parts of the forecast, which we already discussed above. But more important still is the ability to define and understand the drivers of what you are trying to forecast. For example, if our goal is to forecast service responsiveness in the call center (say, the percentage of calls answered within an acceptable hold time), we need to understand call volume, staffing capacity, and assumptions about productivity (current levels, expected gains, etc.) at a minimum. Understanding those factors a level or two down the cause-and-effect chain (say, at a call-type level) would certainly increase confidence in the forecast. But creating a really robust forecast requires that we go well beyond that and understand the “drivers” of the components themselves—what factors are correlated with the attributes we are trying to forecast, and by how much? So what does this look like in practice? Instead of carrying forward total volume assumptions from the year prior, we create a zero-based (bottom-up) forecast based on predictive variables and leading indicators (e.g., a change in the volume of local/regional building permits might be used to tweak our assumptions about the volume of new-connection call types); see the sketch after this list.
  3. Alignment gaps—Even with the best planning assumptions and accountabilities in place, there must be strong alignment across the various stakeholders who contribute to the forecast. That may sound like “motherhood and apple pie” to some of you, but I’ve seen too many cases where Department A makes a change to a business process to affect a certain operating metric without a clue of how that metric might be relied upon in other downstream forecasts. A good example of this is the impact that operational or product changes have on customer service and support requirements. Sure, if we do well in defining the forecast attributes and cascading accountability, we should be able to minimize some of this risk. But unless we take the time to help our cross-functional managers and peers understand the interrelationships and dependencies between operating metrics and forecasts, there will always be significant room for surprises.
  4. Weakness in measurement and reporting—Last but not least is the importance of good measurement and reporting practices that help identify issues before they become problems affecting the performance of the portfolio or the business as a whole. We should measure not only the operating results, but also the performance against each variable that contributes materially to that outcome, as well as how effectively we predicted the nature and impact that each has on our business performance.
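To illustrate the driver-based approach from point 2, here is a minimal sketch in Python of a zero-based, leading-indicator forecast for one call type. The permits-to-calls ratio, the productivity adjustment, and the sample numbers are illustrative assumptions, not benchmarks from any client.

    # Sketch: forecast new-connection call volume from a building-permit
    # forecast rather than last year's totals. All numbers are illustrative.
    def forecast_new_connection_calls(permits_forecast, calls_per_permit=1.8,
                                      productivity_gain=0.0):
        """Monthly call-volume forecast derived from a leading indicator.

        permits_forecast  -- expected building permits issued per month
        calls_per_permit  -- historical ratio of connection calls per permit
        productivity_gain -- expected reduction in repeat calls (0.05 = 5%)
        """
        return [permits * calls_per_permit * (1.0 - productivity_gain)
                for permits in permits_forecast]

    # Compare the driver-based forecast against a flat "last year plus 3%" view.
    permits = [120, 135, 160, 190]          # permit forecast for Jan-Apr
    driver_based = forecast_new_connection_calls(permits)
    prior_year_plus_3 = [210 * 1.03] * len(permits)

    print("driver-based:   ", [round(c) for c in driver_based])
    print("prior year + 3%:", [round(c) for c in prior_year_plus_3])

The point isn’t the arithmetic; it’s that each input (permits, the calls-per-permit ratio, the productivity assumption) becomes an explicit, measurable variable that can be tracked and course-corrected during the year, which is exactly what point 4 calls for.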

At the end of the year, or any reporting period for that matter, we all want to be in a position to declare success on our initial goals for the year. And where we haven’t been successful, we want to at least have had ample opportunity to course-correct and get back on track, or deliberately declare a different target. What we don’t want is to miss the numbers and not know why. Again, it sounds like a no-brainer, but those kinds of questions and blank stares still plague many business and operating executives when it comes to missed performance goals.

Looking at how we performed as an enterprise, business unit, or function is an essential part of managing. But it is equally important to study the effectiveness and consistency with which we set our goals, targets, and forecasts throughout the business, as this will lead to more sustainable performance over the long run.

Let’s make that a goal for 2013.

-b