The FT Word
The FT Word is a free monthly newsletter with support management tips. To subscribe, send us email with your email address. The subscription list is absolutely confidential; we never sell, rent, or give out information about our subscribers. Here’s a sample.
Welcome to the November 2010 edition of the FT Word. Please forward it to your colleagues. (They can get their own subscription here.)
Topics for this month:
- Measuring case complexity – hopeless? useless?
- Technology spending – should you go with the crowd?
- As always, an invitation to attend the upcoming Third Tuesday Forum breakfast, which will welcome Sallee Peterson of SupportSpace on November 16th.
Measuring Case Complexity
Many thanks to Phil Rogacki for suggesting this topic, and more thanks to Dave Winpenny and Randy Stackhouse who sent comments about a similar topic.
In most support organizations, some cases are more equal than others. Some are easy, FAQ-type queries that any support engineer can resolve without much thought or effort, while others are fiendishly complex, and still others fall somewhere in between. Over the years, many support organizations have tried to develop ways to measure case complexity. Here are some selected approaches:
- Add up the number of activities per case. Generally speaking, complex cases include more activities (emails back and forth, research notes, and the like). On the other hand, some support engineers are simply chattier than others and will tend to build cases with lots of activities but not really much more effort, or complexity for that matter. Further, if you start measuring performance based on activities, and especially rewarding performance based on activities, you can count on a suspicious increase in the level of activity without any matching increase in actual productivity. So this method works best behind the scenes and after the fact, for instance to assess differences between products. And by the way, attempts to refine this approach by weighting phone calls more than emails, say, are doomed to failure.
- Consider elapsed time to resolution. Complex cases do take longer to resolve – but unfortunately there can be many other, unrelated reasons why cases stay open for a long time, including customers who drag their feet and support engineers who forget to close cases, so this is not a great approach.
- Track effort time. This was a topic in last month’s newsletter, where we highlighted that, sadly, few support-tracking tools enable automatic or even easy effort tracking, so the effort required to track effort time is likely to be large while the outcome is easy to manipulate.
- Capture escalations. Complex cases are more likely to require consultations with an expert or even handoffs to a higher level. But consultations and handoffs could be triggered by other causes, such as inexperienced case owners, and on the other hand they could be avoided by support engineers who want to appear especially self-reliant. In any case, escalations are a pretty blunt metric so not too helpful to measure gradations.
- Capture document creation. Complex cases mean that troubleshooting is required, which means that a new knowledge base document will probably be created for them. Eureka: a case linked to a new document is complex and a case linked to an existing document is “simple”. Not so fast! A case could require loads of troubleshooting only to determine that, in fact, there is a known solution already (we just couldn’t tell at first). And the last thing you want is to create weird incentives to create new documents when an existing one could be improved instead. Not a great approach.
- Do a manual audit. Unlike the suggestions above this one requires real work, but it’s probably the most reliable approach, and if you assign the right specialists to perform the audit it’s actually quite fast. This approach lends itself to a multi-tiered rating system, such as easy/medium/hard, and can be very helpful to gauge differences between product versions. You only need to rate a small percentage of cases to reach good accuracy. (Note that, unlike with many other approaches, you can accurately rate open cases.)
So you can see that rating case complexity is, well, complex! But the larger question is why you would want to capture case complexity in the first place. Here are some typical scenarios:
- To create a staffing model. A crucial ingredient of a staffing model is resolution time (effort time, that is), and resolution time, in turn, is determined by case complexity.
- To justify cost per case figures. If cost per case increases, as can happen with (successful!) initiatives such as knowledge management that shrink case load but increase complexity, you may be called upon to justify why cost per case is going up, and an obvious candidate is case complexity.
- To balance the load between teams. If team A handles 10 cases of complexity level 3.2 and team B closes 15 cases of complexity 2.7, which one did more? Is it a fair arrangement?
- To compare individual productivity. John closed 9 cases and Jennifer closed 11, but John gets the tough cases. Did the two work equally hard?
- To justify the value of support to customers. Customers who are opening just a handful of cases may doubt the value of their investment.
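To make the load-balancing question above concrete, here is a minimal sketch of the kind of arithmetic a complexity-weighted comparison implies. The case counts and complexity ratings are the hypothetical figures from the example; the weighting scheme (cases times average complexity) is just one naive choice, not a recommendation.

```python
# Hypothetical figures from the example above: team A closed 10 cases at an
# average complexity rating of 3.2, team B closed 15 cases at 2.7.
teams = {"A": (10, 3.2), "B": (15, 2.7)}

# One naive answer: complexity-weighted throughput = cases * average complexity.
weighted = {name: cases * complexity for name, (cases, complexity) in teams.items()}

for name, score in weighted.items():
    print(f"Team {name}: weighted throughput {score:.1f}")
# Team A: weighted throughput 32.0
# Team B: weighted throughput 40.5
```

By this measure team B "did more", but as the discussion below argues, the answer depends entirely on trusting the complexity ratings in the first place.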
And I would argue that measuring case complexity is not that great an idea in the first place. Here are alternatives for each scenario.
- If you are looking for a number for the staffing model, why torture yourself with case complexity? Simply divide hours by cases closed and you will have your effort time. And yes, effort time can go up over time if products become more complex, or less reliable (it happens!), or you make a good investment in self-service technology. If you want to validate the ratio, simply compare the ratio for top performers with the average. If the top performers’ ratio is twice as much, you’re fine. If it’s 10 times as much, adjust effort time downward.
- If you are trying to justify cost per case increases, use a manual audit. It’s more meaningful (hence accurate) than other methods and it does not distort behaviors the way other approaches can. Also: try to move the debate to cost per customer. It’s a much saner approach!
- For load balancing purposes, I would callously and completely ignore case complexity. Make sure the distribution of cases is as random as you can make it and close your ears to any appeals. Not worth your time. A case is a case if distribution is random enough. Compare more meaningful aspects of case resolution, such as customer satisfaction.
- If you want to demonstrate the value of support to customers, case complexity is probably the last argument to use! Customers don’t care about case complexity – and actually customers greatly value the benefits of no cases at all. They simply want to use the products successfully. Instead, invest in some proactive activities and collect testimonials.
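The effort-time sanity check suggested above (divide hours by cases closed, then compare top performers against the average) can be sketched as follows. All the figures here are made up for illustration; only the method comes from the text.

```python
# Hypothetical monthly figures: total hours available and cases closed,
# for the whole team and for its top performers.
hours_worked = {"team": 480.0, "top_performers": 160.0}
cases_closed = {"team": 240, "top_performers": 160}

# Effort time per case = hours / cases closed.
effort = {group: hours_worked[group] / cases_closed[group] for group in hours_worked}
team_avg = effort["team"]              # 2.0 hours per case
top = effort["top_performers"]         # 1.0 hour per case

# Per the rule of thumb in the text: a roughly 2x gap between the average
# and the top performers is fine; a 10x gap means the effort-time figure
# should be adjusted downward before it feeds the staffing model.
ratio = team_avg / top
print(f"average {team_avg:.1f} h/case, top {top:.1f} h/case, gap {ratio:.1f}x")
```

With these numbers the gap is 2x, so the effort-time figure would pass the validation.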
I hope to have convinced you that case complexity is rarely a worthwhile metric. And if you want to learn more about support metrics you can read Best Practices for Support Metrics.
Technology Spending
According to the latest TSIA technology survey (2010 edition), the top areas where members are planning to invest in the coming year, with about 30% of members planning to invest in each, are:
- Forums and communities
- Social service
- Intelligent search
(For comparison’s sake, only 12% are planning investments in incident-tracking tools, but 82% report having an incident-tracking system in place already – which seems really low, if you ask me. Who doesn’t have an incident-tracking system in place?)
Should you jump on communities? I’ve become a fan of communities after working with several successful implementations but I would highly recommend attending to search before tackling forums. Poor search performance is depressingly common, it is a big drag on internal productivity, and it kills self-service. Don’t be trendy: fix the foundation first.
FT Works in the News
Eric Eidson and I are collecting case studies and facilitating dialog amongst vendors who are using channel support. If you want to participate in the discussion, join our LinkedIn group for Channel Support.
Inside Technology Services published an article I wrote entitled Community Metrics: Why Page Views Fall Way Short in its 9/30/10 edition. You can find it here.
Call Center Insider published an article I wrote entitled Getting Customers to Love Self-Service in its October issue. You can find it here (it’s a reprint from July 2009).
And back in July, Inside Technology Services published an article I wrote entitled Auditing Support Tools for Long-Term Performance. You can find it here.
I also heard from several readers who wanted to find older articles that were no longer available on the web sites where they were originally published. Ask me! I usually keep copies of all the articles I write and, more importantly, I seem to be able to find them again!
Third Tuesday Forum
Are you based in the San Francisco area (or will you be there on Tuesday, November 16th)? That morning, David Kay and I will be hosting the Third Tuesday Forum, a roundtable for support executives to discuss the topics we embrace and wrestle with every day. The presenter will be Sallee Peterson from SupportSpace, who will speak about The Expert Solution: Support and the Beauty Salon Meme, on using a flexible, home-based workforce.
To register or for more details, click here. Space is strictly limited to ensure an interactive session.
If you cannot make it this time but would like to be on the mailing list, sign up. You will be the first to know about new events. You can also join the Third Tuesday Forum groups on LinkedIn and Facebook.
Curious about something? Send me your suggestions for topics and your name will appear in future newsletters. I’m thinking of doing a compilation of “tips and tricks about support metrics” in the coming months so if you have favorites, horror stories, or questions about metrics, please don’t be shy.
650 559 9826