What is the ROI of our self-service options?

Many thanks to Steve LaRoche and Tony Long for suggesting this topic — and thank you for your patience as I slowly pushed this post to publication!

ROI questions often arise at the start of an initiative, to justify acquisition costs, but they are even more interesting when considered once the initiative is well under way. Now how can we possibly capture the ROI of self-service, be it for knowledge management or social support (forum)?

Idea #1: “Value” is not just monetary.

This may be profoundly unsatisfying to those who believe that hard data is the one and only way to manage a business, but it’s true nevertheless. Could you simply skip offering customers any self-service options? I don’t think so: every support website needs some self-service offering. What is the value of monitoring Twitter so as to swoop in when a problem is reported, before it spreads? Priceless, as the old MasterCard commercial used to say.

Some benefits can and should be quantified in monetary terms, but others cannot. Typical benefits of self-service include: lower case volume, internal productivity improvements (because the support engineers themselves use the knowledge base, or even the forums, to find answers), increased sales, increased customer satisfaction, increased employee satisfaction, and better analytics. Of those, customer and employee satisfaction are essentially unquantifiable in monetary terms, and better analytics may not be, either. Don’t try to fit square pegs into round holes: measure what can be measured, and present the rest as important, perhaps essential, but not financially quantifiable.

Idea #2: Measuring what does not happen is an exercise in uncertainty

The main return from an investment in self-service is usually a decrease in traditional, assisted support volume. It makes sense: customer seeks answers on the website; customer finds an answer; customer who would have asked for help forgoes expensive one-on-one assistance. But how do we capture the fact that the request for help did not happen?

  • Counting each and every website session as a “deflected” case (what a horrid word!) is obviously overkill. Customers may be trawling for information for which they would not log a case.
  • Using an arbitrary ratio to establish the number of “useful” sessions makes no sense either. Who is to say that every 7th session replaces a case, or every 6th?
  • Try a top-down approach instead. If customers used to log 2 cases per month, on average, before the new knowledge base, and now log 1.5, you are saving 0.5 cases per customer per month (which would be high: this is just an example). Of course, other factors could explain the drop: the product may have gotten much less buggy, or customers may have concluded that service is so bad it’s not worth asking! If you can correlate customers’ usage of the knowledge base with a decrease in their case volume, your justification will be stronger.
  • A very clean demonstration of deflection would be to present customers with promising knowledge base articles as they log a case. If they then abandon the case, it’s an excellent sign that you just deflected it — but the approach seriously undercounts other deflection events (and may well annoy customers!).
  • An alternative is to ask customers whether they found what they wanted when they visit the website, via a popup for instance. If “yes”, count the visit as a success (a deflection). You only need to survey a small percentage of visitors to calculate your deflection ratio; a sketch of the arithmetic follows right after this list.
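To make the survey-based approach concrete, here is a minimal sketch of the arithmetic in Python, using entirely made-up numbers; the session counts, sample size, and cost per assisted case are hypothetical placeholders you would replace with your own data.

    # Illustrative only: extrapolate a deflection ratio from a small popup sample.
    # Every number below is made up.
    monthly_self_service_sessions = 10_000  # knowledge-base sessions this month
    surveyed_sessions = 800                 # visitors who were shown the popup
    answered_yes = 360                      # "yes, I found what I was looking for"

    # Deflection ratio: share of visits that ended with a found answer.
    deflection_ratio = answered_yes / surveyed_sessions

    # Per the approach above, count each successful visit as a deflection.
    estimated_deflected_cases = monthly_self_service_sessions * deflection_ratio

    # Translate into savings with a fully loaded cost per assisted case (hypothetical).
    cost_per_assisted_case = 25.00
    estimated_monthly_savings = estimated_deflected_cases * cost_per_assisted_case

    print(f"Deflection ratio: {deflection_ratio:.0%}")
    print(f"Estimated deflected cases per month: {estimated_deflected_cases:,.0f}")
    print(f"Estimated monthly savings: ${estimated_monthly_savings:,.0f}")

The structure of the calculation stays the same if you refine the sample, for instance by surveying only visitors who actually searched or opened a solution.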

Idea #3: It’s easier (but not easy) to measure internal productivity

The support engineers use the knowledge base to provide answers faster (and better answers, to boot), and they may well mine the forums for answers. That should be reflected in their productivity metrics. Even small improvements are notable. Remember that, if the self-service options are taking care of the easier questions, the support engineers’ productivity may go down as they work on harder issues (but fewer of them!), so consider productivity in terms of customers, not cases.

Note that measuring links between articles and cases cannot yield a financial justification — and may well lead the support engineers to create nonsensical links. However, you can cross-reference productivity and link percentage to see whether the most productive engineers do use the knowledge base more.
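As a rough illustration of both points, measuring productivity per customer rather than per case and cross-referencing it with knowledge base usage, here is a minimal sketch with invented engineer data; the names, figures, and the 50% link-rate cutoff are assumptions chosen purely for illustration.

    from statistics import mean

    # Illustrative only: each tuple is (engineer, cases closed this quarter,
    # distinct customers served, share of cases with a knowledge base article linked).
    engineers = [
        ("Amara", 120, 48, 0.72),
        ("Bo",     95, 52, 0.65),
        ("Chen",  150, 35, 0.20),
        ("Dana",  110, 30, 0.15),
    ]

    # Productivity per customer, not per case: an engineer left with the harder
    # cases may close fewer of them yet still serve just as many customers.
    for name, cases, customers, link_rate in engineers:
        print(f"{name}: {customers} customers served ({cases} cases), "
              f"KB link rate {link_rate:.0%}")

    # Crude cross-reference: do the heavier knowledge base users serve more customers?
    high_kb = [customers for _, _, customers, rate in engineers if rate >= 0.5]
    low_kb  = [customers for _, _, customers, rate in engineers if rate < 0.5]
    print(f"Average customers served, high KB usage: {mean(high_kb):.0f}")
    print(f"Average customers served, low KB usage:  {mean(low_kb):.0f}")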

Idea #4: Small benefits are not worth measuring

Case deflection and increased internal productivity are almost always the main drivers of savings, by far, and it does not make sense to invest much time or effort measuring the much smaller benefits such as increased sales. I once worked with a cell phone provider that demonstrated significant incremental sales stemming from forum recommendations, so your mileage may vary, but generally speaking, focus on the intuitively larger benefits and ignore the others.


And don’t forget to add up the costs. They go well beyond the technology costs — but virtually all well-managed self-service offerings create large positive returns.

Change Management vs. Training

Over the years, we’ve had the opportunity to work with several vendors who were adding support channels (adding phone to an existing email channel, introducing chat to replace phone, or adding an in-person option for internal customers) and wanted to organize soft skills workshops for the support team. The workshops often function as the main forum for discussing concerns and process issues about the new channel — so I thought it would be interesting to explore the pros and cons of training as change management.

Let’s start with the pros.

  • When rolling out new channels, a boost in soft skills is welcome. Support engineers who are comfortable carefully reviewing written messages before composing thoughtful responses often find it very stressful to have to pick up the phone, both because they feel tied to their desk and because they worry about saying the wrong thing. Training helps! On the other hand, adept phone reps may need to upgrade their writing skills, by a lot.
  • With the new skills comes comfort. During soft skills workshops, we see how relieved support engineers become once they realize that they can (and should) get off the phone to do research, gracefully and productively. A comfortable individual is more likely to embrace change.
  • While training should not serve as the only QA tool for the new process and setup, it is a great QA tool, both during the development of the curriculum and during delivery. A couple dozen support technicians practicing how to welcome eager face-to-face “customers” will quickly realize that the sign-up tablet is too small or that waiting customers need a place to wait, away from the working techs. These issues could have been caught in a standard Q&A session, but (realistic) training makes them obvious.
  • Offering training signals that the change is important and not a fleeting “flavor of the month” fad. For the reps, it means that the initiative is serious, and that they are taken seriously.

That said, using training as the sole instrument for change management is problematic.

  • It can be seen as a punishment. Soft skills training is often forced down everyone’s throat during a rollout, regardless of individual strengths or background. This creates serious resentment, often abundantly expressed to the instructor and the hapless fellow attendees, and, worse, it lingers through the rollout. Training everyone helps develop a common terminology and approach, but consider offering more skilled individuals an accelerated curriculum, or setting them up as mentors to others.
  • It cannot substitute for carefully defining goals, processes, and metrics. All the soft skills training in the world cannot make up for deficiencies in these areas. Are you adding chat to cut costs, to meet customers’ demands, or because it’s cool? Are you expecting reps to multitask between chat and phone (hint: bad idea!)? Are you thinking of imposing time limits on chat sessions (hint: see the last hint)? You need to work all that out before the soft skills workshop.
  • It will not fully address concerns about the change. Yes, a good soft skills facilitator can and will calm the team’s fears about handling multiple concurrent chat sessions or getting off the phone when needed — but s/he can do little to address the anxiety about how the new approach will affect individual customer satisfaction ratings or productivity metrics. The managers need to hear the concerns, discuss them openly, and find good solutions to address, if nothing else, the likely short-term dip in performance.

Bottom line: by all means, include soft skills training when rolling out new channels or making other process changes, but wrap it into a good layer of specific change management strategies.

The FT Word – August 2015

The FT Word

The FT Word is a free monthly newsletter with support management tips. To subscribe, click here. The subscription list is absolutely confidential; we never sell, rent, or give information about our subscribers.

Welcome

to the August 2015 edition of the FT Word. Topics for this month:

FT Works in the News

Last call for reviewers for the Art of Support, Second Edition. Thank you to all of you who volunteered. I’m delighted that we have a solid group of reviewers, and I’m now specifically looking for someone with experience in SaaS support to round out the team. If you’d like to participate, please contact me! [I have a full team of reviewers now, thank you!]


Curious about something? Send me your suggestions for topics — or add one in the comments — and your name will appear in future newsletters.

Regards,
Françoise Tourniaire
FT Works
www.ftworks.com
650 559 9826

About FT Works

FT Works helps technology companies create and improve their support operations. Areas of expertise include designing support offerings, creating hiring plans to recruit the right people quickly, training support staff to deliver effective support, defining and implementing support processes, selecting support tools, designing effective metrics, and support center audits. See more details at www.ftworks.com.

Subscription Information

To request a subscription, click here. The mailing list is confidential and is never shared with anyone, for any reason. To unsubscribe, click here.


Metrics for Knowledge Management – Q&A

On July 8th, Melissa Burch of Irrevo invited me to speak about metrics for knowledge management to a webinar audience. We were lucky to get many great questions, so I thought I would share some of the answers with you, blog readers. Thank you to Jen Diaz for managing the Q&A and for giving me permission to cross-post the answers.

Q: How do you measure case deflection as a result of knowledge?

FT: I wrote a book on this! [Collective Wisdom, co-authored with David Kay] Seriously, it’s a very difficult topic. Depending on the tool you are using, you may be able to present possible solutions to users as they are logging cases. If so, you can measure the percentage of cases started but not logged. Voilà! But note that some users, maybe many, may have found solutions and gone away happy without ever starting to log a case.
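If your tool can surface candidate articles while the case is being logged, the arithmetic is a simple funnel; here is a minimal sketch with invented event counts.

    # Illustrative only: compare case-logging sessions started vs. cases submitted.
    # All counts are invented.
    cases_started   = 8_000   # customers who began the case-logging form
    cases_submitted = 6_200   # cases actually logged
    abandoned_after_suggestions = cases_started - cases_submitted

    deflection_rate = abandoned_after_suggestions / cases_started
    print(f"Cases abandoned after seeing suggested articles: {abandoned_after_suggestions:,}")
    print(f"Apparent deflection rate: {deflection_rate:.0%}")
    # Caveat from above: this undercounts customers who found an answer
    # without ever starting to log a case.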

Otherwise, you need a method for measuring what’s not happening, which is very difficult. I like to simply measure the incident rate, that is, the volume of cases per customer (or per seat, per license, whatever denominator helps you capture the size of the customer base). If the incident rate goes down while you are improving the knowledge base, that’s a positive result. Of course, the incident rate depends on many other factors, most notably product quality… If you have multiple product lines, you can check them against each other to control for those other factors.
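Here is a minimal sketch of that incident-rate comparison, again with invented numbers: one product line got the improved knowledge base, the other did not and serves as a rough control for factors such as product quality.

    # Illustrative only: incident rate = cases per seat per month. All numbers invented.
    # Each period maps to (cases logged, seats in the installed base).
    products = {
        "Product A (new knowledge base)": {"before": (5_200, 10_000), "after": (4_100, 10_400)},
        "Product B (control)":            {"before": (3_000,  6_000), "after": (2_950,  6_100)},
    }

    for name, periods in products.items():
        rates = {p: cases / seats for p, (cases, seats) in periods.items()}
        change = (rates["after"] - rates["before"]) / rates["before"]
        print(f"{name}: {rates['before']:.2f} -> {rates['after']:.2f} "
              f"cases/seat/month ({change:+.0%})")

If Product A’s incident rate drops much more than Product B’s over the same period, the knowledge base is a plausible driver of the difference.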

Q: How do you measure quality when the customer needs to go and do some work and only then determine whether the solution worked? They are unlikely to come back and score the item.

FT: The quality of an individual solution is best determined by (1) feedback on the solution itself and (2) reuse during case resolution. The vast majority of customers will not bother rating solutions at all, so be sure to use whatever feedback is given: if one person complains about a solution, chances are that dozens of others also had a problem.

Q: What’s your recommendation for the number of case evaluations per analyst and who should do them?

FT: [I recommend conducting regular case audits on a small number of cases, checking, among other things, whether the proper knowledge management steps were taken on the case.] For established analysts, a couple per quarter, randomly chosen, should suffice, assuming that the outcome is positive. (If not, review more to determine whether there is a real issue, or whether you just happened to pick problematic cases.) For new hires, or anyone with performance issues, you should review more, maybe all of them for brand-new hires.

As to who should do the case evaluations, I’m a strong proponent of having the analyst’s manager perform them. It’s best to have the same person perform the evaluations, deliver feedback, and manage the performance. That being said, with very technical products it’s often helpful to enlist the help of a senior technical resource who would be better able to assess the quality of the troubleshooting process.

Q: Aren’t case reviews lagging indicators, since they happen after the case is closed? How can they be leading?

FT: Case reviews are often conducted on closed cases, in which case they do, indeed, come after the fact. But they can also be conducted on cases that are still open. Also, not every customer will return a customer satisfaction survey, so the case quality review can be considered a leading indicator of quality, suggesting what customers might say in the future about cases closed by that same individual. It’s not always easy to cleanly distinguish between leading and lagging indicators.

Q: I manage a doc/user-assistance team for a brand within a multinational software company. We don’t have any metrics about user interactions. Where do we start? Is there a good set of books or papers that would give us some metrics we can start managing with?

FT: If you have no metrics at all, that’s great because you have no bad metrics. I would suggest starting from the balanced scorecard approach. There are a lot of ways you can look at metrics. I have some books available that talk about metrics [the second edition of The Art of Support will contain expanded coverage of metrics], and you can also read my blog.

The main thing about metrics is to reconcile the theory of what you should measure with the reality of what you can measure. Start small. Start with metrics that are meaningful. If you can measure satisfaction at all, that’s a good start. Start with the ideal, then adjust to what you can actually do.

Melissa: In addition to FT’s book, I’d add another book to read: How to Measure Anything: Finding the Value of Intangibles in Business by Douglas Hubbard. The beginning of the book is very inspirational and makes you think differently about measuring intangible value.

Q: I’ve talked to numerous support groups who say their corporate culture does not embrace knowledge sharing. How successful can support be in creating a good knowledge sharing culture when executives may not embrace it?

Melissa: Without executive support, you will still be able to make some progress in encouraging the capture of knowledge because, in general, most support agents want to help each other and their customers. It’s a much smaller return than you’d see with executive sponsorship, but some participation will occur.

FT: I agree with Melissa. What I’d encourage you to do is practice active knowledge sharing, preferably using KCS, within the support organization. People in support are usually well-disposed to knowledge sharing. It’s not easy, but they understand that it’s important to share knowledge.

The important thing to avoid is being the tail wagging the dog. Start with what you can control within your support group, and hopefully it will spread. Lead by example, but don’t try to transform the entire organization. I have several clients that have tried to do that and three years later they’re still trying to get started because not everyone agrees yet. If they had started where they could, they’d have a system that works for them, and they might have inspired others. Start where you are and then inspire others.


If you missed the live broadcast, you can watch the recorded webinar.

And if you have questions of your own, please add them in a comment and I will respond.