What is adaptive design and why should I care?

Let’s start with a little bit of context.

Websites can be viewed on desktop and laptop computers (standard-size displays), tablets (smaller screens), and smartphones (the smallest screens). Unless the site has been carefully designed, the entire layout will simply be reduced in size on smaller screens, which can make it difficult to read page content, navigate, and scroll. To meet this challenge, two methods of designing and coding websites are gaining wide acceptance:

▪   Responsive design uses a single code base, cleverly designed to scale the website content according to the device’s screen size. Because there is a single code base and just one set of display templates, responsive design makes it simpler to create, test, and maintain the site, and it keeps the user experience consistent no matter the screen size.

▪   Adaptive design involves detailed consideration of each distinct screen size, often leading to the development of multiple display templates. The display code detects the device first, then uses that information to deliver the page content and navigation suited to that screen size, including both functionality and layout. For instance, you could allow smartphone users to check on case status but not download software updates, although both functions would be available from a desktop. (A quick sketch of this approach appears below.)
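
As an illustration of the adaptive approach, here is a minimal sketch in Python of server-side device detection. The template names, feature lists, and User-Agent keywords are hypothetical placeholders, not a recommendation for any particular framework. (Responsive design, by contrast, typically relies on CSS media queries rather than server-side detection.)

    # Minimal sketch of adaptive design: inspect the User-Agent header and
    # choose a display template and feature set for that class of device.
    # Template names and keywords below are hypothetical placeholders.
    MOBILE_HINTS = ("iphone", "android", "mobile")

    def select_experience(user_agent: str) -> dict:
        """Pick the display template and the functions to expose for this device."""
        is_mobile = any(hint in user_agent.lower() for hint in MOBILE_HINTS)
        if is_mobile:
            # Smartphones get a trimmed-down page: case status only.
            return {"template": "support_mobile.html",
                    "features": ["check_case_status"]}
        # Desktops and laptops get the full page, including software downloads.
        return {"template": "support_desktop.html",
                "features": ["check_case_status", "download_software_updates"]}

    print(select_experience("Mozilla/5.0 (iPhone; CPU iPhone OS 8_0 like Mac OS X)"))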

Which one is better? It depends! If your web metrics show that your mobile visitors are using the website in very different ways than desk-bound users, it makes sense to invest in adaptive design. (And you can create an adaptive design for an existing website without changing the existing code, which is handy for a massively complex site that you don’t want to disrupt.) On the other hand, if users want to perform similar tasks on all kinds of devices (or you have no web metrics showing that tasks differ by device), you are better off using responsive design techniques.

With more users than ever accessing your support website from mobile devices, it’s important to deliver a great experience to them. We can help you decide whether your site would benefit most from responsive design or adaptive design — and help you (re)design it appropriately.

Click here for a description of our support website design services. Click here for an article that describes responsive design in more detail (you will access a non-FT Works website).

Does your site use adaptive or responsive design? Please post a comment.

Freemium Strategies

This post is inspired in part by an article in the May 2014 issue of Harvard Business Review entitled “Making Freemium Work.”

Many support organizations support trials, and of course the goal of trials is to entice users to move from free to paid. Here are a few points for those of us in Customer Success organizations who are concerned about the customer lifecycle.

  • Beware of a very low conversion rate (the percentage of free users who upgrade to paid). It means most users are happy with the free version, so you are either giving away too much or doing a poor job of explaining the benefits of the paid features.
  • Beware of a very high conversion rate. Perhaps the free version is just not very compelling, so users have no choice but to move up. A high conversion rate may look good, but it may also mean that users will not be drawn to the free product in the long run.
  • Your mileage will vary. In a startup, conversion rates are typically higher because early adopters are more willing than others to adopt and pay for the service. Once the early adopters are taken care of, the conversion rate will drop. That spells trouble if the effort required to take care of unpaid users is high. Perhaps a time-limited trial is a better fit for you.
  • Free users can be profitable if they serve as references. Sure, they may not contribute directly to revenue, but their referrals might. Are you tracking the source of referrals?

Are you supporting a freemium product? Please share your experiences.

 

Do I get enough customer surveys back to be able to trust the results?

I hear this apparently simple question on a regular basis – and I thought I would invite my colleague Fred Van Bennekom, the guru of practical customer surveys, to share his wisdom and recommendations with us. Fred says:

“Everyone conducting a survey is concerned about response rates and the level of confidence they can place in the survey results, and in conference presentations I get asked many questions that show how misunderstood survey accuracy is. (Hint: don’t listen to what your marketing team says.) Survey accuracy does require some fundamental understanding of statistics. In my Survey Design Workshop, I spend considerable time on this topic with a fun exercise using M&Ms to explain “sampling error.”

Here’s an obvious statement: the more completed surveys you get, the greater the confidence. Unfortunately, the required sample size is not just a simple percentage. Statistical accuracy is determined by four factors:

  • Size of the population. The population is the “group of interest” for the survey. (For instance, all the customers who created a case in February 2015.)
  • Segmentation analysis desired. Typically, we analyze survey data along some demographic segmentation. (For instance, if you analyze the data by support rep, you need enough responses for each rep.)
  • Degree of variance in responses from the population. This factor is the hardest to understand. If responses tend to be tightly clustered, we don’t need to sample as many people for the same confidence as we would if the responses ranged widely. To be safe, we use the worst-case variance in the calculations below.
  • Tolerance for error. How accurate do you need the results to be? If you’re going to make multi-million dollar business decisions, then you probably have less tolerance for error.

The sample size equations here are a bit daunting. (Check your statistics book.)  I created this chart to make this more understandable.

[Chart: survey accuracy. Horizontal axis: population size. Vertical axis: percentage of the population responding. Curves: levels of accuracy at 95% certainty.]

The horizontal axis shows the population. The vertical axis shows the percentage of the population from whom we have a response. (This is not the response rate. The response rate is the percentage of those receiving an invitation who respond. Note the critical distinction if you do not send surveys to all customers.)

The chart shows seven lines or curves that depict seven levels of accuracy. The horizontal line at the top shows that, if we perform a census and everyone responds, then we are 100% certain that we are 100% accurate.  Of course, that will likely never happen.

Before I explain how to interpret the curves, let’s bring out a couple of points from the chart. First, as the percentage responding increases, the accuracy increases. No surprise there. Second, as the size of the population grows, the percentage responding needed for the same level of accuracy decreases. Conversely, when we have a small population, we have to talk to a larger percentage of the population for reasonable accuracy.

Now let’s interpret those curves. Each curve shows 95% certainty of some range of accuracy. The 95% is chosen by convention. Let’s focus on the accuracy part of the statement.

Say you have a population of 1000, and you sent invitations to 500 people. Half of those responded, so 25% of the population responded. Find the intersection of 1000 on the horizontal axis and 25% on the vertical axis. You would be approximately 95% certain of +/-5% accuracy in your survey results. If we conducted this survey 20 times, then 19 out of 20 times (95%) we would expect the mean score to lie within +/-5% of the mean score we found.

Conversely, if we have an accuracy goal for the survey project, we can use this chart to determine the number of responses needed. Say we have a population of 500 and we want an accuracy of +/-10%. Then we would need about 18% of the population to respond, or 90 responses. (Find those coordinates on the chart.)
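
As an illustration, here is a minimal sketch in Python of the standard finite-population margin-of-error formula, using the worst-case variance (p = 0.5) and 95% certainty (z = 1.96). It approximately reproduces the two worked examples above; it is offered as an approximation of the chart, not as the exact formula behind it.

    import math

    def margin_of_error(population, responses, z=1.96, p=0.5):
        """95% margin of error with worst-case variance and a finite-population correction."""
        standard_error = math.sqrt(p * (1 - p) / responses)
        fpc = math.sqrt((population - responses) / (population - 1))
        return z * standard_error * fpc

    # Population of 1,000 with 250 responses (25% of the population):
    print(round(margin_of_error(1000, 250) * 100, 1))  # about 5.4, close to the +/-5% read off the chart

    # Population of 500 with 90 responses (18% of the population):
    print(round(margin_of_error(500, 90) * 100, 1))    # about 9.4, close to the +/-10% target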

When we actually conduct our survey and analyze the results, we will then know something about the variance in the responses. The confidence statistic incorporates that variance and can be calculated for each survey question. It tells us the size of the band or interval in which the population mean most likely lies – with 95% certainty.
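
Once responses are in, that per-question interval can be computed from the observed variance. Here is a minimal sketch using a common textbook formula (observed standard deviation with a finite-population correction) and hypothetical 1-to-5 satisfaction scores:

    import math
    import statistics

    def confidence_interval(responses, population, z=1.96):
        """95% confidence interval for the mean of one survey question,
        based on the observed variance and a finite-population correction."""
        n = len(responses)
        mean = statistics.mean(responses)
        stdev = statistics.stdev(responses)  # observed (sample) standard deviation
        fpc = math.sqrt((population - n) / (population - 1))
        half_width = z * (stdev / math.sqrt(n)) * fpc
        return (mean - half_width, mean + half_width)

    # Hypothetical scores from 8 respondents out of a population of 200:
    scores = [4, 5, 3, 4, 4, 5, 2, 4]
    print(confidence_interval(scores, population=200))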

For a more extended discussion of this topic, please go to: http://www.greatbrook.com/survey_statistical_confidence.htm

And if you’d like to request an Excel response rate calculator, use our request form: https://ww03.elbowspace.com/servlets/cfd?xr4=&formts=2006-11-10%2011:40:01.296001

 

Thank you, Fred!

As a reminder, Fred is offering a $200 discount on his upcoming workshop in San Francisco on February 24-26. Sign up now here (use the discount code FT Word)! You can find more information about the workshop here.

 

 

The FT Word – February 2015

The FT Word

The FT Word is a free monthly newsletter with support management tips. To subscribe, click here. The subscription list is absolutely confidential; we never sell, rent, or give information about our subscribers.

Welcome

to the February 2015 edition of the FT Word. Topics for this month:

  • What is adaptive design and why should I care?
  • Freemium strategies
  • Do I get enough customer surveys back to be able to trust the results?

FT Works in the News

Introducing Smarter Service Cloud Implementations 
For years,  FT Works has helped customers implement Salesforce as a support-tracking tool. With the addition of Salesforce-certified resources, we are launching Smarter Service Cloud Implementations, a service to help you implement Service Cloud or improve an existing implementation.

To every implementation, we bring our deep knowledge of the support industry, something not every Salesforce implementer can provide — in addition to certified Salesforce resources. This means your implementation will be effective, not just efficient.

You can find more information here, or contact me to discuss your specific requirements.

Grab the last hard copies of Collective Wisdom 

Only about 100 copies are left. Grab one now. (Ebooks will be available forever!)

A New Book (Chapter)

How Companies Succeed in Social Business: Case Studies and Lessons from Adobe, Cisco, Unisys, and 18 More Brands is now available in ebook and hard copy formats. I wrote chapter 16.

Sign up for the ASP 10 Best Support Website Awards

Get credit for your great website — or receive an evaluation from The Association of Support Professionals of how it needs to improve. I will be a judge again this year. Go here for more details.

Curious about something? Send me your suggestions for topics — or add one in the comments — and your name will appear in future newsletters.

Regards,
Françoise Tourniaire
FT Works
www.ftworks.com
650 559 9826

About FT Works

FT Works helps technology companies create and improve their support operations. Areas of expertise include designing support offerings, creating hiring plans to recruit the right people quickly, training support staff to deliver effective support, defining and implementing support processes, selecting support tools, designing effective metrics, and support center audits. See more details at www.ftworks.com.

Subscription Information

To request a subscription, click here. The mailing list is confidential and is never shared with anyone, for any reason. To unsubscribe, click here.
