There are many things to measure in the customer experience, and plenty of ways to gather the data. Jeff Sauro has compiled a list that covers most of the online and offline customer experience, so you can choose the methods that best suit your needs.
Customer satisfaction: Use a Likert scale to survey your users at key points. Measure overall customer satisfaction as well as lower-level satisfaction with attributes such as quality, speed, cost, and functionality.
Brand attitude: A branding survey measures affinity, association, and recall.
Brand lift: Measure attitudes before and after participants are exposed to a stimulus.
Customer lifetime value: All customers are different. Measure the revenue, frequency, and duration of purchases per customer, then subtract the acquisition and maintenance costs for that customer.
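The arithmetic above can be sketched as a small function. This is a minimal, undiscounted version with hypothetical parameter names and figures, not a full CLV model (real models often discount future revenue and account for churn probabilities):

```python
def customer_lifetime_value(avg_purchase_value, purchases_per_year, years_retained,
                            acquisition_cost, annual_maintenance_cost):
    """Undiscounted CLV sketch: total revenue minus acquisition and
    maintenance costs over the customer's expected lifetime."""
    revenue = avg_purchase_value * purchases_per_year * years_retained
    costs = acquisition_cost + annual_maintenance_cost * years_retained
    return revenue - costs

# Hypothetical customer: $50 purchases, 4 per year, retained 3 years,
# $60 to acquire, $10 per year to maintain.
print(customer_lifetime_value(50, 4, 3, 60, 10))  # 600 - 90 = 510
```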
Customer expectations: Ask about expectations qualitatively in a usability study or quantitatively in a survey. It’s important to have one independent group rate expectations and another group rate the experience (as customers may be affected by the memory of their expectation ratings).
The things customers do the most: Conduct a top-tasks analysis by having a qualified sample of customers pick their top five features of a website or application.
What delights customers: Consider the Kano Method: ask customers how they’d feel if a feature were included and how they’d feel if it weren’t.
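A respondent's pair of answers (feature present vs. feature absent) is typically mapped to a Kano category with the classic evaluation table. A minimal sketch of that lookup, assuming answers on the standard five-point scale:

```python
def kano_category(functional, dysfunctional):
    """Classify one respondent's answer pair using the classic Kano
    evaluation table. Answers are coded 1=Like, 2=Must-be (expect it),
    3=Neutral, 4=Can live with it, 5=Dislike.
    `functional` = answer when the feature IS included,
    `dysfunctional` = answer when it is NOT included."""
    if functional == dysfunctional and functional in (1, 5):
        return "Questionable"      # contradictory answers
    if functional == 1:
        # Liked when present: delighter, or linear "more is better".
        return "One-dimensional" if dysfunctional == 5 else "Attractive"
    if functional == 5 or dysfunctional == 1:
        return "Reverse"           # presence is actually disliked
    if dysfunctional == 5:
        return "Must-be"           # absence is disliked: expected feature
    return "Indifferent"

# Likes it present (1), dislikes it absent (5): a one-dimensional feature.
print(kano_category(1, 5))  # One-dimensional
# Likes it present (1), neutral if absent (3): a delighter.
print(kano_category(1, 3))  # Attractive
```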
What features are most important: Carry out a key-drivers analysis after surveying customers on key features and emotional aspects of a product or experience. You should have one variable you want to optimize.
Value of a feature: Conjoint analysis helps you see the value of each feature and find the ideal combination of features.
What price to charge: A choice-based conjoint analysis allows you to understand the tradeoff between price and features.
Response time: Call hold times, website loading times, and delivery times are some of the factors that play a role in satisfaction and loyalty. Automate the data collection or systematically sample transactions.
Technology acceptance and usefulness: The 20-item Technology Acceptance Model (TAM) questionnaire will show you whether customers or users find an application’s features and experience both usable and acceptable.
Where website visitors click first: Use a first-click test or run a tree test. If customers’ first click on a website is the right one, they are around nine times more likely to find the right information!
If your users notice design elements: An eye-tracking study shows where participants’ eyes go. Watch whether people react appropriately, and follow up by asking whether they noticed the elements.
Comprehension: Use a mix of recall and recognition questions after having participants view images or videos or read copy.
Measuring recall: Ask some participants to list features, brands, companies, names and so on using an open text box in a survey. Recall suggests stronger memory than recognition.
Measuring recognition: List a set and have customers pick what they recognize for brands or products (include distractors). Recognition suggests less salience than recall.
Icons: Test icons both in context and out of context with qualified participants.
What terms to use: Ask your customers. Use an open card sort, or just open text fields in a survey.
Ease of use: Carry out a usability test. It’s the most effective way to reveal the most obvious issues.
Efficiency: Measure the time users need to complete tasks in a usability study. You can also use Keystroke Level Modeling (KLM) to calculate skilled error-free task times using screenshots.
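A KLM estimate is just a sum of standard operator times over the sequence of actions a skilled user performs. A minimal sketch using the classic Card, Moran & Newell operator values (these are approximate averages; practitioners adjust them, e.g. keystroke time varies with typing speed):

```python
# Approximate KLM operator times in seconds (Card, Moran & Newell):
# K = keystroke, P = point with mouse, B = mouse button press/release,
# H = home hands between keyboard and mouse, M = mental preparation.
KLM_TIMES = {"K": 0.2, "P": 1.1, "B": 0.1, "H": 0.4, "M": 1.35}

def klm_estimate(operators):
    """Sum operator times to estimate a skilled, error-free task time."""
    return sum(KLM_TIMES[op] for op in operators)

# Example: think (M), point at a field (P), click (B), move hands to
# the keyboard (H), then type a 5-character word (KKKKK).
print(round(klm_estimate("MPBH" + "K" * 5), 2))  # 3.95 seconds
```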
Task difficulty: Ask how difficult customers find a task immediately after they attempt it using the Single Ease Question (SEQ).
Overall system ease: To assess the overall impression of a product’s usability, administer the System Usability Scale (SUS) in a survey or immediately after a usability test.
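SUS has a standard scoring rule: odd-numbered (positively worded) items contribute their rating minus 1, even-numbered (negatively worded) items contribute 5 minus their rating, and the sum is multiplied by 2.5 to land on a 0-100 scale. A minimal sketch for scoring one respondent:

```python
def sus_score(responses):
    """Score one SUS questionnaire: `responses` is the respondent's
    ten 1-5 ratings, in item order. Odd items (index 0, 2, ...) are
    positively worded; even items are negatively worded."""
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5  # scale the 0-40 raw sum to 0-100

# A "perfect" respondent: 5s on positive items, 1s on negative items.
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
# All-neutral answers land exactly in the middle.
print(sus_score([3] * 10))  # 50.0
```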
Website quality: Administer the SUPR-Q after a usability test to measure perceived usability, loyalty, trust, and appearance.
How your website compares to the competition: Run a competitive usability benchmark test with additional questions about loyalty and brand attributes.
Improvement in conversion rates: Run an A/B test and compare the proportions converting in each variant for statistical significance.
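One standard way to test whether two conversion rates differ significantly is the pooled two-proportion z-test. A minimal sketch with hypothetical counts (a full analysis would also plan the sample size in advance and report a confidence interval on the difference):

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided pooled two-proportion z-test.
    conv_* = number of conversions, n_* = number of visitors.
    Returns (z statistic, p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal CDF via erf; p-value is twice the upper-tail area.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical test: 100/1000 convert on variant A vs 130/1000 on B.
z, p = two_proportion_z(100, 1000, 130, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p < 0.05: a significant lift
```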
Search effectiveness: Search engines are both the first and last resort for customers finding things on a website. Test the accuracy of the search results and the clarity of the Search Engine Results Page (SERP) using a targeted usability test.
Reliability of your methods: Use a combination of reliability metrics to see how consistent your data is: inter-rater, test-retest, parallel forms, and internal consistency.
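Internal consistency is commonly reported as Cronbach's alpha, computed from the item variances and the variance of respondents' total scores. A minimal sketch with hypothetical questionnaire data, using sample variance throughout:

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency.
    `items` is a list of item-score lists, one list per questionnaire
    item, each holding one score per respondent (same order throughout)."""
    k = len(items)
    sum_item_vars = sum(variance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    return k / (k - 1) * (1 - sum_item_vars / variance(totals))

# Hypothetical 3-item scale answered by 5 respondents (1-5 ratings).
data = [
    [4, 5, 3, 5, 4],
    [4, 4, 3, 5, 3],
    [5, 5, 2, 5, 4],
]
print(round(cronbach_alpha(data), 2))  # around 0.9: high consistency
```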