Qual Power
Two Ways to Quantify User Experience

Posted By Lauren Isaacson, Tuesday, February 19, 2019
Updated: Friday, February 15, 2019

A friend of mine is a designer who has worked with various divisions of the government of Canada. She told me about working with one particular department. She would show them potential design improvements to existing websites based on qualitative usability tests and they would invariably come back with the question, "How do you know it's better?"

Indeed, how does one know for sure a new website is better than the existing version? As researchers, we know the answer — benchmarking data. However, what's the best way to benchmark the usability of a system? Two methods are commonly used by UX researchers:

  • System Usability Scale (SUS)
  • Single Ease Question (SEQ)

System Usability Scale (SUS)

SUS is the more widely used and better documented of the two options, with references in over 1,300 articles and publications. It's also free and applicable to pretty much any piece of technology. SUS consists of 10 questions, all using the same 5-point scale.

1 Strongly Disagree / 2 Disagree / 3 Neutral / 4 Agree / 5 Strongly Agree

  1. I think that I would use this system frequently.
  2. I found the system unnecessarily complex.
  3. I thought the system was easy to use.
  4. I think that I would need the support of a technical person to be able to use this system.
  5. I found the various functions in this system were well integrated.
  6. I thought there was too much inconsistency in this system.
  7. I would imagine that most people would learn to use this system very quickly.
  8. I found the system very cumbersome to use.
  9. I felt very confident using the system.
  10. I needed to learn a lot of things before I could get going with this system.

The numbering of the questions is essential for calculating the overall score. For each odd-numbered question, subtract 1 from the response; for each even-numbered question, subtract the response from 5. Summing these adjusted values leaves you with a total between 0 and 40, which is then multiplied by 2.5 to stretch the range to 0 to 100. This final number is a score and should not be confused with a percentage.
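The scoring procedure above is easy to get backwards, so here is a short sketch of it in Python (the function name is my own; only the arithmetic comes from the standard SUS scoring rules):

```python
def sus_score(responses):
    """Compute a SUS score from 10 responses on the 1-5 scale.

    Odd-numbered items (1, 3, 5, ...) are positively worded:
    their contribution is (response - 1).
    Even-numbered items are negatively worded:
    their contribution is (5 - response).
    The 0-40 total is multiplied by 2.5 to give a 0-100 score.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    if any(not 1 <= r <= 5 for r in responses):
        raise ValueError("Each response must be between 1 and 5")

    total = 0
    for item_number, response in enumerate(responses, start=1):
        if item_number % 2 == 1:  # odd-numbered, positive item
            total += response - 1
        else:                     # even-numbered, negative item
            total += 5 - response
    return total * 2.5


# A respondent who strongly agrees (5) with every positive item and
# strongly disagrees (1) with every negative item scores a perfect 100.
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```

Note that a respondent who answers Neutral (3) to everything lands at exactly 50, not at the average score of 68.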

Lucky for us, the good folks at Measuring U have analyzed the responses from 5,000 users evaluating 500 websites and have come up with a grading system to help interpret the scores:

  • ~85+ = A
  • ~75 - 84 = B
  • ~65 - 74 = C (68 is the average score)
  • ~55 - 64 = D
  • Under ~55 = F

If you would like a more official and accurate grading system, you can buy Measuring U's guide and calculator package.
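If you want to translate scores into grades automatically, the rough bands above can be expressed as a simple lookup (these are the approximate cut-offs quoted here, not the finer-grained curves in Measuring U's purchasable calculator):

```python
def sus_grade(score):
    """Map a 0-100 SUS score to an approximate letter grade,
    using the rough bands described above."""
    if score >= 85:
        return "A"
    if score >= 75:
        return "B"
    if score >= 65:
        return "C"
    if score >= 55:
        return "D"
    return "F"


print(sus_grade(68))  # "C" -- the average SUS score earns a C
```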

Single Ease Question (SEQ)

The other method is SEQ. Single Ease Question is less commonly utilized and has no documented standard wording, but it has the advantage of being much shorter than SUS. I am always in favor of making surveys shorter. SEQ consists of one question rated on a 7-point scale covering ease of completing a technology-enabled task. Like SUS, it is also free and applicable to almost any piece of technology.

  • Overall, how difficult or easy did you find this task?
    • Very easy
    • Easy
    • Somewhat easy
    • Neutral
    • Somewhat difficult
    • Difficult
    • Very difficult

Because there is no documented standard wording of the SEQ, you can tailor the question to cover the metric your stakeholders are most concerned about — confidence, speed, usefulness, etc. The SEQ also pairs very well with unmoderated usability tests often used by researchers who need quick feedback on interfaces.

Measuring U found the average scores across multiple websites to be about 5 (Somewhat easy), but this system is less documented than SUS. Therefore, use it to compare the before and after of a redesign, but not against other sites as you can do with SUS. If you're looking for more than just benchmarking data, you can also add two open-ended questions to the SEQ without risking excessive length.
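Because the SEQ is a single rating per task, the analysis is just an average you can compare against the ~5 baseline. A minimal sketch, using made-up ratings where 1 = Very difficult and 7 = Very easy:

```python
# Hypothetical results from an unmoderated usability test:
# one SEQ rating per respondent for the same task.
ratings = [6, 5, 7, 4, 5, 6, 5]

# The mean SEQ rating is compared against the ~5 ("Somewhat easy")
# average that Measuring U observed across websites.
average = sum(ratings) / len(ratings)
print(f"Average SEQ: {average:.2f}")
```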

  • What would make this website/form/app/system better?

Alternatively,

  • What is something you would fix on this website/form/app/system?

These voluntary open-ends give respondents the opportunity to offer their suggestions about what is wrong with the system and how they might make it better. It provides the potential to understand the “why” behind the data.

In the end, by fielding either of these UX survey question sets both before and after a system redesign launches, you will be able to tell your stakeholders whether the redesign is indeed an improvement over the old system, and by how much.

Lauren Isaacson

Lauren Isaacson is a UX and market research consultant living in Vancouver, British Columbia. Over her career she has consulted for various agencies and companies, such as Nissan/Infiniti, Microsoft, Blink UX, TELUS Digital, Applause, Mozilla, and more. You can reach her through her website, LinkedIn, and Twitter.

Tags:  data  QRCA Digest  qualitative research  user experience 

© 2019 Qualitative Research Consultants Association