Qual Power

Two Ways to Quantify User Experience

Posted By Lauren Isaacson, Tuesday, February 19, 2019
Updated: Friday, February 15, 2019


A friend of mine is a designer who has worked with various divisions of the government of Canada. She told me about working with one particular department. She would show them potential design improvements to existing websites based on qualitative usability tests and they would invariably come back with the question, "How do you know it's better?"

Indeed, how does one know for sure a new website is better than the existing version? As researchers, we know the answer — benchmarking data. However, what's the best way to benchmark the usability of a system? Two methods are commonly used by UX researchers:

  • System Usability Scale (SUS)
  • Single Ease Question (SEQ)

System Usability Scale (SUS)

SUS is the more widely used and documented of the two options, with references in over 1,300 articles and publications. It's also free and applicable to pretty much any piece of technology. SUS consists of 10 questions, all using the same 5-point scale.

1 Strongly Disagree / 2 Disagree / 3 Neutral / 4 Agree / 5 Strongly Agree

  1. I think that I would like to use this system frequently.
  2. I found the system unnecessarily complex.
  3. I thought the system was easy to use.
  4. I think that I would need the support of a technical person to be able to use this system.
  5. I found the various functions in this system were well integrated.
  6. I thought there was too much inconsistency in this system.
  7. I would imagine that most people would learn to use this system very quickly.
  8. I found the system very cumbersome to use.
  9. I felt very confident using the system.
  10. I needed to learn a lot of things before I could get going with this system.

The numbering of the questions is essential for calculating the overall score. For each odd-numbered question, subtract 1 from the response; for each even-numbered question, subtract the response from 5. Summing these adjusted values leaves you with a raw score between 0 and 40, which is then multiplied by 2.5 to stretch the range to 0 to 100. This final number is a score and should not be confused with a percentage.
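
As a concrete illustration, here is a minimal Python sketch of that calculation (the function name and input format are my own, not part of the SUS specification):

```python
def sus_score(responses):
    """Compute a SUS score from 10 responses on the 1-5 scale,
    where 1 = Strongly Disagree and 5 = Strongly Agree.
    `responses` is ordered: responses[0] answers question 1, and so on."""
    total = 0
    for i, r in enumerate(responses, start=1):
        if i % 2 == 1:        # odd-numbered (positively worded) items
            total += r - 1
        else:                 # even-numbered (negatively worded) items
            total += 5 - r
    return total * 2.5        # stretch the 0-40 raw sum to 0-100

# A respondent who strongly agrees with every positive item and strongly
# disagrees with every negative one earns a perfect score:
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```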

Lucky for us, the good folks at Measuring U have analyzed the responses from 5,000 users evaluating 500 websites and have come up with a grading system to help interpret the scores:

  • ~85+ = A
  • ~75 - 84 = B
  • ~65 - 74 = C, 68 is the average score
  • ~55 - 64 = D
  • ~45 or under = F
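
Translated into code, the approximate bands above become a simple lookup. Treat the thresholds as ballpark figures; Measuring U's actual curved grading scale uses slightly different cut-offs:

```python
def sus_grade(score):
    """Map a 0-100 SUS score to an approximate letter grade."""
    if score >= 85:
        return "A"
    if score >= 75:
        return "B"
    if score >= 65:
        return "C"  # 68 is the average score
    if score >= 55:
        return "D"
    return "F"

print(sus_grade(68))  # "C" - right at the average
```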

If you would like a more official and accurate grading system, you can buy Measuring U's guide and calculator package.

Single Ease Question (SEQ)

The other method is the Single Ease Question (SEQ). It is less commonly used and has no documented standard wording, but it has the advantage of being much shorter than SUS, and I am always in favor of making surveys shorter. SEQ consists of one question, rated on a 7-point scale, covering the ease of completing a technology-enabled task. Like SUS, it is free and applicable to almost any piece of technology.

  • Overall, how difficult or easy did you find this task?
    • Very difficult
    • Difficult
    • Somewhat difficult
    • Neutral
    • Somewhat easy
    • Easy
    • Very easy

Because there is no documented standard wording of the SEQ, you can tailor the question to cover the metric your stakeholders are most concerned about — confidence, speed, usefulness, etc. The SEQ also pairs very well with unmoderated usability tests often used by researchers who need quick feedback on interfaces.

Measuring U found the average scores across multiple websites to be about 5 (Somewhat easy), but this system is less documented than SUS. Therefore, use it to compare the before and after of a redesign, but not against other sites as you can do with SUS. If you're looking for more than just benchmarking data, you can also add two open-ended questions to the SEQ without risking excessive length.

  • What would make this website/form/app/system better?

Alternatively,

  • What is something you would fix on this website/form/app/system?

These voluntary open-ends give respondents the opportunity to offer their suggestions about what is wrong with the system and how they might make it better, providing the potential to understand the “why” behind the data.
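
Because SEQ benchmarking comes down to comparing average ratings before and after a redesign, the arithmetic is simple. Here is a minimal sketch; the ratings are invented for illustration and use the 1 = Very difficult to 7 = Very easy scale:

```python
# Hypothetical SEQ ratings collected before and after a redesign
# (1 = Very difficult ... 7 = Very easy).
before = [4, 5, 3, 4, 5, 4, 3, 5]
after = [6, 5, 6, 7, 5, 6, 6, 5]

mean_before = sum(before) / len(before)
mean_after = sum(after) / len(after)

print(f"Before redesign: {mean_before:.2f}")                # 4.12
print(f"After redesign:  {mean_after:.2f}")                 # 5.75
print(f"Change:          {mean_after - mean_before:+.2f}")  # +1.62
```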

In the end, by using either of these UX survey question sets both before a system redesign is launched and after, you will be able to tell your stakeholders whether the redesign is indeed an improvement over the old version, and by how much.


Lauren Isaacson

Lauren Isaacson is a UX and market research consultant living in Vancouver, British Columbia. Over her career she has consulted for various agencies and companies, such as Nissan/Infiniti, Microsoft, Blink UX, TELUS Digital, Applause, Mozilla, and more. You can reach her through her website, LinkedIn, and Twitter.

Tags:  data  QRCA Digest  qualitative research  user experience 


Ditch the script; have a conversation instead!

Posted By Alison Rak, Monday, March 20, 2017

Nobody likes a telemarketer, so why use their techniques in recruiting? Why are researchers still getting away with putting participants through long, boring, tedious screeners? A conversational approach to your recruit may seem difficult or impractical, but done well it can yield excellent results in the form of highly qualified, happier participants.

What is a conversational recruit? It’s a way of getting all of the answers to your screener, and then some, through a friendly conversation. There are a few key requirements for success, however. First, you need to be completely aligned with your recruiter on your screening criteria; this typically requires a detailed conversation, backed up in writing, rather than just emailing over a screener. Second, you need to trust completely that your recruiter will not lead the participant and has your best interests in mind. Finally, you need a recruiter who will put a small number of qualified, intelligent people who are well versed in your project to work for you, versus a firm that will assign a large number of interchangeable dialers.

Some researchers attempt a conversational recruit by writing a conversational screener, but these fall short. Potential participants can tell when someone is reading from a script and it’s a turnoff. A skilled, conversational recruiter, on the other hand, can knock off a number of screener questions in a brief exchange. Here’s an example of three questions from a typical screener:

First, a written introductory paragraph that, no matter how casual the recruiter tries to make it, will come across as a script and set the tone for the rest of the exchange. Then come the questions:

  1. What age range do you fall into?
    1. under 18 (terminate)
    2. 18-24
    3. 25-34
    4. 35-44
    5. 45-54
    6. 55 or older (terminate)

  2. Do you have kids living at home? If so, what are their ages?

  3. Do you or anyone in your household work in any of the following industries?
    1. Education
    2. Marketing
    3. Advertising
    4. Public relations
    5. Transportation
    6. Technology
    7. etc. etc. etc.

  4. (Articulation question) If you could go anywhere on vacation, where would you go and why?

Now, imagine trying to achieve the same thing through a conversational approach.

After a brief introduction….

Recruiter: Tell me a little about yourself. For example, how old are you, what do you do for work, and who do you live with?

Potential participant: Well, let’s see…. I’m 42 years old, a stay-at-home mom. I live with my husband and two kids, plus a golden retriever who acts like my third kid!

Recruiter: Oh, I love goldens! How old are your kids?

Participant: My daughter Izzy is four and my son Burke is eight.

Recruiter: Wow, you have your hands full. What does your husband do for work?

Participant: He’s a chef for Intuit.

Recruiter: Nice! Does he cook for you at home?

Participant: He does! He’s a great cook. During the week I usually feed the kids before he comes home but he will whip something up for the two of us and it’s always delicious. I’m very lucky!

You get the idea. The conversational approach got all of the key information from the original screener, and then some. The participant is much more engaged, and an articulation question becomes irrelevant.

Taking it a step further, the recruiter has now established a rapport with the participant and can write up a blurb for the researcher, versus only typing stats into a grid. As a researcher, I appreciate getting an email with a blurb about a hold (e.g., “Rachel is a stay-at-home mom of two and very articulate. She meets all of the criteria but is a hold because her husband works in the technology industry (for Intuit), but as a chef.”). I can read it and quickly respond, “Yes, let’s accept Rachel” (I was screening out people who work in tech, but a chef at a technology company is fine for this project). That is far preferable to getting an email (“Attached is your latest grid, with a hold for your review”) that I then have to open and read through to find the reason for the hold.

A conversational approach to recruiting brings about so many benefits but most of all, it’s consistent with our work and our industry values of being both qualitative and humane.

Tags:  data  qualitative research  survey methods 


Exploring whether we need humans to do qualitative research

Posted By Administration, Tuesday, August 9, 2016

In a thought-provoking article published in the QRCA VIEWS magazine, Cynthia W. Jacobs explores whether we still need humans to do qualitative research. There’s a growing focus on “listening” to social media, and – in part forced by the volume of data generated this way – we see automated methods replacing human-powered analysis. There are two questions to consider here. First, who are we hearing and not hearing when we “listen” to social media? Second, what are we missing or misinterpreting when we rely on automated analysis?

The high-volume, free insights generated by social media will go to waste if we don’t use caution in interpretation. Regardless of the tool, it is critical that we don’t rely on the overall summary alone. Read the article for more details on the role of human-powered analysis vs. automated social listening methods and why the role of the qualitative researcher has taken on new importance.

Tags:  analysis  cynthia jacobs  data  human-powered  humans  qrca views  qualitative research  social media 
