Qual Power

Annual Conference Reporter on the Scene: UX Live! Revitalizing the Customer Experience

Posted By Kayte Hamilton, Thursday, May 2, 2019
Updated: Wednesday, May 1, 2019

Summary:
At the 2019 QRCA Annual Conference, Shaili Bhatt and Nancy Baum, both from C+R Research, gave all of us quallies a live, interactive demo of mobile usability testing. Through small-group work on writing effective mobile user experience (UX) questions and a Q&A session that had the room buzzing, we gained practical techniques for executing digital user experience sessions.

Key Takeaways:
I was thrilled to learn that several easy-to-use applications for digital usability testing are on the market. While the available platforms range in pricing and features, many combine live video recordings, task-based assignments, and real-time updates that QRCs can use to conduct more usability testing sessions nationwide while working remotely.


Putting it into practice:
I am excited to put the tools we tested during the session to work in remote usability testing!

A-ha moment:
Being able to see the live video with screen animation was a revelation. It really does replicate what you see in person, but it allows you, as a moderator, to be in more than one place at a time.

The presented platforms and the hands-on application during the session were extremely useful in helping many of us in the room understand the impact these tools could have on our work. The question-and-answer session was lively, and many questions sparked insightful conversations. It's clear these tools are going to make a big impact on how many of us work!
 

QRCA Reporter on the Scene:

Kayte Hamilton
Issues & Answers Network, Inc.
LinkedIn: https://www.linkedin.com/in/kaytehamilton/

Tags: QRCA Annual Conference, QRCA Reporter on the Scene, Qualitative Research, user experience, UX

 

Two Ways to Quantify User Experience

Posted By Lauren Isaacson, Tuesday, February 19, 2019
Updated: Friday, February 15, 2019


A friend of mine is a designer who has worked with various divisions of the government of Canada. She told me about working with one particular department. She would show them potential design improvements to existing websites based on qualitative usability tests and they would invariably come back with the question, "How do you know it's better?"

Indeed, how does one know for sure a new website is better than the existing version? As researchers, we know the answer — benchmarking data. However, what's the best way to benchmark the usability of a system? Two methods are commonly used by UX researchers:

  • System Usability Scale (SUS)
  • Single Ease Question (SEQ)

System Usability Scale (SUS)

The SUS is the more widely used and better documented of the two options, with references in over 1,300 articles and publications. It's also free and applicable to pretty much any piece of technology. The SUS consists of 10 statements, all rated on the same 5-point agreement scale.

1 Strongly Disagree/2 Disagree/3 Neutral/4 Agree/5 Strongly Agree

  1. I think that I would use this system frequently.
  2. I found the system unnecessarily complex.
  3. I thought the system was easy to use.
  4. I think that I would need the support of a technical person to be able to use this system.
  5. I found the various functions in this system were well integrated.
  6. I thought there was too much inconsistency in this system.
  7. I would imagine that most people would learn to use this system very quickly.
  8. I found the system very cumbersome to use.
  9. I felt very confident using the system.
  10. I needed to learn a lot of things before I could get going with this system.

The numbering of the questions is essential for calculating the overall score. For each odd-numbered question, subtract 1 from the response; for each even-numbered question, subtract the response from 5. Summing the results leaves you with a score between 0 and 40, which is then multiplied by 2.5 to stretch the range to 0 to 100. This final number is a score and should not be confused with a percentage.
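
If you want to automate the arithmetic, the calculation is easy to script. Here is a minimal sketch in Python; the function name and the sample responses are illustrative, not part of any standard library:

    def sus_score(responses):
        """Compute a SUS score (0-100) from ten 1-5 responses.

        responses: ten integers ordered to match the ten SUS statements,
        where 1 = Strongly Disagree and 5 = Strongly Agree.
        """
        if len(responses) != 10:
            raise ValueError("SUS requires exactly ten responses")
        total = 0
        for i, r in enumerate(responses, start=1):
            if i % 2 == 1:           # odd-numbered statements: response - 1
                total += r - 1
            else:                    # even-numbered statements: 5 - response
                total += 5 - r
        return total * 2.5           # scale the 0-40 sum to 0-100

    # Example: one respondent's answers to statements 1 through 10
    print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 2]))  # 82.5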

Lucky for us, the good folks at Measuring U have analyzed the responses from 5,000 users evaluating 500 websites and have come up with a grading system to help interpret the scores:

  • ~85 or above = A
  • ~75 - 84 = B
  • ~65 - 74 = C (68 is the average score)
  • ~55 - 64 = D
  • ~54 or under = F

If you would like a more official and accurate grading system, you can buy Measuring U's guide and calculator package.
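
As a rough companion to the bands above, here is a sketch of a grade lookup. It uses my own contiguous reading of the approximate bands, not Measuring U's official curved scale:

    def sus_grade(score):
        """Map a 0-100 SUS score to the approximate letter grades above."""
        if score >= 85:
            return "A"
        elif score >= 75:
            return "B"
        elif score >= 65:
            return "C"
        elif score >= 55:
            return "D"
        return "F"

    print(sus_grade(82.5))  # B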

Single Ease Question (SEQ)

The other method is the SEQ. The Single Ease Question is less commonly used and has no documented standard wording, but it has the advantage of being much shorter than the SUS. I am always in favor of making surveys shorter. The SEQ consists of one question, rated on a 7-point scale, covering the ease of completing a technology-enabled task. Like the SUS, it is free and applicable to almost any piece of technology.

  • Overall, how difficult or easy did you find this task?
    1. Very difficult
    2. Difficult
    3. Somewhat difficult
    4. Neutral
    5. Somewhat easy
    6. Easy
    7. Very easy

Because there is no documented standard wording of the SEQ, you can tailor the question to cover the metric your stakeholders are most concerned about — confidence, speed, usefulness, etc. The SEQ also pairs very well with unmoderated usability tests often used by researchers who need quick feedback on interfaces.

Measuring U found the average scores across multiple websites to be about 5 (Somewhat easy), but this system is less documented than SUS. Therefore, use it to compare the before and after of a redesign, but not against other sites as you can do with SUS. If you're looking for more than just benchmarking data, you can also add two open-ended questions to the SEQ without risking excessive length.

  • What would make this website/form/app/system better?

Alternatively,

  • What is something you would fix on this website/form/app/system?

These voluntary open-ends give respondents the opportunity to say what is wrong with the system and how they might make it better. They offer the potential to understand the “why” behind the data.

In the end, by fielding either of these UX survey question sets both before and after a system redesign launches, you will be able to tell your stakeholders whether the redesign is indeed an improvement over the old system, and by how much.
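
To make that concrete, here is a minimal sketch of the before-and-after comparison, using invented SEQ ratings on the 1-7 scale (7 = Very easy). Every number below is illustrative only:

    from statistics import mean

    # Hypothetical SEQ ratings gathered before and after a redesign
    before = [4, 5, 3, 5, 4, 4, 5, 3, 4, 5]
    after = [6, 5, 7, 6, 6, 5, 7, 6, 5, 6]

    print(f"Mean SEQ before: {mean(before):.2f}")   # 4.20
    print(f"Mean SEQ after:  {mean(after):.2f}")    # 5.90
    print(f"Improvement:     {mean(after) - mean(before):+.2f}")  # +1.70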


Lauren Isaacson

Lauren Isaacson is a UX and market research consultant living in Vancouver, British Columbia. Over her career she has consulted for various agencies and companies, such as Nissan/Infiniti, Microsoft, Blink UX, TELUS Digital, Applause, Mozilla, and more. You can reach her through her website, LinkedIn, and Twitter.

Tags: data, QRCA Digest, qualitative research, user experience
