Qual Power

How Can Voice AI Help Qualitative Researchers?

Posted By Kay Corry Aubrey, Tuesday, July 23, 2019
Updated: Tuesday, July 23, 2019


Within three years, 50% of Web searches will be done via voice. Today almost one in four US households has access to a smart speaker such as Google Home or Amazon's Alexa. Consumers are adopting voice technology faster than any other technology, including smartphones. Very soon voice artificial intelligence (AI) will be so embedded in our everyday lives that we may not even notice it anymore. How can qualitative researchers leverage this powerful trend?

For inspiration I spoke with four experts who are doing cool things with voice technology. They described unique applications of voice AI that offer a preview of how this technology might transform our work as researchers. Consumers are already shifting toward using their voice, rather than their fingers, to interact with technology and the Internet.

 

The Rise of the Talking Survey

Greg Hedges has had great success with voice-based surveys delivered through virtual assistants such as Siri, Alexa, and Google Assistant. According to him, “It’s like launching a focus group of one. People are interacting where they are most comfortable in their own home, using their own words. We’ve found that people are more spontaneous and natural when they talk vs. when they type.” Greg’s company also helps organizations integrate voice branding into their digital marketing ecosystem. Part of their expertise is redesigning a client’s SEO strategy to be phrase- and question-based (vs. keyword-based) to accommodate voice searches.

 

Ask Your Digital Twin to Narrate Your Next Report

Domhnaill Hernon collaborates with artists to explore the deep connections between technology and human potential. He worked with Reeps One, a beatboxer, who fed hours of his audio recordings into Nokia’s AI system. To their astonishment, the system returned new melodies he had never recorded but that sounded just like him. Rather than feeling threatened, the artist embraced the capability and now incorporates AI-generated tunes into his work. Soon this technology will be widely available, and you’ll be able to produce reports in your own voice that clients can listen to just like a podcast.

It’s hard to imagine how voice technology – and AI in general – will change our world. Technology is always a double-edged sword. On one hand, AI could be used to cure disease, make societies more efficient, and redistribute wealth so humans everywhere prosper. On the other, it might lead to a hardening of social classes and a surveillance state. In a recent episode of 60 Minutes, AI expert Kai-Fu Lee said that 40% of jobs will be eliminated within 15 years due to artificial intelligence. To empower ourselves we need to understand what AI is, how it works, and what its capabilities and limitations are.

 

How Voice AI Works

As with any artificial intelligence, voice technology relies on two things: having access to a huge pool of data, and algorithms that look for patterns within the data. For voice, the algorithm is called Natural Language Processing (NLP). The result can only be as good as the data that are fed into the machine. Today in North America, Voice Assistants (VA) are 95% accurate if the person speaking is a white native-born man, 80% accurate if it’s a woman, and as low as 50% accurate if the person has an accent. This is because of the socially limited group of people who contribute their data by using voice assistants: VA users tend to be early adopters, white, and highly educated.

Jen Heape notes, “Natural Language Processing (NLP) cannot deal reliably with anyone who is not a white male, and this is deeply problematic, which is why Google and Amazon are giving away so much for free so they can collect more complete samples.”

The algorithms that make up NLP leverage fixed rules of language around syntax, grammar, and semantics. The algorithm can be taught “if they say this, say that,” and the machine learns the pattern. This capability allows the virtual assistant to process simple prescriptive (but useful) commands such as “turn on the lights,” “play NPR,” or “order more lettuce,” because the technology has learned the vocabulary and structure of English sentences.

 

Can a Machine Be Conversational?

However, voice technology is still very much in its infancy. The machine has no concept of culture or social inferences. As Heape noted, “If I were to say ‘The kids just got out of school’ and the listener is in the same time zone, they’d know it’s 3 or 3:30. However, the voice technology would not be able to infer this because it lacks the data.”

Freddie Feldman leads a voice design team that creates chatbots and conversational interfaces for medical environments. According to Feldman, chatbots and voice technology in general are helpful in these settings for getting quick answers to predictable questions. “But for anything more crucial, dynamic or that requires understanding the other person’s psychology you’ll need to call someone in the end.” In theory, it’s possible that voice technology will have deeper human characteristics one day. “The technology is there. It’s just a question of someone piecing it together.”

It’s hard to imagine any machine being able to understand and integrate all the rich signals we send and receive in a conversation: the look on a person’s face, the tone of their voice, their diction, their physical posture, our perception of anger and pleasure, or what they are thinking. These elements are as essential to meaning and human connection as the words themselves. As Heape said, “VAs will never replace the human. There will always be a human pulling the lever. We decide what the machine needs to learn. VAs will remove the arduous elements. But we need a human to interpret the results and analyze it. We’re still so much at the beginning of it — we have not fed the machine.”

My feeling is there will be abundant opportunities for qualitative researchers, but – first – we need to understand the beast and what it cannot do.

 

Learn More about Artificial Intelligence and Voice Technology

Thomas H. Davenport and Rajeev Ronanki, “Artificial Intelligence for the Real World: Don’t Start with Moonshots,” Harvard Business Review, January-February 2018 (free download).

Joanna Penn, “9 Ways That Artificial Intelligence (AI) Will Disrupt Authors And The Publishing Industry”, Creative Penn Podcast #437, July 2019.

Oz Woloshyn and Karah Preiss, Sleepwalkers podcast on iHeartRadio.

VOICE Summit 2019, New Jersey Institute of Technology, July 22-25, 2019.

 

Acknowledgements

Thank you to the experts I spoke with while researching this post:

  • Freddie Feldman, Voice Design Director at Wolters Kluwer Health
  • Jen Heape, Co-founder of Vixen Labs
  • Greg Hedges, VP of Emerging Experiences at RAIN agency
  • Domhnaill Hernon, Head of Experiments in Art and Technology at Nokia Bell Labs.

 

About the Author

Kay Corry Aubrey is a User Experience consultant and trainer who shows her customers how to make their products more easily understandable to ordinary people through usability testing and in-home studies. For the past few years she’s focused on products and services that improve the lives of older people, helping them remain independent in their own homes. Kay sees great potential in voice-enabled products geared toward older folks. Her clients include Pillo Health, Stanley Black and Decker Futures, and the Centers for Medicare and Medicaid Services. Kay is the Luminaries Editor for the QRCA VIEWS magazine and a RIVA-certified Master Moderator and Trainer.

Website: www.UsabilityResources.net

LinkedIn: https://www.linkedin.com/in/kaycorryaubrey/

Tags:  AI  data  QRCA Digest  Research Methodologies 

 

Two Ways to Quantify User Experience

Posted By Lauren Isaacson, Tuesday, February 19, 2019
Updated: Friday, February 15, 2019

Quantify User Experience

A friend of mine is a designer who has worked with various divisions of the government of Canada. She told me about working with one particular department. She would show them potential design improvements to existing websites based on qualitative usability tests and they would invariably come back with the question, "How do you know it's better?"

Indeed, how does one know for sure a new website is better than the existing version? As researchers, we know the answer — benchmarking data. However, what's the best way to benchmark the usability of a system? Two methods are commonly used by UX researchers:

  • System Usability Scale (SUS)
  • Single Ease Question (SEQ)

System Usability Scale (SUS)

SUS is the more widely used and documented of the two options, with references in over 1,300 articles and publications. It's also free and applicable to pretty much any piece of technology. SUS consists of 10 questions, all using the same 5-point scale.

1 Strongly Disagree/2 Disagree/3 Neutral/4 Agree/5 Strongly Agree

  1. I think that I would use this system frequently.
  2. I found the system unnecessarily complex.
  3. I thought the system was easy to use.
  4. I think that I would need the support of a technical person to be able to use this system.
  5. I found the various functions in this system were well integrated.
  6. I thought there was too much inconsistency in this system.
  7. I would imagine that most people would learn to use this system very quickly.
  8. I found the system very cumbersome to use.
  9. I felt very confident using the system.
  10. I needed to learn a lot of things before I could get going with this system.

The numbering of the questions is essential for calculating the overall score. For each odd-numbered question, subtract 1 from the response; for each even-numbered question, subtract the response from 5. This leaves you with a total between 0 and 40, which is then multiplied by 2.5 to stretch the range to 0 to 100. This final number is a score and should not be confused with a percentage.

Lucky for us, the good folks at Measuring U have analyzed the responses from 5,000 users evaluating 500 websites and have come up with a grading system to help interpret the scores:

  • ~85+ = A
  • ~75 - 84 = B
  • ~65 - 74 = C, 68 is the average score
  • ~55 - 64 = D
  • ~45 or under = F

If you would like a more official and accurate grading system, you can buy Measuring U's guide and calculator package.

Single Ease Question (SEQ)

The other method is SEQ. The Single Ease Question is less commonly used and has no documented standard wording, but it has the advantage of being much shorter than SUS. I am always in favor of making surveys shorter. SEQ consists of one question rated on a 7-point scale covering the ease of completing a technology-enabled task. Like SUS, it is free and applicable to almost any piece of technology.

  • Overall, how difficult or easy did you find this task?
    • Very easy
    • Easy
    • Somewhat easy
    • Neutral
    • Somewhat difficult
    • Difficult
    • Very difficult

Because there is no documented standard wording of the SEQ, you can tailor the question to cover the metric your stakeholders are most concerned about — confidence, speed, usefulness, etc. The SEQ also pairs very well with unmoderated usability tests often used by researchers who need quick feedback on interfaces.

Measuring U found the average scores across multiple websites to be about 5 (Somewhat easy), but this system is less documented than SUS. Therefore, use it to compare the before and after of a redesign, but not against other sites as you can do with SUS. If you're looking for more than just benchmarking data, you can also add two open-ended questions to the SEQ without risking excessive length.

  • What would make this website/form/app/system better?

Alternatively,

  • What is something you would fix on this website/form/app/system?

These voluntary open-ends give respondents the opportunity to offer their suggestions about what is wrong with the system and how they might make it better. It provides the potential to understand the “why” behind the data.

In the end, by using either of these UX survey question sets before a system redesign is launched and after, you will be able to tell your stakeholders if a redesign is indeed an improvement over the old, and how much better it is.


Lauren Isaacson

Lauren Isaacson is a UX and market research consultant living in Vancouver, British Columbia. Over her career she has consulted for various agencies and companies, such as Nissan/Infiniti, Microsoft, Blink UX, TELUS Digital, Applause, Mozilla, and more. You can reach her through her website, LinkedIn, and Twitter.

Tags:  data  QRCA Digest  qualitative research  user experience 

 

Ditch the script; have a conversation instead!

Posted By Alison Rak, Monday, March 20, 2017

Nobody likes a telemarketer, so why use their techniques in recruiting? Why are researchers still getting away with putting participants through long, boring, tedious screeners? A conversational approach to your recruit may seem difficult or impractical, but done well it can yield excellent results in the form of highly qualified, happier participants.

What is a conversational recruit? It’s a way of getting all of the answers to your screener, and then some, through a friendly conversation. There are a few key requirements for success, however. First, you need to be completely aligned with your recruiter on your screening criteria. This typically requires a detailed conversation, backed up in writing, rather than just emailing over a screener. Second, you need to trust that your recruiter will not lead the participant and has your best interests in mind. Finally, you need a recruiter who will put a small number of qualified, intelligent people, well trained on your project, to work for you, rather than a firm that assigns a large number of interchangeable dialers.

Some researchers attempt a conversational recruit by writing a conversational screener, but these fall short. Potential participants can tell when someone is reading from a script and it’s a turnoff. A skilled, conversational recruiter, on the other hand, can knock off a number of screener questions in a brief exchange. Here’s an example of three questions from a typical screener:

First, a written introductory paragraph that, no matter how casual the recruiter tries to make it, will come across as a script and set the tone for the rest of the exchange. Then come the questions:

  1. What age range do you fall into?
    1. under 18 (terminate)
    2. 18-24
    3. 25-34
    4. 35-44
    5. 45-54
    6. 55 or older (terminate)

2. Do you have kids living at home? If so, what are their ages?

3. Do you or anyone in your household work in any of the following industries?

  1. Education
  2. Marketing
  3. Advertising
  4. Public relations
  5. Transportation
  6. Technology
  7. etc. etc. etc.

4. (Articulation question) If you could go anywhere on vacation, where would you go and why?

Now, imagine trying to achieve the same thing through a conversational approach.

After a brief introduction….

Recruiter: Tell me a little about yourself. For example, how old are you, what do you do for work, and who do you live with?

Potential participant: Well, let’s see…. I’m 42 years old, a stay-at-home mom. I live with my husband and two kids, plus a golden retriever who acts like my third kid!

Recruiter: Oh, I love goldens! How old are your kids?

Participant: My daughter Izzy is four and my son Burke is eight.

Recruiter: Wow, you have your hands full. What does your husband do for work?

Participant: He’s a chef for Intuit.

Recruiter: Nice! Does he cook for you at home?

Participant: He does! He’s a great cook. During the week I usually feed the kids before he comes home but he will whip something up for the two of us and it’s always delicious. I’m very lucky!

You get the idea. The conversational approach got all of the key information from the original screener, and then some. The participant is much more engaged, and a separate articulation question becomes irrelevant.

Taking it a step further, the recruiter has now established a rapport with the participant and can write up a blurb for the researcher, versus only typing stats into a grid. As a researcher, I appreciate getting an email with a blurb about a hold (e.g., “Rachel is a stay-at-home mom of two and very articulate. She meets all of the criteria but is a hold because her husband works in the technology industry (for Intuit), but as a chef.”). I can read it and quickly respond, “Yes, let’s accept Rachel” (I was screening out people who work in tech, but a chef at a technology company is fine for this project). That is far preferable to getting an email (“Attached is your latest grid, with a hold for your review”) that I then have to open and read through to find the reason for the hold.

A conversational approach to recruiting brings many benefits, but most of all it’s consistent with our work and our industry’s values of being both qualitative and humane.

Tags:  data  qualitative research  survey methods 

 

Exploring whether we need humans to do qualitative research

Posted By Administration, Tuesday, August 9, 2016

In a thought-provoking article published in the QRCA VIEWS magazine, Cynthia W. Jacobs explores whether we still need humans to do qualitative research. There’s a growing focus on “listening” to social media, and – in part forced by the volume of data generated this way – we see automated methods replacing human-powered analysis. There are two questions to consider here. First, who are we hearing and not hearing when we “listen” to social media? Second, what are we missing or misinterpreting when we rely on automated analysis?

The high-volume, free insights generated by social media will go to waste if we don’t use caution in interpretation. Regardless of the tool, it is critical that we don’t rely on the overall summary alone. Read the article for more details on the role of human-powered analysis vs. automated social listening methods and why the role of the qualitative researcher has taken on new importance.

Tags:  analysis  cynthia jacobs  data  human-powered  humans  qrca views  qualitative research  social media 

 