Posted By Katrina Noelle,
Tuesday, February 25, 2020
California Consumer Privacy Act (CCPA)
The information provided in this blog post does not, and is not intended to, constitute legal advice. Please consult with your own legal counsel on your situation.
What is CCPA?
The California Consumer Privacy Act (CCPA) is a new state privacy law that impacts most market research and data analytics companies. It covers almost all consumer data, of nearly any kind and in any form, not just electronic/online data.
GDPR vs. CCPA
CCPA’s goal is to give California residents greater control over how organizations collect, use and disclose their personal data. Although there are some similarities with General Data Protection Regulation (GDPR), CCPA also introduces additional rights for consumers such as the right to opt out from allowing a business to sell their personal data. Certain CCPA requirements overlap with the existing GDPR requirements, but several policies, processes and systems will still need updating to address differences between the two laws.
Who does CCPA apply to?
The International Association of Privacy Professionals (IAPP) estimates that the new law “will apply to more than 500,000 U.S. companies, the vast majority of which are small- to medium-sized enterprises.”
Basically, CCPA covers a for-profit company “that collects consumers’ personal information, or on the behalf of which such information is collected and that alone, or jointly with others, determines the purposes and means of the processing of consumers’ personal information, that does business in the State of California, and that satisfies one or more of the following thresholds”:
- Has greater than $25 million in annual gross revenue;
- Annually handles personal information of 50,000 or more consumers; or
- Derives half or more of annual revenue from selling consumers’ personal information.
The CCPA imposes obligations directly on businesses, not on service providers. As defined under the CCPA, a “service provider” is a for-profit entity “that processes information on behalf of a business.” Even if your company does not meet the thresholds above to qualify as a business, it may still be subject to the vendor management obligations that a business is required to impose on its service providers.
EXAMPLE: A company that falls within the scope of the CCPA must require by contract that its suppliers that process personal information on its behalf only retain, use, or disclose that information for the specific purpose of performing the services specified in the contract.
Because many marketing research and data analytics companies (as well as our clients) will be covered by CCPA, it’s something to look into no matter where you are based. The only way to really avoid this law will be for a company to have nothing to do with data on a California resident (including a California employee, independent contractor or participant). That’s hard to avoid when doing nationwide research projects!
It’s tempting to think that your company is “too small to worry.” But while some small companies may not be covered, it still will be hard for them to escape the law’s reach.
EXAMPLE: A small recruiting company that recruits fewer than 50,000 individuals for other organizations’ studies would be subject to this law if recruitment (the sale of consumers’ personal contact information and qualifications for a study to the recruiter’s clients) makes up half or more of its annual revenue.
What do I do to comply?
Businesses that fall under the scope of the CCPA will need to update data practices and procedures in order to comply with certain CCPA disclosure requirements. Businesses that fail to comply with the CCPA may be subject to “monetary penalties, regulatory enforcement actions, and private rights of action.”
Based on my conversations with experts on the topic, there are a few things you should do or consider to ensure you are CCPA compliant:
- Meet with your lawyer to determine if you need to be CCPA compliant and what steps you need to take in order to do so.
- Consider updating your operating agreements, written information security program (WISP) and/or incident response plan (IRP).
- Review your company’s agreements with service providers to be sure you are up to date with their requirements.
Note that the law went into effect on Jan. 1, 2020, and will continue to evolve; keep abreast of changes here: https://oag.ca.gov/privacy/ccpa or subscribe to the mailing list here: https://oag.ca.gov/privacy/ccpa/subscribe
Katrina is principal of KNow Research, a full-service insights consultancy that has specialized for 16+ years in designing custom qualitative projects to unlock insights about brands and target audiences. She is also co-founder of Scoot Insights, whose trademarked Scoot™ Sprint approach helps decision-makers choose the right direction.
Posted By Rachel Wang,
Thursday, February 13, 2020
Annual Conference Reporter on the Scene: UX Research is not a Synonym for Usability Testing
Presenter: Kristine Remer, JuneUX
At the QRCA 2020 Annual Conference, Kristine Remer started her presentation, “UX Research is not a Synonym for Usability Testing,” with a bold declaration: she is not afraid of dancing in public, a theme she carried throughout her talk.
Dancing aside, throughout her presentation Kristine shared her gold mine of knowledge about the UX research world. She reminded all in attendance that UX research is not a synonym for usability testing. The headline of the presentation may be an “I-cannot-agree-more” truth for savvy UX researchers and designers, but an unknown land to explore for others. Kristine dispelled this myth in three parts: what UX research is, where UX researchers work, and how UX researchers work. With each step, she drew the audience deeper into the heart of the UX research world.
At the end of her presentation, Kristine encouraged the audience to dive further into the UX design world by joining design Twitter, mentoring UX researchers, and exploring additional UX resources from a recent blog post.
Key Session Takeaways
So, what are my key takeaways from this fabulous session?
WHAT IS UX RESEARCH?
- First of all, UX stands for user experience.
- User experience is a broad field. This is because the USER is human, and humans have a vast range of experiences. Think of things like eating, drinking, sleeping, grooming, dressing, commuting, working, entertaining, socializing, reading, teaching, health-caring, dating, lovemaking, parenting, shopping, traveling and beyond.
- UX research exists to facilitate the design of products or services tied to human experience. Examples include designing a salsa holder/bottle, a webpage, an application, a museum, or an airline service system that makes sense to its users.
- UX research uses a ton of methodologies which are listed and explained on Kristine’s website.
WHERE UX RESEARCHERS WORK
UX researchers work in a vast range of places related to the design and innovation of the product or service, including innovation labs, on product, service or CX teams and in Centers of Excellence.
HOW UX RESEARCHERS WORK
The deliverables are not word-intensive reports but visually rich maps: the Empathy Map, Task Analysis Grid, Storyboard, Story Map, and Service Blueprint, to name a few.
Kristine Remer has great moves for dancing as well as for UX research methodology. She has collected and summarized a whole page of UX research methodologies!
Final Comments and Takeaways
With all the different words and systems of methodology, UX Research and Qualitative Research have the same goal of creating positive social impact through deeper understanding and effectively sharing the Human Truth. Set no limit for your approaches, folks.
QRCA Reporter on the Scene: Rachel Wang, LTH Business Consulting
Posted By Kay Corry Aubrey,
Tuesday, July 23, 2019
Updated: Tuesday, July 23, 2019
How Can Voice AI Help Qualitative Researchers?
Within three years, 50% of Web searches will be done via voice. Today almost one in four US households has access to a smart speaker such as Google Home or Alexa. Consumers are adopting voice technology faster than any other technology, including smart phones. Very soon voice artificial intelligence (AI) will become embedded in our everyday lives to the point where we may not even notice it anymore. How can qualitative researchers leverage this powerful trend?
For inspiration I spoke with four experts who are doing cool things with voice technology. They described unique ways to apply voice AI that offer a preview of how this technology might transform our work as researchers, as consumers shift toward using their voices rather than their fingers to interact with technology and the Internet.
The Rise of the Talking Survey
Greg Hedges has had great success with voice-based surveys through virtual assistants such as Siri, Alexa and Google. According to him, “It’s like launching a focus group of one. People are interacting where they are most comfortable in their own home, using their own words. We’ve found that people are more spontaneous and natural when they talk vs. when they type.” Greg’s company also helps organizations integrate voice branding into their digital marketing ecosystem. Part of their expertise is redesigning a client’s SEO strategy to be phrase and question-based (vs. keyword based) to accommodate voice searches.
Ask Your Digital Twin to Narrate Your Next Report
Domhnaill Hernon collaborates with artists to explore the deep connections between technology and human potential. He worked with Reeps One, a beatboxer, who fed hours of his audio recordings into Nokia’s AI machine. To their astonishment, the system returned new melodies he hadn’t put in but that sounded just like him. Rather than feeling threatened, the artist embraced the capability and now incorporates AI-generated tunes into his work. Soon this technology will be widely available, and you’ll be able to produce reports in your own voice that clients can listen to just like a podcast.
It’s hard to imagine how voice technology – and AI in general – will change our world. Technology is always a double-edged sword. On one hand, AI will be used to cure disease, make societies more efficient, and redistribute wealth so humans everywhere prosper. On the other, it might lead to a hardening of the social classes and a surveillance state. In a recent episode of 60 Minutes, AI expert Kai-Fu Lee said that 40% of jobs will be eliminated within 15 years due to artificial intelligence. To empower ourselves we need to understand what AI is, how it works, its capabilities and limitations.
How Voice AI Works
As with any artificial intelligence, voice technology relies on two things: having access to a huge pool of data, and algorithms that look for patterns within the data. For voice, the algorithm is called Natural Language Processing (NLP). The result can only be as good as the data that are fed into the machine. Today in North America, Voice Assistants (VA) are 95% accurate if the person speaking is a white native-born man, 80% accurate if it’s a woman, and as low as 50% accurate if the person has an accent. This is because of the socially limited group of people who contribute their data by using voice assistants: VA users tend to be early adopters, white, and highly educated.
Jen Heape notes, “Natural Language Processing (NLP) cannot deal reliably with anyone who is not a white male, and this is deeply problematic, which is why Google and Amazon are giving away so much free so they can collect more complete samples.”
The algorithms that make up NLP leverage fixed rules of language around syntax, grammar, and semantics. The algorithm can be taught “if they say this, say that,” and the machine learns the pattern. This capability allows a virtual assistant to process simple, prescriptive (but useful) commands such as “turn on the lights,” “play NPR,” or “order more lettuce,” because the technology has learned the vocabulary and structure of English sentences.
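As a rough illustration only (not any real assistant’s implementation), the “if they say this, say that” rule pattern can be sketched in a few lines of Python; the commands and canned responses below are hypothetical:

```python
# Minimal sketch of rule-based "if they say this, say that" intent matching.
# Real voice assistants layer statistical NLP on top of patterns like these;
# this table of commands and responses is purely illustrative.
RULES = {
    "turn on the lights": "Turning on the lights.",
    "play npr": "Playing NPR.",
    "order more lettuce": "Adding lettuce to your shopping list.",
}

def respond(utterance: str) -> str:
    """Return the canned response for a known command, else a fallback."""
    key = utterance.lower().strip().rstrip(".!?")
    return RULES.get(key, "Sorry, I don't understand that yet.")
```

The brittleness of this approach is exactly the point the experts make above: anything outside the fixed rules falls straight through to the fallback.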
Can a Machine Be Conversational?
However, voice technology is still very much in its infancy. The machine has no concept of culture or social inferences. As Heape noted, “If I were to say ‘The kids just got out of school’ and the listener is in the same time zone, they’d know it’s 3 or 3:30. However, the voice technology would not be able to infer this because it lacks the data.”
Freddie Feldman leads a voice design team which creates chatbots and conversational interfaces for medical environments. According to Feldman, chat bots and voice technology in general are helpful in medical environments to get quick answers to predictable questions. “But for anything more crucial, dynamic or that requires understanding the other person’s psychology you’ll need to call someone in the end.” In theory, it’s possible that voice technology will have deeper human characteristics one day. “The technology is there. It’s just a question of someone piecing it together.”
It’s hard to imagine any machine being able to understand and integrate all the rich signals we send and receive in a conversation: the look on a person’s face, the tone of their voice, their diction, their physical posture, our perception of anger and pleasure, or what they are thinking. These elements are as essential to meaning and human connection as the words themselves. As Heape said, “VAs will never replace the human. There will always be a human pulling the lever. We decide what the machine needs to learn. VAs will remove the arduous elements. But we need a human to interpret the results and analyze it. We’re still so much at the beginning of it — we have not fed the machine.”
My feeling is there will be abundant opportunities for qualitative researchers, but – first – we need to understand the beast and what it cannot do.
Learn More about Artificial Intelligence and Voice Technology
Thomas H. Davenport and Rajeev Ronanki, “Artificial Intelligence for the Real World: Don’t Start with Moonshots,” Harvard Business Review, January-February 2018 (free download).
Joanna Penn, “9 Ways That Artificial Intelligence (AI) Will Disrupt Authors And The Publishing Industry”, Creative Penn Podcast #437, July 2019.
Oz Woloshyn and Karah Preiss, Sleepwalkers podcast on iHeartRadio.
Voice 2019 Summit, New Jersey Institute of Technology, July 22 – 25.
Thank you to the experts I spoke with while researching this post:
- Freddie Feldman, Voice Design Director at Wolters Kluwer Health
- Jen Heape, Co-founder of Vixen Labs
- Greg Hedges, VP of Emerging Experiences at RAIN agency
- Domhnaill Hernon, Head of Experiments in Art and Technology at Nokia Bell Labs.
About the Author
Kay Corry Aubrey is a User Experience consultant and trainer who shows her customers how to make their products more easily understandable to ordinary people through usability testing and in-home studies. For the past few years she’s focused on products and services for older people that improve their lives, helping them remain independent and in their home. Kay sees great potential in voice-enabled products geared towards older folks. Her clients include Pillo Health, Stanley Black and Decker Futures, and the Centers for Medicare and Medicaid Services. Kay is the Luminaries Editor for the QRCA VIEWS magazine and a RIVA-certified Master Moderator and Trainer.
Posted By Lauren Isaacson,
Tuesday, February 19, 2019
Updated: Friday, February 15, 2019
A friend of mine is a designer who has worked with various divisions of the government of Canada. She told me about working with one particular department. She would show them potential design improvements to existing websites based on qualitative usability tests and they would invariably come back with the question, "How do you know it's better?"
Indeed, how does one know for sure a new website is better than the existing version? As researchers, we know the answer — benchmarking data. However, what's the best way to benchmark the usability of a system? Two methods are commonly used by UX researchers:
- System Usability Scale (SUS)
- Single Ease Question (SEQ)
System Usability Scale (SUS)
SUS is the most widely used and documented of the two options, with references in over 1,300 articles and publications. It's also free and applicable to pretty much any piece of technology. SUS consists of 10 questions, all using the same 5-point scale.
1 Strongly Disagree/2 Disagree/3 Neutral/4 Agree/5 Strongly Agree
- I think that I would use this system frequently.
- I found the system unnecessarily complex.
- I thought the system was easy to use.
- I think that I would need the support of a technical person to be able to use this system.
- I found the various functions in this system were well integrated.
- I thought there was too much inconsistency in this system.
- I would imagine that most people would learn to use this system very quickly.
- I found the system very cumbersome to use.
- I felt very confident using the system.
- I needed to learn a lot of things before I could get going with this system.
The numbering of the questions is essential for calculating the overall score. For each odd-numbered question, subtract 1 from the response; for each even-numbered question, subtract the response from 5. Summing these adjusted values gives a raw score between 0 and 40, which is then multiplied by 2.5 to stretch the range to 0 to 100. This final number is a score and should not be confused with a percentage.
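That scoring procedure is mechanical enough to automate. Here is one way it could be sketched in Python; the function name and structure are my own, not an official SUS tool:

```python
def sus_score(responses):
    """Compute a SUS score (0-100) from ten responses on a 1-5 scale.

    Responses must be in questionnaire order: odd-numbered items are
    positively worded (contribute response - 1), even-numbered items are
    negatively worded (contribute 5 - response).
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    # Raw sum ranges 0-40; scale up to the familiar 0-100 score.
    return total * 2.5
```

For example, a respondent who answers every item neutrally (all 3s) lands exactly at a score of 50.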
Lucky for us, the good folks at Measuring U have analyzed the responses from 5,000 users evaluating 500 websites and have come up with a grading system to help interpret the scores:
- ~85+ = A
- ~75 - 84 = B
- ~65 - 74 = C, 68 is the average score
- ~55 - 64 = D
- Below ~55 = F
If you would like a more official and accurate grading system, you can buy Measuring U's guide and calculator package.
Single Ease Question (SEQ)
The other method is SEQ. Single Ease Question is less commonly utilized and has no documented standard wording, but it has the advantage of being much shorter than SUS. I am always in favor of making surveys shorter. SEQ consists of one question rated on a 7-point scale covering ease of completing a technology-enabled task. Like SUS, it is also free and applicable to almost any piece of technology.
- Overall, how difficult or easy did you find this task?

Respondents answer on a 7-point scale anchored by 1 (Very difficult) and 7 (Very easy).
Because there is no documented standard wording of the SEQ, you can tailor the question to cover the metric your stakeholders are most concerned about — confidence, speed, usefulness, etc. The SEQ also pairs very well with unmoderated usability tests often used by researchers who need quick feedback on interfaces.
Measuring U found the average scores across multiple websites to be about 5 (Somewhat easy), but this system is less documented than SUS. Therefore, use it to compare the before and after of a redesign, but not against other sites as you can do with SUS. If you're looking for more than just benchmarking data, you can also add two open-ended questions to the SEQ without risking excessive length.
- What would make this website/form/app/system better?
- What is something you would fix on this website/form/app/system?
These voluntary open-ends give respondents the opportunity to offer their suggestions about what is wrong with the system and how they might make it better. It provides the potential to understand the “why” behind the data.
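The before-and-after comparison described above can be sketched as a small helper; the function and the sample ratings are illustrative, not from a real study:

```python
from statistics import mean

def seq_summary(before, after):
    """Compare mean SEQ ratings (1-7 scale) before and after a redesign.

    Returns the two means and the change, rounded for reporting.
    """
    b, a = mean(before), mean(after)
    return {
        "before": round(b, 2),
        "after": round(a, 2),
        "change": round(a - b, 2),
    }

# Hypothetical ratings from five participants per round:
result = seq_summary([4, 5, 4, 3, 5], [6, 6, 5, 7, 6])
```

A positive "change" value is the kind of concrete evidence that answers the stakeholder question "How do you know it's better?"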
In the end, by fielding either of these UX survey question sets before and after a system redesign launches, you will be able to tell your stakeholders whether the redesign is indeed an improvement over the old version, and by how much.
Lauren Isaacson is a UX and market research consultant living in Vancouver, British Columbia. Over her career she has consulted for various agencies and companies, such as Nissan/Infiniti, Microsoft, Blink UX, TELUS Digital, Applause, Mozilla, and more. You can reach her through her website, LinkedIn, and Twitter.
Posted By Alison Rak,
Monday, March 20, 2017
Nobody likes a telemarketer, so why use their techniques in recruiting? Why are researchers still getting away with putting participants through long, boring, tedious screeners? A conversational approach to your recruit may seem difficult or impractical, but done well it can yield excellent results in the form of highly qualified, happier participants.
What is a conversational recruit? It’s a way of getting all of the answers to your screener, and then some, through a friendly conversation. There are a few key requirements for success, however. First, you need to be completely aligned with your recruiter on your screening criteria. This typically requires a detailed conversation, backed up in writing, versus just emailing over a screener. Second, you need to trust your recruiter completely that they will not lead the participant, and that they have your best interests in mind. Finally, you need a recruiter who will have a small number of qualified, intelligent people who are well-trained with your project working for you, versus a firm that will put a large number of interchangeable dialers on your project.
Some researchers attempt a conversational recruit by writing a conversational screener, but these fall short. Potential participants can tell when someone is reading from a script and it’s a turnoff. A skilled, conversational recruiter, on the other hand, can knock off a number of screener questions in a brief exchange. Here’s an example of a few questions from a typical screener:
First, a written introductory paragraph that, no matter how casual the recruiter tries to make it, will come across as a script and set the tone for the rest of the exchange. Then come the questions:
1. What age range do you fall into?
- Under 18 (terminate)
- 55 or older (terminate)
2. Do you have kids living at home? If so, what are their ages?
3. Do you or anyone in your household work in any of the following industries?
- Public relations
- etc. etc. etc.
4. (Articulation question) If you could go anywhere on vacation, where would you go and why?
Now, imagine trying to achieve the same thing through a conversational approach.
After a brief introduction….
Recruiter: Tell me a little about yourself. For example, how old are you, what do you do for work, and who do you live with?
Potential participant: Well, let’s see…. I’m 42 years old, a stay-at-home mom. I live with my husband and two kids, plus a golden retriever who acts like my third kid!
Recruiter: Oh, I love goldens! How old are your kids?
Participant: My daughter Izzy is four and my son Burke is eight.
Recruiter: Wow, you have your hands full. What does your husband do for work?
Participant: He’s a chef for Intuit.
Recruiter: Nice! Does he cook for you at home?
Participant: He does! He’s a great cook. During the week I usually feed the kids before he comes home but he will whip something up for the two of us and it’s always delicious. I’m very lucky!
You get the idea. The conversational approach got all of the key information from the original screener, and then some. The participant is much more engaged, and an articulation question becomes irrelevant.
Taking it a step further, the recruiter has now established a rapport with the participant and can write up a blurb for the researcher, versus only typing stats into a grid. As a researcher, I appreciate getting an email with a blurb about a hold (e.g., “Rachel is a stay-at-home mom of two and very articulate. She meets all of the criteria but is a hold because her husband works in the technology industry (for Intuit), but as a chef.”). I can read it and quickly respond, “Yes, let’s accept Rachel” (I was screening out people who work in tech, but a chef at a technology company is fine for this project). That is far preferable to getting an email (“Attached is your latest grid, with a hold for your review”) that I then have to open and read through to find the reason for the hold.
A conversational approach to recruiting brings about so many benefits but most of all, it’s consistent with our work and our industry values of being both qualitative and humane.
Posted By Administration,
Tuesday, August 9, 2016
Exploring whether we need humans to do qualitative research
In a thought-provoking article published in the QRCA VIEWS magazine, Cynthia W. Jacobs explores whether we still need humans to do qualitative research. There’s a growing focus on “listening” to social media, and – in part forced by the volume of data generated this way – we see automated methods replacing human-powered analysis. There are two questions to consider here. First, who are we hearing and not hearing when we “listen” to social media? Second, what are we missing or misinterpreting when we rely on automated analysis?
The high-volume, free insights generated by social media will go to waste if we don’t use caution in interpretation. Regardless of the tool, it is critical that we don’t rely on the overall summary alone. Read the article for more details on the role of human-powered analysis vs. automated social listening methods, and why the qualitative researcher’s role has taken on new importance.