Posted By Ted Kendall,
Tuesday, February 11, 2020
Updated: Tuesday, February 11, 2020
Adapting Your Listening Skills to the Online World
By: Ted Kendall
As a successful qually, you intuitively know the importance of listening, how to listen well, and how to show participants that you are listening.
Listening is important because it engenders trust, creates rapport, and opens participants up.
In a physical setting, the key things we do to listen, and to show we are listening, include:
- Asking questions in response to participants' thoughts
- Using verbal and non-verbal cues to show that you are listening
- Letting participants complete their own sentences
- Maintaining eye contact
- Acknowledging comments in specific ways, such as boarding them or capturing them on post-it notes
You will have noticed that most involve physicality—you have to be there in real life.
So, how do you listen, and just as importantly, show you are listening, in online qual?
Before we get into this, let me clarify that when I am talking about online qual in this context, I am referring to text-based online qual—primarily bulletin board style. While webcam interviews may be considered online, real life listening skills can be applied to the medium fairly easily.
Set Expectations to Counter Online Research Misperceptions
A unique challenge with online qual is that participants don’t necessarily know the difference between a survey and a qualitative discussion, so they often treat the study as if it were a survey. And they often believe that any interactions will be with a chatbot, not a real person.
It’s critical to counter these widely held beliefs and set the appropriate expectations up front. Tell participants that you will be listening to what they say. And let them know it’s not a survey—it’s a conversation.
I can sometimes be pretty blunt about this—even going so far as to tell participants that if they just speed through the answers to my questions, they will not get the incentive. And then, when someone does that, I follow through on the promise and call them on it. Often it changes their interactions. Sometimes it doesn’t. But they definitely know you are listening. And, if the discussion is open to the whole group, others will see that you are listening as well.
Depending on the platform, you can use the messaging tools as well as the landing pages to accomplish this. And if the tools aren’t there, just use email or text, even phone, outside of the application.
I also make it a habit to reply to every participant post in the introductions—much like I do in a traditional focus group setting, or for that matter, in a conversation with a stranger. These replies can often reflect common ground: “I love spending the day in the mountains with my dog too. What kind of dog do you have?” That’s not a question that will provide rich insights, but it will help open up the participant and really shows you are listening.
It’s critical to establish early in the conversation that you are a living, breathing, listening human being—not some chatbot or AI ghost in the machine. This has a huge impact on how participants approach your conversation.
Avoid AI Tools
Several online platform providers are touting AI-generated responses to participants. All I can say is that this is what we get when we let the programmers drive development. Avoid this feature. Yes, it saves you time during the discussion. But it also removes you from the conversation—you are no longer actively listening. You wouldn’t let a robot take over your focus group session just to save time, would you?
Also, AI is not yet perfect. And it needs to be in this case. It’s not a life or death situation, unless you consider the life or death of the research conversation. Even if the AI gets 90% of the interactions correct, there is that 10% that will suck the air right out of your conversation with that participant. If you are using a group setting, other participants will see the mistake and the negative impact becomes exponential.
So just don’t do it. The potential losses greatly outweigh the potential time savings. Besides, actually responding manually forces you to listen and learn—which is what this is all about. Don’t let a robot take your job.
How to Digitally Use “Non-verbal” Cues and Maintain Eye Contact
In the online, text-based world, you certainly can’t maintain eye contact, nor can you provide non-verbal cues to show you are listening. So how do you employ those key principles of listening in an online, text-based world?
Probably the most obvious way is replying to participants’ posts with questions to better understand what they have said or get some clarification on their comment. Yes, I am talking about the same probing questions we lay on participants in focus groups and interviews. These probing questions work just as well online as they do in real life.
To replace those non-verbal cues, I have found it quite effective to comment or ask questions even when there is no need to do so. The idea is that by just saying something, participants recognize that you are there and you are reading what they are posting—you are listening.
Sometimes it is easy to just copy and paste the same general comment to several participants when you do this. If the participants can’t see one another, this is fine and saves you time. But if the participants can see each other, then it just makes you look like a robot.
When making comments simply to show your presence, it’s important not to require a reply—often the platform gives you this option. I like to thank people for providing quality detail, or for an interesting take on the topic. The important thing is to personalize it a bit, to keep it from sounding generic.
Another way to show you are listening is to use the messaging app within the platform to hold meta conversations outside the actual discussion. I make it a point to send reminders at set times as well as thank-yous at the end of the day of discussion.
These messages don’t have to be just logistical in nature. You can also use them to show you are listening. Sometimes I will include a comment about some of the discussion—an insight that came through for the whole group of participants, or sometimes personalizing it to a specific participant.
In the end, listening is important to successful qual, whether you are in the same room as the participant or interacting digitally. It’s just how you listen, and how you show that you are listening, that can take a little adjustment in the digital qual world. But it’s no less important and no less doable.
Ted Kendall is the founder of TripleScoop, a boutique research agency that has a focus on online qualitative. Ted got to this place in his career by being in the right place at the right time to pioneer in early online methods. He was a co-founder of QualTalk that became 20/20 Research’s QualBoards. He learned how to moderate online qual through trial and error and has moderated hundreds of online qual discussions and interviews since that first one in 1997. And he is usually a good listener.
Posted By Foster Winter,
Tuesday, January 28, 2020
Super-qualitative! Using Qual Skills Beyond Market Research
By: Foster Winter
Please be assured, my qualitative colleagues, this subject is not intended to demean the discipline of market research. We love MR. However, over time we found that our qualitative universe was expanding. Now before you delve into astrophysics, we promise to keep our discussion more earthbound.
Whether you are in the early stages of your qualitative career, having mid-career reflections, or thinking of winding down your MR-based practice, we’ve found some examples of adjacencies that may prove thought-provoking for you.
The Operating Theatre
One of our colleagues has used her qualitative background to help in a most important aspect of the world of medicine. Many of you will remember Lauren Woodiwiss as an active member of QRCA for many years. As an avocation, Lauren had been involved in community theatre. As she moved into the next phase of her career, Lauren continued to hone her acting skills, becoming a professional actor.
She says that one of her most rewarding roles is that of a patient interacting with medical professionals at all levels, from first-year med students to physicians, nurses, and other medical personnel. Nearly all medical schools now employ patient/health care provider role-playing as a valuable communication and physical exam training technique.
Lauren has found that her qualitative skills, such as reading body language coupled with rapid-fire, in-the-moment, relevant, and ad-libbed response, allow her to realistically portray the patient and then provide both written and oral feedback to the learner and to the training institution. This feedback can include direction on what helped her — as a patient — to feel cared for and respected, as well as more concrete feedback on multiple aspects of taking a complete history, asking relevant questions and follow-up probes, and correctly executing the physical exam.
The feedback questionnaire can have as many as 40 different elements of evaluation. These must be rated based on the “patient’s” memory of the encounter that just took place and, as mentioned, the evaluation encompasses all aspects of communication from the time the learner enters to the time of exit.
A qualitative researcher has the ability to have many thought “balls” in the air at once, such as:
- What is the respondent saying?
- Does that answer the question I just asked? If not, is it a point I should explore?
- Does it fit the client’s objectives for the research?
- How am I doing on time?
It is these skills that exquisitely prepare a qualitative researcher for this work of training medical professionals.
Working with Underserved Populations
After retiring as a QRC, Barbara Rugen, together with her husband, joined the Peace Corps and was sent for two years to the African country of Namibia.
“Never once did I think I would be called upon to use my qualitative background. To my surprise, I found that my skills could make a significant difference there.”
Barbara worked largely with the Nama, who constitute the marginalized communities of the south. The first thing she learned about the Nama was the disillusionment of foreign agencies that had tried to help them: “They just don’t care!” was a common complaint. The second realization was the local prejudice against them, particularly by the white Afrikaners: “I don’t hire Nama. The Nama are too lazy.”
A small number of Nama were in positions of influence and wanted to uplift their people but were unsure how. Barbara conducted IDIs with the leaders and focus groups with the Nama people. The qualitative sessions explored Nama attitudes and behavior, and the research provided insights that helped the leaders frame recommendations and develop an action plan for the capacity building of these marginalized people.
If you are interested in learning more about this adventure in qualitative, you can hear an interview with Barbara on a VIEWS podcast at https://qrcaviews.org/2019/03/11/spring-podcast-using-qualitative-techniques-within-marginalized-populations/
Business Consulting and Talent Recruiting
My journey into the adjacent qualitative universe began with a small strategy project for a company I call a re-startup. The company had reorganized and was now on a growth path. The task at hand was where to start rebuilding the organization.
Enter strategic qualitative. We began with in-person depth interviews with members of the senior management team. From the knowledge gained, we recommended that the first personnel hole that needed to be plugged was that of a MarCom director. The client agreed, and then said, “Find us one.” I looked around to see if they were talking to me. But then, I realized that many organizations, particularly those in startup mode, do not – in fact should not – have their key management people getting into the weeds of going through the hiring process.
We did find our client a suitable candidate for that position — and, if I do say so myself, she’s been there for nearly three years and is doing a great job with a five-person department reporting to her. Along the way, we learned and developed a process that allows the supervisory/management team to do their primary jobs and still bring in the proper new talent.
Now, I admit my bias – and my client concurs with this view – that a primary reason the process works is that the foundation of the search is based on interviews treated as qualitative investigations. The nuances of the conversations also keep an ear on cues to the candidate’s compatibility with the culture of the organization, a very important aspect to a growing company.
While the three examples above illustrate later-career direction shifts, as we noted at the outset, qualitative expertise might offer new trajectories at any point in this rapidly changing research universe. I’d love to hear your thoughts!
Foster Winter is Managing Director of Sigma Research & Management Group. His experience as a business owner and researcher has contributed to his capabilities as a management and organizational consultant. Foster has served on the QRCA Board of Directors, co-chaired the Worldwide Qualitative Conference in Budapest and is the host of the QRCA VIEWS Conversations in Depth podcasts.
Posted By Tom Rich,
Tuesday, January 14, 2020
Looking Back – A Year of Change in the World of Qualitative
By: Tom Rich
At this time in the new year, it feels right to look back and see what seems to be different. It certainly has been a time of change in the world of qualitative research. I’ve spent the last couple of weeks communicating with some qualitative luminaries to get their perspectives on developing trends over the past year. Based on those conversations, I think a number of key developments are worthy of mention.
The Tool Bag is Growing
The continued expansion of the tools available for qualitative research is profoundly changing what qualitative is, and how it’s done. A dizzying array of tools and resources can be applied to qualitative. The online research platforms continue to grow in both features and sophistication, use of AI-based tools is growing, video editing software has become easier to use, and biometric tools continue to grow more affordable and user friendly. As a result, we can now provide insights to clients that are based upon more than mere conversation—we can bring multiple data sources to bear on our analysis.
The Lines are Blurring
As more tools become available, and as they become increasingly affordable and user friendly, clients are increasingly interested in mixed method studies. This doesn’t just mean qual/quant approaches; it’s no longer acceptable for qualitative practitioners to say, “I don’t do quant.” It means a greater merging of online and face-to-face approaches as well, as UX and more traditional qualitative studies are starting to look very similar.
Agility is the Word of the Moment
Clients are demanding compressed timetables. Practitioners are increasingly completing research in four or five business days. Also, clients’ priorities seem to be changing. Whereas the question used to be, “What will it take to get exactly the information we need?” it now seems to be, “We have a week—what can we learn in that time?” As a result, new approaches to recruiting and fieldwork are becoming more common.
Qualitative Approaches Are Becoming More Important to Understanding Big Data
Research users continue to see the value in using qualitative tools to understand all the data they have available. More and more seem to understand that there is a big difference between information and wisdom, and that actually talking to people—while it doesn’t yield quantifiable information—provides a level of detail and insight that can be acquired no other way.
If you feel like the ground is shifting beneath your feet, you’re not wrong. Whether you’re a researcher or a research user, staying current on new tools, techniques, and priorities is essential to survival and success.
I’d like to extend my thanks to these qualitative luminaries for their perspective on developing trends: David Bauer, Jim Kulevich, Abby Leafe, Joanna Patterson, Steve Schlesinger, and Manny Schrager.
Since founding his business, Thomas M. Rich & Associates, in 1996, Tom Rich has conducted thousands of focus groups, one-on-one interviews and online interactions for clients in nearly every industry. He boasts an extensive background in brand strategy, consumer behavior and shopper insights -- skills he developed while working for companies that include Backer & Spielvogel Advertising, Nabisco, Tambrands, and Unilever. This background gives Tom a unique skill set among qualitative practitioners and allows him to structure research and analysis around the tactical and strategic decisions that will be made as a result of the research. Tom holds a bachelor's degree in English from the University of Pennsylvania and a master's degree in business administration from the Amos Tuck School of Business Administration at Dartmouth College.
Posted By Aimée Caffrey,
Tuesday, December 10, 2019
Updated: Monday, December 9, 2019
Practical Messiness Masked by the Qualitative and Quantitative Distinction
By: Aimée Caffrey
This blog post discusses the practical messiness that can be masked by the qualitative/quantitative distinction and offers an approach for thinking about and dealing with that messiness.
Like many anthropologists, I have an abiding interest in the ways in which people construct and reproduce boundaries. During my doctoral work, my primary focus was on boundaries such as ethnicity, caste, and nationality. The professional path I have taken in more recent years has in part shifted my attention toward boundaries of another variety—the boundaries that demarcate scientific knowledge practices in industry, and toward a particular boundary with which the readers of this blog are already quite familiar—that between quantitative and qualitative. In my present role, I conduct and help support research that by most definitions would count as qualitative. At the same time, this work almost always feeds into, or follows on the heels of, research that by most definitions is quantitative. It might entail using IDIs, focus groups, or journaling exercises to better understand terminology or relevant dimensions of experience prior to writing a survey. At the other end of things, it might entail using these data collection formats in an effort to make sense of survey findings—when we have discovered the what but are uncertain of the why.
Working at this intersection instills a perhaps exaggerated awareness of, and sensitivity to, the risks of accepting the quant/qual boundary at face value. Like others of its type, this distinction is a productive shorthand for organizing and talking about a variety of practices; however, it can mask the messiness of reality. A very experienced industry researcher gestured toward this messiness on a recent L&E webinar when he remarked on the "under-powered quant" that can be at work when focus group moderators ask for a show of hands. Alternatively, consider that many of what are generally marketed as mobile ethnography or online qual tools often contain what we otherwise think of as quantitative question types (e.g., multiple choice). To offer another example, I regularly assist fellow researchers with the development of interview and focus group discussion guides, and often this assistance centers in part on rephrasing "how much" (i.e., quantitative) kinds of questions to help us make sure we are in fact collecting qualitative data.
These examples of the messiness relate to a tension between the method deployed and the data gathered. When we think of the boundary between qualitative and quantitative as pertaining to a (reporting) distinction between numbers and words, the lines are similarly blurred—we discover the use of stories and images to help explain the findings of quantitative analysis and the use of quantitative adjectives to convey insights from qualitative analysis. This isn't terribly surprising: If there is "terror in numbers," as Darrell Huff wrote in How to Lie with Statistics, the tensions and nuances at the very human heart of qualitative data can also induce discomfort. But, just as the pictures (e.g., graphs) we draw to quell the disquietude of quant can exaggerate the story that the numbers tell, so too can the words we use to describe our qualitative findings be misleading. What is more important than policing the qualitative/quantitative boundary? It is being watchful for what the messiness around that boundary might signal—that there is a misalignment somewhere among the objectives in mind, the method deployed, the data gathered, and ultimately, the claims that are made.
There may be justifiable and even good reasons to ask for a show of hands in a focus group—for example, as a quick "pulse check", or to help warm up participants at the start of the discussion. But whether we think of our work as quant or qual—and whether we are thinking of our questions, our methods, or our claims in making that determination—let's be deliberate and mindful about the implications of actively inviting that messiness into the picture.
Aimée Caffrey is a cultural anthropologist and UX researcher. Since 2017, she has worked in the Advanced Analytics Group at Bain & Company, where she collaborates with consultants, developers, designers, and fellow researchers to help clients solve some of today’s most exciting business challenges. If you wish to get in touch, please email her at Aimee.Caffrey@Bain.com.
Posted By Kunyi Mangalam, Mara Consulting,
Tuesday, November 26, 2019
Use a Listening Session Approach for Better Design & Innovation Research
This blog is intended to offer a high-level description of a qualitative data collection method called Listening Sessions that can yield deeper understanding of people’s reasoning, emotions, and guiding principles. This approach is particularly valuable for practitioners of design research, and for those who contribute to the innovation process.
This approach was developed by Indi Young, who invented Mental Model Diagrams (MMDs) when she was one of the founding members of Adaptive Path. Find her at IndiYoung.com. When I interned with Indi, I learned this approach as part of her methodology. Adopting this approach made me a better interviewer for the Discovery work I do in Service Design.
Research for Design – it’s Not Designing a Calorie Tracking App, it’s How to Look Good at a Reunion
Research for design and innovation is tasked with understanding how people make decisions as they progress toward achieving a purpose. The “thing” that’s being designed, like an app, is always part of a larger goal. For example, I want to keep track of my calories by using a tracker on my phone. The thing being designed is the tracking app. But my larger purpose is to look good at a reunion. The app will help me in that goal.
For innovation, it’s not just about one thing (the app); it’s also about what else the company can do to support me in trying to look good at my reunion. This research surfaces needs that the company can then decide whether to pursue supporting. Revealing these needs, figuring out how to support them in a competitive way, and commercializing them is fundamental to innovation.
Change Your Mindset from Interviewing to Listening
Conducting qualitative research for design and innovation requires a shift in mind-set. It requires that you “listen” rather than “interview.” You may be thinking, “I have been listening to people for my entire career — I listen for a living!” That’s how I felt, too. Then I learned that an Interview is to a Listening Session as Moderating is to Facilitating. They look similar, but the intent, the process, and the outcomes are different.
There Are Two Critical Differences between an Interview and a Listening Session
First, Listening Sessions belong to Problem Space research; Interviewing tends to belong to Solution Space research.
Problem Space research is concerned with how a person (not a user) thinks and reasons their way toward achieving a goal. It is disconnected from a particular company, brand, product, or service. Problem Space research is foundational and can be used to fuel many solutions.
An easy way to think about Problem Space research is that the research focus would be just as relevant to someone your grandparent’s age as it would be to your grandchild’s. Examples include: How do you groom yourself for an important day at work? How do you decide to attend a performance? How do you prepare for a good night’s sleep? How do you make yourself look good to see people at an important get-together?
Solution Space research involves speaking to people about their relationship or an experience with a product, service, or brand — i.e., the solution. The words “users”, “members,” “customers,” and “employees” imply a relationship with a “solution.” Product development, marketing communications strategy and tactics, brand positioning, customer experience mapping, packaging, user experience, and content development are all examples of solution space research.
The table below summarizes the difference between Problem Space and Solution Space research.
The second difference is that you listen for and nudge people to reveal what is underneath a preference, opinion, explanation, or description. Compare this to a Solution Space IDI where we are often interviewing for Perceptions, Opinions, Behaviours, and Attitudes (shout out to Naomi Henderson for her acronym POBA) around a brand, a product, etc. In the Problem Space, when POBAs are articulated, we take them as our cue to nudge people further into their thinking, their feelings, or the “code” they live by to identify what is underneath.
These two differences translate into an enormous difference in approach between being a Listener and being an Interviewer, which plays out in three practical ways.
Difference 1: There is no Guide
In a Listening Session, there is no Interview Guide. The conversation begins with the study scope question, like, “Tell me about how you made sure you looked your best for your reunion…” And it continues from there. In this example, the participant may or may not have used a calorie counter app. They may have used one from a competitor. (Note that they would have been recruited because they prepared to look their best for their reunion.)
In place of a guide, there is a disciplined ear; the Listener nudges the participant upon hearing surface-level descriptions. The box to the left summarizes the types of surface descriptions that need nudging to get more depth. The box to the right provides some examples of question stems that will redirect participants to reveal their thinking, reasoning, and emotions.
Difference 2: The Conversation Is Free from Externally Introduced Topics
In many guides, there are questions that we — and our clients — want answered, like reactions and opinions about things participants have not brought up. In a Listening Session, nothing is introduced or queried that hasn’t already been mentioned.
Difference 3: Outputs Are the Starting Point for Innovation and Design
In the Solution space, research outputs are usually an “answer” of sorts: which product package should be produced? Which creative delivered the message most compellingly? Which call to action content resulted in the most conversion?
In the Problem Space, Listening Sessions identify people’s needs as they progress toward a goal or purpose. These needs are the starting point in the organization’s quest to figure out solutions (services, products, experiences) that better support people. Examples include: a website that is more reflective of their needs, a calorie tracking app that corresponds more closely to their purpose of looking good for others.
In other words, while qual in the Solution Space tends to supply the “answers” to problems, qual in the Problem Space tends to supply the “questions” that spur a company to explore one or more directions.
Try a Few Question Stems and See Where it Leads
Integrate a few question stems into your conversation. When a participant says, “Mostly we go to movies on Wednesday night…”, ask, “How did you figure out that works best for you?” When someone describes a statement of fact — like describing a scene — ask, “What’s going through your mind in that scenario?” to get them back to their own thinking and feelings.
For more than 30 years, Kunyi has helped organizations deeply understand the people they wish to serve, and has assisted them in using this understanding to make decisions and move forward with more certainty and less risk.
She is a senior consultant at Mara Consulting, working to help organizations improve service delivery through technology, privacy & security, business consulting, and human centered design.
LinkedIn: linkedin.com/in/kunyi
Posted By Mary Sorber,
Tuesday, November 12, 2019
"Selling" the Added Value of Qualitative UX Research
Quallies who want to expand their practice into UX research need to be aware of the different types of UX research and the varied terminology used in the UX field. Practicing UX researchers (UXRs), especially those not lucky enough to work within a company with a well-established UX research practice, should be adept at discussing how qualitative research adds value. This article is written for anyone who needs to “sell” the benefits of UX research, whether to generate new business or to convince internal colleagues.
How Qualitative UX Research Adds Value
As independent consultants, you probably already have a sales pitch in your back pocket describing the value of qualitative insights to the marketing organization. In UX research, you’ll be working most closely with product management and engineering. Talking to this different set of stakeholders means tweaking the terminology and emphasis. In working with product teams, the language that resonates is that of “bringing the outside-in perspective, mitigation of risk, and efficient use of valuable engineering resources.”
Three Categories of UX Research
UX research falls into three broad categories: exploratory, conceptual, and evaluative. Each contributes differently to the product and mitigates different risks. Exploratory — or generative — UX research (field visits, ethnography) addresses product-market fit and reduces the risk of building the wrong product for the wrong people. Conceptual UXR (iterative design research, usually with prototypes) reduces the risk of building the product in the wrong way and minimizes internal bias. Evaluative UXR (primarily usability) confirms user goals are met and reduces user adoption risk.
Disruptive vs. Incremental Research
It’s a mistake to think of UX research as synonymous with usability testing. UX research encompasses much more, and significant contributions come from other methods. Usability testing — as an evaluative method — can contribute only incremental improvements to the product. The engineering team is a fast-moving train on tracks that have already been laid. Usability testing may be able to paint the coach a new color, but it’s too late to route the tracks to a different destination. Disruptive innovation and product-market fit come from doing exploratory research in advance of the engineering effort.
Working as a UXR, your role will be to bring in the outside perspective, helping to develop and maintain focus on the user problem and delivering value to the user. This is sometimes a large effort in the face of pervasive and — at times — strongly held opinions about the perceived problem or the imagined value of a visionary solution. Developing a strong partnership with entrepreneurs and product leaders is key to grounding the vision and shaping it into something that will be successful in the world of the eventual users.
Being adept at explaining the different types of UX research and how they add value to the product team will help you sell your UX services. In terms of impact, focus first on exploratory methods, then conceptual methods. Usability testing may get you in the door, but treat it as a last resort, since your impact on the project will be severely constrained.
Mary Sorber is Founder and Principal Researcher at Practical Insights, a boutique qualitative research company engineered to ask the right questions to get answers and insights that are a springboard for innovation and improved user experience. We are happy to partner with quallies looking to break into the UX field.
Posted By Regina Szyszkiewicz, MA,
Tuesday, October 29, 2019
Meditation and the Art of Moderating
Handling Tough Moments
Imagine you are seated at the head of a conference table and you open a focus group discussion asking participants how they feel about their health insurance plan. And the first person who responds says she does not like her plan because she received poor care that led to amputation of her leg. And imagine you still have 90 minutes to go and you don’t want the group’s energy or the discussion to get derailed. What do you do?
This actually happened to me about 15 years ago. I was personally shocked and momentarily caught off-guard by the comment. But I managed to connect with the participant, acknowledging what she had just shared. I proceeded to uncover what others had to say — which included more typical responses of being happy or unhappy due to cost or access.
In the end, the group was successful and the client obtained the desired insights tied to the research objectives. I felt fortunate, because I knew it could have gone a different way.
We Are Only Human
As moderators, we have to keep our cool. We juggle many things during a live focus group or interview session: asking the questions, managing participation, keeping track of research objectives, watching the time, and more.
During sessions, we never know what will come up. Things might not go as planned: participants may arrive late, first-time client observers may want to add off-topic questions to a discussion guide that is already packed, someone in the group may become ill, etc. We need to be able to respond mindfully and wisely.
To ensure participants feel safe, secure, and comfortable, we personally also need to feel the same. When we have an off day, we need to find a way to get our energy centered and grounded.
Meditation to the Rescue
There are things we can do to become present with ourselves so that we can be present with others. Having a self-connecting routine or ritual prior to beginning an interview or focus group is a good start.
I find daily meditation and yoga practice to be an invaluable self-connecting training. I can more easily find the mental and emotional space to make wise choices in the moment. I am also better able to have compassion for others and myself.
In fact, I have found meditation and yoga to be so beneficial in my life and career that I became a certified yoga instructor 15 years ago! (I now teach a community yoga class on most Saturdays as a volunteer.)
Some self-connecting tools to explore:
Regina Szyszkiewicz, MA, of Ten People Talking loves qualitative research. She is a master moderator who has conducted over a thousand qualitative sessions. Regina has deep experience in both in-person and online qualitative methods. Regina received her MA from the University of Illinois in Applied Sociology / Market Research. She served on QRCA’s board from 2016-2018 and currently serves as a co-chair of QRCA’s Online Special Interest Group.
Posted By Katrina Noelle,
Tuesday, October 15, 2019
Use the CDJ Framework to Innovate Methods
Innovate your tools and methods by going on your own customer journey: become a customer on a journey through your own methods! To keep your approaches to understanding customers fresh and relevant, you should consider, evaluate, buy, enjoy, advocate, and bond with the methods you use to understand consumers.
The customer decision journey (CDJ) is a model that shows how customers complete a purchase, guiding marketers on where to show up and what to do along the way. Borrow this approach to go on your own journey to develop and choose new tools, techniques, and methodologies.
1. CONSIDER
Just as a consumer’s journey begins with a top-of-mind consideration set, yours begins with the methods and ideas already on your radar.
- Start by considering your needs. Why are you choosing to iterate an existing method or start offering a new one?
- Brainstorm with your team. Where are opportunities for improvement? What do team members want to try/experiment with?
- Make a list of all the contenders. Then walk through each of them, asking yourself:
- Is it answering a need? Filling a gap?
- Is it giving your team something new or unique?
- Can you explain succinctly the value proposition and point of difference as though you were in an elevator with a prospective client?
- Is anyone else doing it? Who? How? Could you offer it differently?
2. EVALUATE
- Test your ideas. While you do, constantly ask for feedback from your team, participants, and clients.
- Track iterations and updates. Chronicle changes made to the approach at every step because your ideas may morph, combine, or improve as you progress.
- Be open. Keep an open mind to the changes/modifications/new ideas along the way.
- Keep asking. Constantly query if the new/improved method is filling a need. Is it improving an older process or adding something new?
3. BUY OR CHOOSE
We’ve included “choose” in this traditional third step because when choosing a methodology, it’s often just that – a choice, a decision to move forward in a certain way – not a purchase.
- Note: this step is sometimes overlooked. After all this work, it’s hard to say “no” to an idea to which you’ve grown close. But keep in mind that rolling it out is an even bigger step than testing it.
- If you decide NOT to move forward, table it in a helpful way. Make note of learnings that could be used in a different format or could serve another purpose at some other time.
4. ENTER THE LOYALTY LOOP: Enjoy, Advocate and Bond
Take a moment to ENJOY your hard work; now is the time to advocate for your development with your broader organization and with clients. Try a pilot test with an understanding client, or ADVOCATE for the approach within your organization!
- Ground everyone. Establish the need you are trying to meet, the gap you are trying to fill, and/or your rationale for adding this approach.
- Bond. Bonding in this sense means that the team gets familiarized with the new approach and comes to see it as their own. Solicit feedback from participants about their experience. Ask clients how they are using the new approaches and what could be improved further.
- Engage. Ensure your team stays engaged, enjoys the experience, and gets the most out of the new methods as part of their continual evolution.
This post was inspired by a presentation entitled “Innovate Your Tools And Methods By Going On Your Own Customer Journey” at the CX Talks event in Chicago held on September 24, 2019.
Katrina is principal of KNow Research, a full-service insights consultancy that has spent 16+ years designing custom qualitative insights projects to unlock insights about brands and target audiences. She is also co-founder of Scoot Insights, whose trademarked Scoot™ Sprint approach helps decision-makers choose the right direction.
President, KNow Research, Co-Founder Scoot Insights
www.knowresearch.com / www.scootinsights.com
@kat_noelle / https://www.linkedin.com/in/katrinanoelle/
Posted By Mark Wheeler,
Wednesday, October 2, 2019
A book published earlier this year provides a nice toolkit for qualitative researchers and consultants looking for new ways to bring additional value to our work. Super Thinking, by authors Gabriel Weinberg and Lauren McCann, introduces and explains a large number of mental models that can be applied as tools to help us do our research and communicate our findings and recommendations with more depth and impact.
Mental models are essentially recurring concepts that can be used to help understand, explain, and predict things. They serve as shortcuts to higher-level thinking. Most mental models have solid supporting evidence behind them, but they are not widely known or formally taught in school.
Because most mental models are intuitive, they can be quickly explained to others and used to recognize and describe patterns in behavior. They are highly valuable in qualitative research because we continually observe and hear things that need to be communicated to our clients, sometimes at a higher level of explanation than what we heard directly. It is much easier to recognize and explain something when we have a solid label for what is going on.
There are literally hundreds of mental models in the book, drawn from a wide variety of fields, including philosophy, investing, statistics, the physical sciences, and economics. A list of several mental models is included in the accompanying table.
Applying Models in Research
In a recent marketing research project, I found a way to use one of Weinberg and McCann’s mental models to help communicate a key point to clients during a long day of in-person research. (Note: there is a lot of detail blinding in this example to ensure confidentiality.) The research was in support of a safer kind of post-surgical wound care that had been on the market for a few years. Some of the doctors in the research claimed that they hadn’t noticed fewer post-surgical complications since switching to the safer alternative, and some thought they may have seen even more complications. This was causing (and I am understating here) some confusion and concern in the back room. Fortunately, the situation brought to mind the mental model of moral hazard. Put simply, people take on more risk when they have information (in this case, the knowledge about the new wound-care therapy) that encourages them to believe they are being protected.
Discussion with clients about moral hazard helped us to put a label on what we were hearing and helped us understand and probe differently in later interviews. Even more important, we were ultimately able to use the learnings to generate new messaging about the wound care product to address the potential problem of moral hazard for both physicians and patients.
A lot of the useful mental models in the book come from the social and behavioral sciences. The concept of availability bias describes the fact that once we make an answer (or behavior) available in someone’s mind by drawing attention to it, the answer begins to seem more correct. It is an automatic effect and is nearly impossible to resist. Of course, we usually want to avoid availability bias when we moderate (i.e., no leading questions).
I often discuss this idea of availability bias with clients when writing guides or surveys, and the reaction is overwhelmingly positive – even when it leads to re-writing someone else’s question. Availability bias comes up in other situations, for example when composing messages for promotion. In these cases, the bias can become a bit more acceptable (e.g., “Doctor, tell me about how satisfied your patients have been after you have prescribed our drug?”).
The larger point behind these examples is that introducing clients to mental models such as moral hazard and availability bias helps to communicate relatively complex points in a simple way that wouldn’t be possible without using the terms. When discussing a particular mental model such as loss aversion before research, clients and other listeners then begin to recognize it when they hear it from respondents. It is also fair to think of mental models as “value-adds” for any moderators or consultants who are able to bring in new concepts to help their client achieve their objectives. I’ve found that introducing mental models relatively early in reports can help prepare clients for critical upcoming findings and conclusions.
It is well worthwhile to check out Super Thinking and discover which mental models can be most valuable to your business.
Mark A. Wheeler, PhD, is a qualitative researcher and consultant who applies his background in cognitive and behavioral science to help his clients achieve their goals. He is Principal of Wheeler Research LLC in Bryn Mawr, Pennsylvania.
Posted By Jennifer Dale,
Tuesday, September 17, 2019
Qual or Quant? Choosing the best method for your research study
Quantitative and qualitative research are both scientific methods for data collection and analysis. They can be applied alone, or in combination, to maximize insights.
The Basic Difference: Going Beyond What vs. Why
Quantitative research relies on large sample sizes to collect numerical data that can be mathematically analyzed for statistically significant findings. Surveys are structured, questions are typically closed-ended, and answer choices are fixed. However, quantitative research may also include a limited number of short-answer open-ended questions to help clarify why people responded the way they did to a closed-ended question. Eye tracking, facial coding, and even Big Data fall under the umbrella of quantitative research, with computers analyzing enormous volumes of data incredibly fast.
Quantitative studies produce numerical data, which allows for statistical analysis and ultimately precise findings. The US Census is a great example of a quantitative research study – fixed and closed-ended questions, an enormous sample size, a collective review of many respondents, and measured population segments.
In contrast, qualitative research seeks to understand the reasons behind the numbers, as well as what is not yet known. Sample sizes are smaller, questions are unstructured, and results more subjective. Unlike quantitative research, qualitative studies insert the researcher into the data collection process. The researcher probes responses and participants provide more detail. Qualitative data is collected through interviews, group discussions, diaries, personal observations, and a variety of other creative and ever-expanding means.
Qual studies work with textual and visual data, interpreted and analyzed for directional findings. Qualitative research studies include fluid and open-ended questions, a smaller sample size, an in-depth review of each respondent, and emerging themes.
I like to think of the difference visually, where a quant study collects specific data from a large number of people, and a qual study goes deeper to collect greater insights from a small number of people.
How to Choose
The answer to whether you proceed with quantitative or qualitative research lies in your research objective and available resources.
- Why you’re doing the research
- What you need to know
- Your budget, staff, + schedule
- How the findings will be used
Consider these possible scenarios the next time you’re stuck and don’t know which way to go:
Quant + qual can come together in other ways. A questionnaire with open-ended questions, while ultimately coded numerically, can offer a window into the unknown. Focus groups that also include poll questions or surveys can produce hard data when analyzed in total, even if the results are not statistically significant.
With good planning, quantitative and qualitative research come together like a dance, guiding the marketer’s success with every step.
I Say Hybrid, You Say Multimethod
Combining quantitative and qualitative research approaches is an ancient strategy, but the names continue to change with the times. I did a bit of research and found the following terms being used to describe that ideal combination of quantitative and qualitative research. What term do you use? And why? ;)
Jennifer Dale, President + CEO of InsideHeads, is a seasoned marketing professional and pioneer in online market research. Her passion for marketing, human behavior, and technology keeps InsideHeads on the short list of research providers for some of the world’s most discriminating clients. Jennifer is co-author of Qual-Online, The Essential Guide and has published a number of articles in VIEWS, Alert! and Quirk’s Marketing Research Review.