Posted By Arilene Hernandez, Independent Consultant/Behavioral Health Clinician,
Thursday, March 5, 2020
Updated: Tuesday, March 3, 2020
Annual Conference Reporter on the Scene:
Wise Ways to Go Forward with Humanity
Presenter: Naomi Henderson, RIVA Market Research & Training Institute
Summary of Conference Session
The 2020 QRCA Annual Conference gave all who attended the chance to hear from a plethora of talented and respected speakers, including a bonus keynote, the qualitative superstar herself, Naomi Henderson!
During her closing keynote presentation, “Wise Ways to Go Forward with Humanity,” Naomi gave a look into the story of her birth, a foreshadowing of the uniqueness she was to embody for the rest of her life. This uniqueness bleeds into her work today and led her down the path of training researchers in the art and science of rigorous qualitative research techniques. During her presentation, Naomi identified for the audience the four qualities that distinguish qualitative researchers, the main one being that “we are inspired to use those things that make us human to be the translators for those who are deaf to the voice of the consumer.”
Naomi’s metaphor of the back of the hand and the palm of the hand representing quant and qual research, respectively, was a fascinating take on how the two worlds of research interact and on why qualitative research is so important for clients seeking to understand their consumers.
Naomi’s presentation was a reminder that being human and connecting with other humans is what facilitates great qual research. She inspired the audience to remain creative and passionate, and to embrace humor.
QRCA Reporter on the Scene:
Arilene Hernandez, Independent Consultant/Behavioral Health Clinician
QRCA Annual Conference
QRCA Reporter on the Scene
| Comments (0)
Posted By Gayle Moberg,
Thursday, February 27, 2020
Updated: Thursday, February 27, 2020
Annual Conference Reporter on the Scene: Rise of the Robots, Chatbots, Humans!
Presenter: Kristin Luck, ScaleHouse and Women in Research (WIRe)
Kristin delivered the opening keynote of the 2020 QRCA Annual Conference, where the theme was “Keep Qual Human,” and many in attendance were excited to hear how we can marry advances in technology with our qualitative practices and instincts. Throughout the session, Kristin focused on what we've done right as researchers and, more importantly, what we can do better as our industry continues to change and grow.
Above all, Kristin highlighted that we need to be researchers, not just quallies; observers, not just questioners; consultants/synthesizers, not just analysts; storytellers, not just fact reporters; and that we need to engage with our respondents as humans, not robots.
Key Session Takeaways
I had many takeaways from Kristin’s keynote talk, the first being that we should all identify as "Researchers," not "Qual" or "Quant." We need to continue to grow our practice, which means avoiding older labels and instead focusing on what we all do: research.
Another impactful takeaway from Kristin’s session was that the market research industry is bigger than cloud services, coffee, and digital music. As Kristin put it, "data is the new oil." This means that “traditional" research is now threatened from two sides: clients' faster, lower-cost DIY research on one, and consulting firms' ability to move from [tactical] data collection to [strategic] holistic, synthesized recommendations that resonate with C-suites on the other. Since the C-suite has changed, so must we as researchers. It’s time to become Data Translators/Synthesizers, not just Data Collectors/Analyzers, and put on our "Research Mullets": business in front, party in the back!
Kristin’s quote "Data is the new oil” was EYE-OPENING! That's a BIG deal: it really conveys the value of what we do and offer.
Final Comments and Takeaways
We must stop pigeon-holing ourselves as Qual researchers! It’s time to think of ourselves as researchers, storytellers, synthesizers, observers, strategists who engage respondents as HUMANS on their terms, not ours. Let's all go forward to work as humans above all else!
QRCA Reporter on the Scene: Gayle Moberg, GDM Marketing Solutions
QRCA Annual Conference
QRCA Reporter on the Scene
| Comments (0)
Posted By Ted Kendall,
Tuesday, February 11, 2020
Updated: Tuesday, February 11, 2020
Adapting Your Listening Skills to the Online World
By: Ted Kendall
As a successful qually, you intuitively know the importance of listening, how to listen well, and how to show participants that you are listening.
Listening is important because it engenders trust, creates rapport, and opens participants up.
In a physical setting, the key things we do to listen, and to show we are listening, include:
- Asking questions in response to participants’ thoughts
- Using verbal and non-verbal cues to show that you are listening
- Letting participants complete their own sentences
- Maintaining eye contact
- Acknowledging comments in specific ways, like boarding them or capturing them on Post-it notes
You will have noticed that most involve physicality—you have to be there in real life.
So, how do you listen, and just as importantly, show you are listening, in online qual?
Before we get into this, let me clarify that when I am talking about online qual in this context, I am referring to text-based online qual—primarily bulletin board style. While webcam interviews may be considered online, real life listening skills can be applied to the medium fairly easily.
Set Expectations to Counter Online Research Misperceptions
A unique challenge with online qual is that participants don’t necessarily know the difference between a survey and a qualitative discussion, so they often treat the study as if it were a survey. And they often believe that any interactions will be with a chatbot, not a real person.
It’s critical to counter these widely held beliefs and set the appropriate expectations up front. Tell participants you will be listening to what they say. And let them know it’s not a survey; it’s a conversation.
I can sometimes be pretty blunt about this—even going so far as to tell participants that if they just speed through the answers to my questions, they will not get the incentive. And then, when someone does that, I follow through on the promise and call them on it. Often it changes their interactions. Sometimes it doesn’t. But they definitely know you are listening. And, if the discussion is open to the whole group, others will see that you are listening as well.
Depending on the platform, you can use the messaging tools as well as the landing pages to accomplish this. And if the tools aren’t there, just use email or text, even phone, outside of the application.
I also make it a habit to reply to every participant post in the introductions—much like I do in a traditional focus group setting, or for that matter, in a conversation with a stranger. These replies can often reflect common ground: “I love spending the day in the mountains with my dog too. What kind of dog do you have?” That’s not a question that will provide rich insights, but it will help open up the participant and show that you are listening.
It’s critical to establish early in the conversation that you are a living, breathing, listening human being—not some chatbot or AI ghost in the machine. This has a huge impact on how participants approach your conversation.
Avoid AI Tools
Several online platform providers are touting AI generated responses to participants. All I can say is that this is what we get when we let the programmers drive development. Avoid this feature. Yes, it saves you time during the discussion. But it also removes you from the conversation—you are no longer actively listening. You wouldn’t let a robot take over your focus group session just to save time, would you?
Also, AI is not yet perfect. And it needs to be in this case. It’s not a life or death situation, unless you consider the life or death of the research conversation. Even if the AI gets 90% of the interactions correct, there is that 10% that will suck the air right out of your conversation with that participant. In a group setting, other participants will see the mistake, and the negative impact compounds.
So just don’t do it. The potential losses greatly outweigh the potential time savings. Besides, actually responding manually forces you to listen and learn—which is what this is all about. Don’t let a robot take your job.
How to Digitally Use “Non-verbal” Cues and Maintain Eye Contact
In the online, text-based world, you certainly can’t maintain eye contact, nor can you provide non-verbal cues to show you are listening. So how do you employ those key principles of listening in an online, text-based world?
Probably the most obvious way is replying to participants’ posts with questions to better understand what they have said or get some clarification on their comment. Yes, I am talking about the same probing questions we lay on participants in focus groups and interviews. These probing questions work just as well online as they do in real life.
To replace those non-verbal cues, I have found it quite effective to comment or ask questions even when there is no need to do so. The idea is that by just saying something, participants recognize that you are there and you are reading what they are posting—you are listening.
Sometimes it is easy to just copy and paste the same general comment to several participants when you do this. If the participants can’t see one another, this is fine and saves you time. But if the participants can see each other, then it just makes you look like a robot.
When making comments simply to show your presence, it’s important not to require a reply (many platforms offer this as an option). I like to thank people for providing quality detail, or for an interesting take on the topic. The important thing is to personalize it a bit, to keep it from sounding generic.
Another way to show you are listening is to use the messaging app within the platform to hold meta conversations outside the actual discussion. I make it a point to send reminders at set times as well as thank-yous at the end of the day of discussion.
These messages don’t have to be just logistical in nature. You can also use them to show you are listening. Sometimes I will include a comment about some of the discussion—an insight that came through for the whole group of participants, or sometimes personalizing it to a specific participant.
In the end, listening is important to successful qual, whether you are in the same room as the participant or interacting digitally. It’s just how you listen, and how you show that you are listening, that can take a little adjustment in the digital qual world. But it’s no less important and no less doable.
Ted Kendall is the founder of TripleScoop, a boutique research agency that focuses on online qualitative. Ted got to this place in his career by being in the right place at the right time to pioneer early online methods. He was a co-founder of QualTalk, which became 20/20 Research’s QualBoards. He learned how to moderate online qual through trial and error and has moderated hundreds of online qual discussions and interviews since that first one in 1997. And he is usually a good listener.
| Comments (0)
Posted By Tom Rich,
Tuesday, January 14, 2020
Looking Back – A Year of Change in the World of Qualitative
By: Tom Rich
At this time in the new year, it feels right to look back and see what seems to be different. It certainly has been a time of change in the world of qualitative research. I’ve spent the last couple of weeks communicating with some qualitative luminaries to get their perspectives on developing trends over the past year. Based on those conversations, I think a number of key developments are worthy of mention.
The Tool Bag is Growing
The continued expansion of the tools available for qualitative research is profoundly changing what qualitative is, and how it’s done. A dizzying array of tools and resources can be applied to qualitative. The online research platforms continue to grow in both features and sophistication, use of AI-based tools is growing, video editing software has become easier to use, and biometric tools continue to grow more affordable and user friendly. As a result, we can now provide insights to clients that are based upon more than mere conversation—we can bring multiple data sources to bear on our analysis.
The Lines are Blurring
As more tools become available, and as they become increasingly affordable and user friendly, clients are increasingly interested in mixed method studies. This doesn’t just mean qual/quant approaches; it’s no longer acceptable for qualitative practitioners to say, “I don’t do quant.” It means a greater merging of online and face-to-face approaches as well, as UX and more traditional qualitative studies are starting to look very similar.
Agility is the Word of the Moment
Clients are demanding compressed timetables. Practitioners are increasingly completing research in four or five business days. Clients’ priorities also seem to be changing. Whereas the question used to be, “What will it take to get exactly the information we need?”, it now seems to be, “We have a week—what can we learn in that time?” As a result, new approaches to recruiting and fieldwork are becoming more common.
Qualitative Approaches Are Becoming More Important to Understanding Big Data
Research users continue to see the value in using qualitative tools to understand all the data they have available. More and more seem to understand that there is a big difference between information and wisdom, and that actually talking to people—while it doesn’t yield quantifiable information—provides a level of detail and insight that can be acquired no other way.
If you feel like the ground is shifting beneath your feet, you’re not wrong. Whether you’re a researcher or a research user, staying current on new tools, techniques, and priorities is essential to survival and success.
I’d like to extend my thanks to these qualitative luminaries for their perspective on developing trends: David Bauer, Jim Kulevich, Abby Leafe, Joanna Patterson, Steve Schlesinger, and Manny Schrager.
Since founding his business, Thomas M. Rich & Associates, in 1996, Tom Rich has conducted thousands of focus groups, one-on-one interviews and online interactions for clients in nearly every industry. He boasts an extensive background in brand strategy, consumer behavior and shopper insights—skills he developed while working for companies that include Backer & Spielvogel Advertising, Nabisco, Tambrands, and Unilever. This background gives Tom a unique skill set among qualitative practitioners and allows him to structure research and analysis around the tactical and strategic decisions that will be made as a result of the research. Tom holds a bachelor's degree in English from the University of Pennsylvania and a master's degree in business administration from the Amos Tuck School of Business Administration at Dartmouth College.
a year of change
| Comments (2)
Posted By Aimée Caffrey,
Tuesday, December 10, 2019
Updated: Monday, December 9, 2019
Practical Messiness Masked by the Qualitative and Quantitative Distinction
By: Aimée Caffrey
This blog post discusses the practical messiness that can be masked by the qualitative/quantitative distinction and offers an approach for thinking about and dealing with that messiness.
Like many anthropologists, I have an abiding interest in the ways in which people construct and reproduce boundaries. During my doctoral work, my primary focus was on boundaries such as ethnicity, caste, and nationality. The professional path I have taken in more recent years has in part shifted my attention toward boundaries of another variety—the boundaries that demarcate scientific knowledge practices in industry, and toward a particular boundary with which the readers of this blog are already quite familiar—that between quantitative and qualitative. In my present role, I conduct and help support research that by most definitions would count as qualitative. At the same time, this work almost always feeds into, or follows on the heels of, research that by most definitions is quantitative. It might entail using IDIs, focus groups, or journaling exercises to better understand terminology or relevant dimensions of experience prior to writing a survey. At the other end of things, it might entail using these data collection formats in an effort to make sense of survey findings—when we have discovered the what but are uncertain of the why.
Working at this intersection instills a perhaps exaggerated awareness of, and sensitivity to, the risks of accepting the quant/qual boundary at face value. Like others of its type, this distinction is a productive shorthand for organizing and talking about a variety of practices; however, it can mask the messiness of reality. A very experienced industry researcher gestured toward this messiness on a recent L&E webinar when he remarked on the "under-powered quant" that can be at work when focus group moderators ask for a show of hands. Alternatively, consider that many of what are generally marketed as mobile ethnography or online qual tools often contain what we otherwise think of as quantitative question types (e.g., multiple choice). To offer another example, I regularly assist fellow researchers with the development of interview and focus group discussion guides, and often this assistance centers in part on rephrasing "how much" (i.e., quantitative) kinds of questions to help us make sure we are in fact collecting qualitative data.
These examples of the messiness relate to a tension between the method deployed and the data gathered. When we think of the boundary between qualitative and quantitative as pertaining to a (reporting) distinction between numbers and words, the lines are similarly blurred—we discover the use of stories and images to help explain the findings of quantitative analysis and the use of quantitative adjectives to convey insights from qualitative analysis. This isn't terribly surprising: If there is "terror in numbers," as Darrell Huff wrote in How to Lie with Statistics, the tensions and nuances at the very human heart of qualitative data can also induce discomfort. But, just as the pictures (e.g., graphs) we draw to quell the disquietude of quant can exaggerate the story that the numbers tell, so too can the words we use to describe our qualitative findings be misleading. More important than policing the qualitative/quantitative boundary is being watchful for what the messiness around it might signal: a misalignment somewhere among the objectives in mind, the method deployed, the data gathered, and ultimately, the claims that are made.
There may be justifiable and even good reasons to ask for a show of hands in a focus group—for example, as a quick "pulse check", or to help warm up participants at the start of the discussion. But whether we think of our work as quant or qual—and whether we are thinking of our questions, our methods, or our claims in making that determination—let's be deliberate and mindful about the implications of actively inviting that messiness into the picture.
Aimée Caffrey is a cultural anthropologist and UX researcher. Since 2017, she has worked in the Advanced Analytics Group at Bain & Company, where she collaborates with consultants, developers, designers, and fellow researchers to help clients solve some of today’s most exciting business challenges. If you wish to get in touch, please email her at Aimee.Caffrey@Bain.com.
| Comments (4)
Posted By Kunyi Mangalam, Mara Consulting,
Tuesday, November 26, 2019
Use a Listening Session Approach for Better Design & Innovation Research
This blog is intended to offer a high-level description of a qualitative data collection method called Listening Sessions that can yield deeper understanding of people’s reasoning, emotions, and guiding principles. This approach is particularly valuable for practitioners of design research, and for those who contribute to the innovation process.
This approach was developed by Indi Young, who invented Mental Model Diagrams (MMDs) when she was one of the founding members of Adaptive Path. Find her at IndiYoung.com. When I interned with Indi, I learned this approach as part of her methodology. Adopting this approach made me a better interviewer for the Discovery work I do in Service Design.
Research for Design – it’s Not Designing a Calorie Tracking App, it’s How to Look Good at a Reunion
Research for design and innovation is tasked with understanding how people make decisions as they progress toward achieving a purpose. The “thing” that’s being designed, like an app, is always part of a larger goal. For example, I want to keep track of my calories by using a tracker on my phone. The thing being designed is the tracking app. But my larger purpose is to look good at a reunion. The app will help me in that goal.
For innovation, it’s not just about one thing (the app); it’s also about what else the company can do to support me in trying to look good at my reunion. This surfaces needs that the company can then decide whether to pursue supporting. Revealing these needs, figuring out how to support them in a competitive way, and commercializing them is fundamental to innovation.
Change Your Mindset from Interviewing to Listening
Conducting qualitative research for design and innovation requires a shift in mind-set. It requires that you “listen” rather than “interview.” You may be thinking, “I have been listening to people for my entire career — I listen for a living!” That’s how I felt, too. Then I learned that an Interview is to a Listening Session as Moderating is to Facilitating. They look similar, but the intent, the process, and the outcomes are different.
There Are Two Critical Differences between an Interview and a Listening Session
First, Listening Sessions belong to Problem Space research; Interviewing tends to belong to Solution Space research.
Problem Space research is concerned with how a person (not a user) thinks and reasons their way toward a goal. It is disconnected from a particular company, brand, product, or service. Problem Space research is foundational and can be used to fuel many solutions.
An easy way to think about Problem Space research is that the research focus would be just as relevant to someone your grandparent’s age as it would be to your grandchild’s. Examples include: How do you groom yourself for an important day at work? How do you decide to attend a performance? How do you prepare for a good night’s sleep? How do you make yourself look good to see people at an important get-together?
Solution Space research involves speaking to people about their relationship or an experience with a product, service, or brand — i.e., the solution. The words “users”, “members,” “customers,” and “employees” imply a relationship with a “solution.” Product development, marketing communications strategy and tactics, brand positioning, customer experience mapping, packaging, user experience, and content development are all examples of solution space research.
The table below summarizes the difference between Problem Space and Solution Space research.
The second difference is that you listen for and nudge people to reveal what is underneath a preference, opinion, explanation, or description. Compare this to a Solution Space IDI where we are often interviewing for Perceptions, Opinions, Behaviours, and Attitudes (shout out to Naomi Henderson for her acronym POBA) around a brand, a product, etc. In the Problem Space, when POBAs are articulated, we take them as our cue to nudge people further into their thinking, their feelings, or the “code” they live by to identify what is underneath.
These two distinctions make an enormous difference between the approach of a Listener and that of an Interviewer; in practice, this plays out in three ways.
Difference 1: There is no Guide
In a Listening Session, there is no Interview Guide. The conversation begins with the study scope question, like, “Tell me about how you made sure you looked your best for your reunion…” And it continues from there. In this example, the participant may or may not have used a calorie counter app. They may have used one from a competitor. (Note, they would have been recruited such that they prepared to look their best for their reunion.)
In place of a guide, there is a disciplined ear; the Listener nudges the participant when they hear more surface descriptions. The box to the left summarizes different types of surface descriptions that need nudging to get more depth. The box to the right provides some examples of question stems that will redirect participants to reveal their thinking, reasoning, and emotions.
Difference 2: The Conversation Is Free from Externally Introduced Topics
In many guides, there are questions that we — and our clients — want answered, like reactions and opinions about things participants have not brought up. In a Listening Session, nothing is introduced or queried that hasn’t already been mentioned.
Difference 3: Outputs Are the Starting Point for Innovation and Design
In the Solution Space, research outputs are usually an “answer” of sorts: Which product package should be produced? Which creative delivered the message most compellingly? Which call-to-action content resulted in the most conversions?
In the Problem Space, Listening Sessions identify people’s needs as they progress toward a goal or purpose. These needs are the starting point in the organization’s quest to figure out solutions (services, products, experiences) that better support people. Examples include: a website that is more reflective of their needs, a calorie tracking app that corresponds more closely to their purpose of looking good for others.
In other words, while qual in the Solution Space tends to supply the “answers” to problems, qual in the Problem Space tends to supply the “questions” that spur a company to explore one or more directions.
Try a Few Question Stems and See Where it Leads
Integrate a few question stems into your conversation. When a participant says, “Mostly we go to movies on Wednesday night…”, ask, “How did you figure out that works best for you?” When someone offers a statement of fact, like describing a scene, ask, “What’s going through your mind in that scenario?” to bring them back to their own thinking and feelings.
For more than 30 years, Kunyi has helped organizations deeply understand the people they wish to serve and has assisted them in using this understanding to make decisions and move forward with more certainty and less risk.
She is a senior consultant at Mara Consulting, working to help organizations improve service delivery through technology, privacy & security, business consulting, and human centered design.
LinkedIn: linkedin.com/in/kunyi
| Comments (1)
Posted By Mark Wheeler,
Wednesday, October 2, 2019
Photo by Jo Szczepanska on Unsplash
A book published earlier this year provides a nice toolkit for qualitative researchers and consultants looking for new ways to bring additional value to our work. Super Thinking, by authors Gabriel Weinberg and Lauren McCann, introduces and explains a large number of mental models that can be applied as tools to help us do our research and communicate our findings and recommendations with more depth and impact.
A mental model is essentially a recurring concept that can be used to help understand, explain, and predict things. Mental models serve as shortcuts to higher-level thinking. Most have solid supporting evidence behind them but are not widely known or formally taught to everyone in school.
Because most mental models are intuitive, they can be quickly explained to others and used to recognize and describe patterns in behavior. They are highly valuable in qualitative research because we continually observe and hear things that need to be communicated to our clients—sometimes in ways that offer a higher level of explanation than what we literally heard. It is much easier to recognize and explain something when we have a solid label for what is going on.
There are literally hundreds of mental models in the book, drawn from a wide variety of fields, including philosophy, investing, statistics, physics and physical science, and economics. A list of several mental models is included in the accompanying table.
Applying Models in Research
In a recent marketing research project, I found a way to use one of Weinberg and McCann’s mental models to help communicate a key point to clients during a long day of in-person research. (Note: there will be a lot of detail blinding in this example to ensure confidentiality.) The research was in support of a safer kind of post-surgical wound care that had been on the market for a few years. Some of the doctors in the research claimed that they hadn’t noticed fewer post-surgical complications since switching to the safer alternative, and some thought they may have seen even more complications. This was causing (and I am understating here) some confusion and concern in the back room. Fortunately, the situation brought to mind the mental model of a moral hazard. Put simply, people take on more risk when something (in this case, the knowledge of the new wound-care therapy) encourages them to believe they are protected from the consequences.
Discussion with clients about moral hazard helped us to put a label on what we were hearing and helped us understand and probe differently in later interviews. Even more important, we were ultimately able to use the learnings to generate new messaging about the wound care product to address the potential problem of moral hazard for both physicians and patients.
A lot of the useful mental models in the book come from the social and behavioral sciences. The concept of availability bias describes the fact that once we make an answer (or behavior) available in someone’s mind by drawing attention to it, the answer begins to seem more correct. It is an automatic effect and is nearly impossible to resist. Of course, we usually want to avoid availability bias when we moderate (i.e., no leading questions).
I often discuss this idea of availability bias with clients when writing guides or surveys, and the reaction is overwhelmingly positive – even when it leads to re-writing someone else’s question. Availability bias comes up in other situations, for example when composing messages for promotion. In these cases, the bias can become a bit more acceptable (e.g., “Doctor, tell me about how satisfied your patients have been after you have prescribed our drug?”).
The larger point behind these examples is that introducing clients to mental models such as moral hazard and availability bias helps to communicate relatively complex points in a simple way that wouldn’t be possible without using the terms. When discussing a particular mental model such as loss aversion before research, clients and other listeners then begin to recognize it when they hear it from respondents. It is also fair to think of mental models as “value-adds” for any moderators or consultants who are able to bring in new concepts to help their client achieve their objectives. I’ve found that introducing mental models relatively early in reports can help prepare clients for critical upcoming findings and conclusions.
It is well worth checking out Super Thinking and discovering which mental models can be most valuable to your business.
Mark A. Wheeler, PhD, is a qualitative researcher and consultant who applies his background in cognitive and behavioral science to help his clients achieve their goals. He is Principal of Wheeler Research LLC in Bryn Mawr, Pennsylvania.
| Comments (4)
Posted By Jennifer Dale,
Tuesday, September 17, 2019
Qual or Quant? Choosing the best method for your research study
Quantitative and qualitative research are both scientific methods for data collection and analysis. They can be applied alone, or in combination, to maximize insights.
The Basic Difference: Going Beyond What vs. Why
Quantitative research relies on large sample sizes to collect numerical data that can be mathematically analyzed for statistically significant findings. Surveys are structured, questions are typically closed-ended, and answer choices are fixed. However, quantitative research may also include a limited number of short-answer open-ended questions to help clarify why people responded the way they did to a closed-ended question. Eye tracking, facial coding, and even Big Data fall under the umbrella of quantitative research, with computers analyzing enormous volumes of data incredibly fast.
Quantitative studies produce numerical data, which allows for statistical analysis and ultimately precise findings. The US Census is a great example of a quantitative research study – fixed, closed-ended questions, an enormous sample size, a collective review of many respondents, and measured population segments.
In contrast, qualitative research seeks to understand the reasons behind the numbers, as well as what is not yet known. Sample sizes are smaller, questions are unstructured, and results more subjective. Unlike quantitative research, qualitative studies insert the researcher into the data collection process. The researcher probes responses and participants provide more detail. Qualitative data is collected through interviews, group discussions, diaries, personal observations, and a variety of other creative and ever-expanding means.
Qual studies work with textual and visual data, interpreted and analyzed for directional findings. Qualitative research studies include fluid and open-ended questions, a smaller sample size, an in-depth review of each respondent, and emerging themes.
I like to think of the difference visually, where a quant study collects specific data from a large number of people, and a qual study goes deeper to collect greater insights from a small number of people.
How to Choose
The answer to whether you proceed with quantitative or qualitative research lies in your research objective and available resources.
- Why you’re doing the research
- What you need to know
- Your budget, staff, + schedule
- How the findings will be used
Consider these possible scenarios the next time you’re stuck and don’t know which way to go:
Quant + qual can come together in other ways. A questionnaire with open-ended questions, while ultimately coded numerically, can offer a window into the unknown. Focus groups that also include poll questions or surveys can produce hard data when analyzed in total, even if the results are not statistically significant.
With good planning, quantitative and qualitative research come together like a dance, guiding the marketer’s success with every step.
I Say Hybrid, You Say Multimethod
Combining quantitative and qualitative research approaches is an ancient strategy, but the names continue to change with the times. I did a bit of research and found the following terms being used to describe that ideal combination of quantitative and qualitative research. What term do you use? And why? ;)
Jennifer Dale, President + CEO of InsideHeads, is a seasoned marketing professional and pioneer in online market research. Her passion for marketing, human behavior, and technology keeps InsideHeads on the short list of research providers for some of the world’s most discriminating clients. Jennifer is co-author of Qual-Online, The Essential Guide and has published a number of articles in VIEWS, Alert! and Quirk’s Marketing Research Review.
| Comments (3)
Posted By Katye Hamilton,
Tuesday, August 6, 2019
Updated: Tuesday, August 6, 2019
Qualitative Research 101 – A Guide to the Basics of Qual
Are you new to qualitative research or want a refresher on the different styles of group discussions that typically encompass qual research? While the topics you explore in each session will vary widely, there’s a basic group structure to take into consideration before you start building your discussion guide. First, decide if your research objectives need face-to-face (F2F) solutions or if an online approach will work.
For the best group dynamics, the ideal group size is 4-6 participants. Any larger and you won’t be able to hear from each participant as often or dive deep into the conversation with everyone involved. The discussion is led by a moderator, and you may also see an assistant or dual moderator in the room. The moderator(s) lead the group from topic to topic and encourage everyone to contribute.
Standard focus group rooms have a one-way mirror for clients to observe the session in real time. Photo courtesy of Issues & Answers and their Virginia Beach Facility.
Dyads and Triads
These are groups with only two or three participants, respectively, plus the moderator. Maybe it’s a physician, patient, and caregiver doing an appointment mock-up. Or maybe you want a focused discussion with just a few consumers, such as three pet parents, each with a pet that has a specific dietary need. The conversation is likely to be less exploratory and more focused, so you can dive into the details quicker. Dyads and triads are great when the session involves observing a task, like website navigation or roleplaying situations.
In-Depth Interview (IDI)
A true one-on-one interview involves a moderator + respondent. The power of an IDI usually stems from the research topic at hand. Is it a sensitive subject like health care, death, or finances? Or maybe it’s understanding a person’s journey – purchasing process, behavior, etc. Isolating the respondent helps promote a feeling of safety in the conversation and creates an opportunity to explore subjects more deeply.
All three of these session types can be executed in a research facility, off-site with cameras for recording, or online with a focus group vendor. Most clients want to see and hear the conversations in real time, so they watch from what we call the “back room” which may be a physical room at a research facility or off-site, or in a virtual back room with an online provider.
There are some innovative focus group spaces that shake up the traditional round table/conference room set-up with more relaxed or on-topic scenes. Check out Good Run Research & Recreation; they have formal living room and bar room setups (still with the one-way mirror and complete back room experience for clients) to amplify the respondent and moderator discussions.
Workshops and Co-Creation Sessions
These go by a lot of names (workshop, co-creation, etc.), but the premise is pretty similar across the board: sessions where you bring multiple groups of people into a room together. When you have a client that wants to be highly engaged with the process, and not just an observer, you may want to tap into these models. These could be:
- An internal workshop with employees from multiple departments (stakeholders) and you as the moderator facilitate the group activities and conversation.
- A session where you mingle clients with the respondents for brainstorming, ideation, new product development, etc. Clients would likely be spread out among the respondent tables so they can engage directly as well as learn firsthand their experiences and ideas.
Ethnographies
Marketing research ethnographies are never “hands-off.” In the academic space, a true ethnography would involve little to no engagement with the person or people you are observing; you’re meant to do just that – observe. In marketing research, we believe in the power of observation plus asking questions.
Ethnographies in MR can come in the form of in-home interviews, shop-alongs where you meet a respondent at a specified location and track their buying process, or even on-location research. The purpose is to get the respondent in a natural environment, rather than a traditional focus group setting. This is helpful when you need fewer recall answers and more in-the-moment engagement.
Other F2F Types
The list above is not meant to be exhaustive; in-person intercepts and telephone interviews can be important for your qualitative research, depending on the objectives. Is there another form of F2F that I missed? Tell me about your methodology in the comments!
Online Qual
Online qual solutions have expanded tremendously in terms of vendors, programs, platforms, and the types of research executions available – from desktop applications to mobile phone apps. It’s important to consider online styles when your client has a limited budget or a tight timeline that limits your travel opportunities.
Methods vary, from text-based surveys with automated Q&A to mobile apps that track respondents’ phone patterns (the apps they open, the websites they check, etc.). Sometimes it’s important to engage and observe in a respondent’s natural habitat – their mobile device. Maybe you need to geo-ping respondents when they’re near a certain location and have them collect photos for a study.
Communities vs. Online Bulletin Boards (OLBB)
For some, online communities are virtual hubs for long-term or continual engagement. The online community acts as a “panel” of ready respondents for your ongoing topic.
Shorter engagements are also sometimes called communities, or online bulletin boards (OLBBs). These could be as short as 2-3 days with a few dozen respondents, with multiple engagement activities: photo collages, Q&A, group discussions, etc. OLBBs may be less flashy and more of a straight discussion thread. There could be engagement through liking and commenting on others’ posts, but the conversation itself is pretty straightforward.
Semantics aside, this type of online qual is still moderated! The moderator writes the discussion guide and engages with respondents in the platform – through probing questions, video chats, or private messages – to promote responsiveness, draw out details, and pursue any follow-up questions that arise. These methods tend to generate more respondent engagement than a one-time, fixed setting, and they give respondents flexibility in how they participate, since many platforms are mobile enabled.
InsightsNow’s Clean Label Enthusiasts™, running since the fall of 2017, is an online community that offers ongoing insights into a range of topics, providing a highly flexible research solution for quick answers.
Just like the F2F group types above, the focus group experience can be translated into a digital medium. Multiple vendors allow moderators to share their screens and stimuli, and provide group chat, individual webcams, and a client view. Doing online groups this way helps alleviate travel pains but usually requires more technology-adept consumers – something to consider if that may change your recruit.
I’m specifically leaving out the topic of surveys for this blog post! While some surveys can be qualitative in nature, most of the time they still fall into the realm of quant. Qual derives part of its value from the moderated content — something we haven’t been able to solve fully in the survey space.
I hope you either learned something new with this post or gained fresh inspiration for a project you’re working on. Tell us about your qual methods in the comments!
About the Author:
Katye Hamilton is a hybrid marketing researcher with a passion for solving complex client problems. She’s got a knack for sorting out the details while maintaining project integrity. In her free time (ha!) you will find her spending time with her dog Muffin, traveling the states, or volunteering.
| Comments (6)
Posted By Isabel Aneyba,
Tuesday, June 25, 2019
Updated: Monday, June 24, 2019
Let’s Work Together: The Consumer Co-Creation Camp
While focus groups have long been a part of the innovation process, many clients have voiced their frustration about the limitations of traditional focus groups. To respond to this and other client needs, we created a methodology called Consumer Co-creation Camp. It is designed to expedite the research process while making it fun, and to provide a more direct connection between the client and consumers.
We had a client that decided it was time for his company to start an innovation process. This is how he requested the research: “I do not want boring focus groups, I want a fun process like a reality show, where we are looking to discover new things. I do not want to listen to top-of-mind responses, I want a deeper understanding. We want to achieve a year’s worth of research in one comprehensive study: understand the target, create product/brand concepts and evaluate those concepts.”
To address this client’s broad request, we facilitated three groups simultaneously in three days to create products and brands with consumers. This process involved multiple stakeholders: the client team, the advertising agency and the consumers. We called this engaging process: The Consumer Co-creation Camp.
At the end of the fieldwork, the client stated: “We clearly know what we need to know to make this product a success in the marketplace.” How did this project provide such clarity and confidence to the client team and agency? In my view, it was the co-creation of compelling consumer-ready ideas. Three successive stages led them there:
We wanted the participants to get to know one another first, so we asked the Millennial participants to introduce themselves using a collage they had created prior to the Camp. This set the stage: the process was about the Millennials and about being together. They felt appreciated as they found new friends and were free to use their own colloquial language.
During this process, our clients moved from feeling “I want to hear this and that” to “These people are interesting” to “This is going to be big”. There was a perception shift because it was the first time clients had a chance to see how these Millennials saw themselves.
Millennials created new concepts after testing the product. Collages helped participants articulate their feelings, because many times participants do not know how to put their feelings and emotions into words. The collage was both a springboard and a great equalizer, giving everyone the opportunity to adapt the product and the brand to themselves. Our clients witnessed how the brand concepts matched Millennials’ needs and personal styles.
This stage motivated the clients the most. The Millennials presented their ideas directly to them, in the same room. The client team and Millennial teams had a vigorous conversation. There was ‘one voice in the room’. Consumers and clients worked in tandem focused on the unifying goal, with no barriers, mirrors or attitudes. After the final presentation, all the clients knew what the final output of the research was!
At the end of the process, three key outcomes emerged that would significantly impact product management, the brand vision, and consumer engagement.
Product Management. The global R&D and Marketing team became aligned and felt empowered to make necessary product and packaging changes.
Brand Vision. The client and ad agency gained a deeper understanding of Millennials, their needs, and shared this with the entire corporation. This understanding inspired them to create a new brand vision.
Engagement. The marketing teams learned how Millennials made friends, and this insight helped them to better engage with this target – utilizing a relevant marketing platform.
Even after the camp, the participants’ ideas were referred to constantly by the clients and the agency. The vividness of the experience made for crisper memories. The co-creation experience anchored the clients’ understanding of this target audience through a human connection. It was clear how the Co-Creation Camp streamlined the research process and, in the end, saved the client money and time while enhancing their understanding.
Do you believe your corporate clients would value working together with the consumers in a fun, engaging process that yields high quality insights and speedier outcomes?
If so, how can you streamline your next research project to generate compelling consumer-ready ideas? Consumer Co-creation Camp is a great alternative. Our experience has shown that, when empowered and enabled by the research process, Millennials and clients are happy to embrace the challenge of creating new products and services.
Isabel Aneyba is president and chief insight generator of COMARKA, an Austin, Texas research firm. COMARKA empowers marketers to develop meaningful product and brand ideas with their customers through dialogue. www.comarka.com
| Comments (0)