Posted By Ted Kendall,
Tuesday, February 11, 2020
Updated: Tuesday, February 11, 2020
Adapting Your Listening Skills to the Online World
By: Ted Kendall
As a successful qually, you intuitively know the importance of listening, how to listen well, and how to show participants that you are listening.
Listening is important because it engenders trust, creates rapport, and opens participants up.
In a physical setting, the key things we do to listen, and to show we are listening, include:
- Asking questions in response to participants’ thoughts
- Using verbal and non-verbal cues to show that you are listening
- Letting participants complete their own sentences
- Maintaining eye contact
- Acknowledging comments in specific ways like boarding or post-it notes
You will have noticed that most involve physicality—you have to be there in real life.
So, how do you listen, and just as importantly, show you are listening, in online qual?
Before we get into this, let me clarify that when I am talking about online qual in this context, I am referring to text-based online qual—primarily bulletin board style. While webcam interviews may be considered online, real life listening skills can be applied to the medium fairly easily.
Set Expectations to Counter Online Research Misperceptions
A unique challenge with online qual is that participants don’t necessarily know the difference between a survey and a qualitative discussion, so they often treat the study as if it were a survey. And they often believe that any interactions will be with a chatbot, not a real person.
It’s critical to counter these widely held beliefs and set the appropriate expectations up front. Tell participants you are listening to what they will say. And let them know it’s not a survey—it’s a conversation.
I can sometimes be pretty blunt about this—even going so far as to tell participants that if they just speed through the answers to my questions, they will not get the incentive. And then, when someone does that, I follow through on the promise and call them on it. Often it changes their interactions. Sometimes it doesn’t. But they definitely know you are listening. And, if the discussion is open to the whole group, others will see that you are listening as well.
Depending on the platform, you can use the messaging tools as well as the landing pages to accomplish this. And if the tools aren’t there, just use email or text, even phone, outside of the application.
I also make it a habit to reply to every participant post in the introductions—much like I do in a traditional focus group setting, or for that matter, in a conversation with a stranger. These replies can often reflect common ground: “I love spending the day in the mountains with my dog too. What kind of dog do you have?” That’s not a question that will provide rich insights, but it will help open up the participant and really shows you are listening.
It’s critical to establish early in the conversation that you are a living, breathing, listening human being—not some chatbot or AI ghost in the machine. This has a huge impact on how participants approach your conversation.
Avoid AI Tools
Several online platform providers are touting AI-generated responses to participants. All I can say is that this is what we get when we let the programmers drive development. Avoid this feature. Yes, it saves you time during the discussion. But it also removes you from the conversation—you are no longer actively listening. You wouldn’t let a robot take over your focus group session just to save time, would you?
Also, AI is not yet perfect. And it needs to be in this case. It’s not a life or death situation, unless you consider the life or death of the research conversation. Even if the AI gets 90% of the interactions correct, there is that 10% that will suck the air right out of your conversation with that participant. If you are using a group setting, other participants will see the mistake, and the negative impact compounds.
So just don’t do it. The potential losses greatly outweigh the potential time savings. Besides, actually responding manually forces you to listen and learn—which is what this is all about. Don’t let a robot take your job.
How to Digitally Use “Non-verbal” Cues and Maintain Eye Contact
In the online, text-based world, you certainly can’t maintain eye contact, nor can you provide non-verbal cues to show you are listening. So how do you employ those key principles of listening in an online, text-based world?
Probably the most obvious way is replying to participants’ posts with questions to better understand what they have said or get some clarification on their comment. Yes, I am talking about the same probing questions we lay on participants in focus groups and interviews. These probing questions work just as well online as they do in real life.
To replace those non-verbal cues, I have found it quite effective to comment or ask questions even when there is no need to do so. The idea is that by just saying something, participants recognize that you are there and you are reading what they are posting—you are listening.
Sometimes it is easy to just copy and paste the same general comment to several participants when you do this. If the participants can’t see one another, this is fine and saves you time. But if the participants can see each other, then it just makes you look like a robot.
When making comments simply to show your presence, it’s important not to require a reply—this is often an option on the platform. I like to thank people for providing quality detail or for an interesting take on the topic. The important thing is to personalize it a bit, to keep it from sounding generic.
Another way to show you are listening is to use the messaging app within the platform to hold meta conversations outside the actual discussion. I make it a point to send reminders at set times as well as thank-yous at the end of the day of discussion.
These messages don’t have to be just logistical in nature. You can also use them to show you are listening. Sometimes I will include a comment about some of the discussion—an insight that came through for the whole group of participants, or sometimes personalizing it to a specific participant.
In the end, listening is important to successful qual, whether you are in the same room as the participant or interacting digitally. It’s just how you listen, and how you show that you are listening, that can take a little adjustment in the digital qual world. But it’s no less important and no less doable.
Ted Kendall is the founder of TripleScoop, a boutique research agency that has a focus on online qualitative. Ted got to this place in his career by being in the right place at the right time to pioneer in early online methods. He was a co-founder of QualTalk that became 20/20 Research’s QualBoards. He learned how to moderate online qual through trial and error and has moderated hundreds of online qual discussions and interviews since that first one in 1997. And he is usually a good listener.
Posted By Foster Winter,
Tuesday, January 28, 2020
Super-qualitative! Using Qual Skills Beyond Market Research
By: Foster Winter
Please be assured, my qualitative colleagues, this subject is not intended to demean the discipline of market research. We love MR. However, over time we found that our qualitative universe was expanding. Now before you delve into astrophysics, we promise to keep our discussion more earthbound.
Whether you are in the early stages of your qualitative career, having mid-career reflections, or thinking of winding down your MR-based practice, we’ve found some examples of adjacencies that may prove thought-provoking for you.
The Operating Theatre
One of our colleagues has used her qualitative background to help in a most important aspect of the world of medicine. Many of you will remember Lauren Woodiwiss as an active member of QRCA for many years. As an avocation, Lauren had been involved in community theatre. As she moved into the next phase of her career, Lauren continued to hone her acting skills, becoming a professional actor.
She says that one of her most rewarding roles is that of a patient interacting with medical professionals at all levels, from first-year med students to physicians, nurses, and other medical personnel. Nearly all medical schools now employ patient/health care provider role-playing as a valuable communication and physical exam training technique.
Lauren has found that her qualitative skills, such as reading body language coupled with rapid-fire, in-the-moment, relevant, and ad-libbed responses, allow her to realistically portray the patient and then provide both written and oral feedback to the learner and to the training institution. This feedback can include direction on what helped her — as a patient — to feel cared for and respected, as well as more concrete feedback on multiple aspects of taking a complete history, asking relevant questions and follow-up probes, and correctly executing the physical exam.
The feedback questionnaire can have as many as 40 different elements of evaluation. These must be rated from the “patient’s” memory of the encounter that just took place and, as mentioned, the evaluation encompasses all aspects of communication from the time the learner enters to the time of exit.
A qualitative researcher has the ability to keep many thought “balls” in the air at once, such as:
- What is the respondent saying?
- Does that answer the question I just asked? If not, is it a point I should explore?
- Does it fit the client’s objectives for the research?
- How am I doing on time?
It is these skills that exquisitely prepare a qualitative researcher for this work with medical professionals.
Working with Underserved Populations
Recently retired QRC Barbara Rugen and her husband joined the Peace Corps and were sent for two years to the African country of Namibia.
“Never once did I think I would be called upon to use my qualitative background. To my surprise, I found that my skills could make a significant difference there.”
Barbara worked largely with the Nama, who constitute the marginalized communities of the south. The first thing she learned about the Nama was the disillusionment of foreign agencies that had tried to help them: “They just don’t care!” was a common complaint. The second realization was the local prejudice against them, particularly by the white Afrikaners: “I don’t hire Nama. The Nama are too lazy.”
A small number of Nama were in positions of influence and wanted to uplift their people but were unsure how. Barbara conducted IDIs with the leaders and focus groups with the Nama people. The qualitative sessions explored Nama attitudes and behavior, and the research provided insights that helped leaders frame recommendations and develop an action plan for the capacity building of these marginalized communities.
If you are interested in learning more about this adventure in qualitative, you can hear an interview with Barbara on a VIEWS podcast at https://qrcaviews.org/2019/03/11/spring-podcast-using-qualitative-techniques-within-marginalized-populations/
Business Consulting and Talent Recruiting
My journey into the adjacent qualitative universe began with a small strategy project for a company I call a re-startup. The company had reorganized and was now on a growth path. The task at hand was where to start rebuilding the organization.
Enter strategic qualitative. We began with in-person depth interviews with members of the senior management team. From the knowledge gained, we recommended that the first personnel hole that needed to be plugged was that of a MarCom director. The client agreed, and then said, “find us one.” I looked around to see if they were talking to me. But then, I realized that many organizations, particularly those in startup mode, do not – in fact should not – have their key management people getting into the weeds of going through the hiring process.
We did find our client a suitable candidate for that position — and, if I do say so, she’s been there for nearly three years and is doing a great job with a five-person department reporting to her. And we learned and developed a process that allows the supervisory/management team to do their primary jobs and still bring in the proper new talent.
Now, I admit my bias – and my client concurs with this view – that a primary reason the process works is that the foundation of the search is interviews treated as qualitative investigations. The nuances of those conversations also keep us attuned to cues about the candidate’s compatibility with the culture of the organization, a very important consideration for a growing company.
While the three examples above illustrate later-career direction shifts, as we noted at the outset, qualitative expertise might offer new trajectories at any point in this rapidly changing research universe. I’d love to hear your thoughts!
Foster Winter is Managing Director of Sigma Research & Management Group. His experience as a business owner and researcher has contributed to his capabilities as a management and organizational consultant. Foster has served on the QRCA Board of Directors, co-chaired the Worldwide Qualitative Conference in Budapest and is the host of the QRCA VIEWS Conversations in Depth podcasts.
Posted By Aimée Caffrey ,
Tuesday, December 10, 2019
Updated: Monday, December 9, 2019
Practical Messiness Masked by the Qualitative and Quantitative Distinction
By: Aimée Caffrey
This blog post discusses the practical messiness that can be masked by the qualitative/quantitative distinction and offers an approach for thinking about and dealing with that messiness.
Like many anthropologists, I have an abiding interest in the ways in which people construct and reproduce boundaries. During my doctoral work, my primary focus was on boundaries such as ethnicity, caste, and nationality. The professional path I have taken in more recent years has in part shifted my attention toward boundaries of another variety—the boundaries that demarcate scientific knowledge practices in industry, and toward a particular boundary with which the readers of this blog are already quite familiar—that between quantitative and qualitative. In my present role, I conduct and help support research that by most definitions would count as qualitative. At the same time, this work almost always feeds into, or follows on the heels of, research that by most definitions is quantitative. It might entail using IDIs, focus groups, or journaling exercises to better understand terminology or relevant dimensions of experience prior to writing a survey. At the other end of things, it might entail using these data collection formats in an effort to make sense of survey findings—when we have discovered the what but are uncertain of the why.
Working at this intersection instills a perhaps exaggerated awareness of, and sensitivity to, the risks of accepting the quant/qual boundary at face value. Like others of its type, this distinction is a productive shorthand for organizing and talking about a variety of practices; however, it can mask the messiness of reality. A very experienced industry researcher gestured toward this messiness on a recent L&E webinar when he remarked on the "under-powered quant" that can be at work when focus group moderators ask for a show of hands. Alternatively, consider that many of what are generally marketed as mobile ethnography or online qual tools often contain what we otherwise think of as quantitative question types (e.g., multiple choice). To offer another example, I regularly assist fellow researchers with the development of interview and focus group discussion guides, and often this assistance centers in part on rephrasing "how much" (i.e., quantitative) kinds of questions to help us make sure we are in fact collecting qualitative data.
These examples of the messiness relate to a tension between the method deployed and the data gathered. When we think of the boundary between qualitative and quantitative as pertaining to a (reporting) distinction between numbers and words, the lines are similarly blurred—we discover the use of stories and images to help explain the findings of quantitative analysis and the use of quantitative adjectives to convey insights from qualitative analysis. This isn't terribly surprising: If there is "terror in numbers," as Darrell Huff wrote in How to Lie with Statistics, the tensions and nuances at the very human heart of qualitative data can also induce discomfort. But, just as the pictures (e.g., graphs) we draw to quell the disquietude of quant can exaggerate the story that the numbers tell, so too can the words we use to describe our qualitative findings be misleading. More important than policing the qualitative/quantitative boundary, though, is being watchful for what the messiness around it might signal—that there is a misalignment somewhere among the objectives in mind, the method deployed, the data gathered, and ultimately, the claims that are made.
There may be justifiable and even good reasons to ask for a show of hands in a focus group—for example, as a quick "pulse check", or to help warm up participants at the start of the discussion. But whether we think of our work as quant or qual—and whether we are thinking of our questions, our methods, or our claims in making that determination—let's be deliberate and mindful about the implications of actively inviting that messiness into the picture.
Aimée Caffrey is a cultural anthropologist and UX researcher. Since 2017, she has worked in the Advanced Analytics Group at Bain & Company, where she collaborates with consultants, developers, designers, and fellow researchers to help clients solve some of today’s most exciting business challenges. If you wish to get in touch, please email her at Aimee.Caffrey@Bain.com.
Posted By Mark Wheeler,
Wednesday, October 2, 2019
A book published earlier this year provides a nice toolkit for qualitative researchers and consultants looking for new ways to bring additional value to our work. Super Thinking, by authors Gabriel Weinberg and Lauren McCann, introduces and explains a large number of mental models that can be applied as tools to help us do our research and communicate our findings and recommendations with more depth and impact.
A mental model is essentially a recurring concept that can be used to help understand, explain, and predict things. Mental models serve as shortcuts to higher-level thinking. Most have solid supporting evidence behind them but are not especially well-known or formally taught to everyone in school.
Because most mental models are intuitive, they can be quickly explained to others and used to recognize and describe patterns in behavior. They are highly valuable in qualitative research because we continually observe and hear things that need to be communicated to our clients – sometimes at a higher level of explanation than what we literally heard. It is much easier for us to recognize and explain something if we have a solid label for what is going on.
There are literally hundreds of mental models in the book. They come from a wide variety of fields of study, including philosophy, investing, statistics, physics and physical science, and economics. A list of several mental models is included in the accompanying table.
Applying Models in Research
In a recent marketing research project, I found a way to make use of one of Weinberg and McCann’s mental models to help communicate a key point to clients during a long day of in-person research. (Note: there will be a lot of detail blinding in this example to ensure confidentiality.) The research was in support of a safer kind of post-surgical wound care that had been on the market for a few years. Some of the doctors in the research claimed that they hadn’t noticed fewer post-surgical complications since switching to the safer alternative, and some thought they may have seen even more complications. This was causing (and I am understating here) some confusion and concern in the back room. Fortunately, the situation brought to mind the mental model of moral hazard. Put simply, people take on more risk when they have information (in this case, the knowledge about the new wound-care therapy) that encourages them to believe that they are being protected.
Discussion with clients about moral hazard helped us to put a label on what we were hearing and helped us understand and probe differently in later interviews. Even more important, we were ultimately able to use the learnings to generate new messaging about the wound care product to address the potential problem of moral hazard for both physicians and patients.
A lot of the useful mental models in the book come from the social and behavioral sciences. The concept of availability bias describes the fact that once we make an answer (or behavior) available in someone’s mind by drawing attention to it, the answer begins to seem more correct. It is an automatic effect and is nearly impossible to resist. Of course, we usually want to avoid availability bias when we moderate (i.e., no leading questions).
I often discuss this idea of availability bias with clients when writing guides or surveys, and the reaction is overwhelmingly positive – even when it leads to re-writing someone else’s question. Availability bias comes up in other situations, for example when composing messages for promotion. In these cases, the bias can become a bit more acceptable (e.g., “Doctor, tell me about how satisfied your patients have been after you have prescribed our drug?”).
The larger point behind these examples is that introducing clients to mental models such as moral hazard and availability bias helps to communicate relatively complex points in a simple way that wouldn’t be possible without using the terms. When discussing a particular mental model such as loss aversion before research, clients and other listeners then begin to recognize it when they hear it from respondents. It is also fair to think of mental models as “value-adds” for any moderators or consultants who are able to bring in new concepts to help their client achieve their objectives. I’ve found that introducing mental models relatively early in reports can help prepare clients for critical upcoming findings and conclusions.
It is well worthwhile to check out Super Thinking and discover which mental models can be most valuable to your business.
Mark A. Wheeler, PhD, is a qualitative researcher and consultant who applies his background in cognitive and behavioral science to help his clients achieve their goals. He is Principal of Wheeler Research LLC in Bryn Mawr, Pennsylvania.
Posted By Jennifer Dale,
Tuesday, September 17, 2019
Qual or Quant? Choosing the best method for your research study
Quantitative and qualitative research are both scientific methods for data collection and analysis. They can be applied alone, or in combination, to maximize insights.
The Basic Difference: Going Beyond What vs. Why
Quantitative research relies on large sample sizes to collect numerical data that can be mathematically analyzed for statistically significant findings. Surveys are structured, questions are typically closed-ended, and answer choices are fixed. However, quantitative research may also include a limited number of short-answer open-ended questions to help clarify why people responded the way they did to a closed-ended question. Eye tracking, facial coding, and even Big Data fall under the umbrella of quantitative research, with computers analyzing enormous volumes of data incredibly fast.
Quantitative studies produce numerical data, which allows for statistical analysis and ultimately precise findings. The US Census is a great example of a quantitative research study – fixed, closed-ended questions, an enormous sample size, a collective review of many respondents, and measured population segments.
In contrast, qualitative research seeks to understand the reasons behind the numbers, as well as what is not yet known. Sample sizes are smaller, questions are unstructured, and results more subjective. Unlike quantitative research, qualitative studies insert the researcher into the data collection process. The researcher probes responses and participants provide more detail. Qualitative data is collected through interviews, group discussions, diaries, personal observations, and a variety of other creative and ever-expanding means.
Qual studies work with textual and visual data, interpreted and analyzed for directional findings. Qualitative research studies include fluid and open-ended questions, a smaller sample size, an in-depth review of each respondent, and emerging themes.
I like to think of the difference visually, where a quant study collects specific data from a large number of people, and a qual study goes deeper to collect greater insights from a small number of people.
How to Choose
The answer to whether you proceed with quantitative or qualitative research lies in your research objective and available resources.
- Why you’re doing the research
- What you need to know
- Your budget, staff, + schedule
- How the findings will be used
Consider these possible scenarios the next time you’re stuck and don’t know which way to go:
Quant + qual can come together in other ways. A questionnaire with open-ended questions, while ultimately coded numerically, can offer a window into the unknown. Focus groups that also include poll questions or surveys can produce hard data when analyzed in total, even if the results are not statistically significant.
With good planning, quantitative and qualitative research come together like a dance, guiding the marketer’s success with every step.
I Say Hybrid, You Say Multimethod
Combining quantitative and qualitative research approaches is an ancient strategy, but the names continue to change with the times. I did a bit of research and found the following terms being used to describe that ideal combination of quantitative and qualitative research. What term do you use? And why? ;)
Jennifer Dale, President + CEO of InsideHeads, is a seasoned marketing professional and pioneer in online market research. Her passion for marketing, human behavior, and technology keeps InsideHeads on the short list of research providers for some of the world’s most discriminating clients. Jennifer is co-author of Qual-Online, The Essential Guide and has published a number of articles in VIEWS, Alert! and Quirk’s Marketing Research Review.
Posted By Bruce Peoples,
Tuesday, August 20, 2019
Updated: Tuesday, August 13, 2019
Moderating vs. Facilitating: What’s the Difference? Can You Do Both?
A few events in my career journey triggered the exploration of facilitating as a business opportunity and value-add for my clients. First was a breakout session at a QRCA conference where “facilitating” — vs. “moderating” — was brought to my attention. The second was when a client called me on a Sunday to replace his facilitator on Monday, to facilitate a session with R&D, sales, and marketing. With so little time to prepare, I trusted my instincts and experience as a moderator and let it fly… and suddenly I was a facilitator! It was a productive session — the client called me back for another session, and later to do consumer focus groups — and my curiosity was piqued to learn more.
Core Elements: The Same
I attended a three-day training session in facilitation (much like those offered at RIVA or Burke) and was pleasantly surprised to confirm that these two disciplines have much in common. The core elements of moderating and facilitating are the same: there is a gathering of people with something in common; there is a purpose behind the meeting; and there are desired outcomes. You guide the discussion in a thoughtful manner by providing structure and process.
Perhaps the biggest commonality is putting together an agenda — what we call a discussion guide — based on the client’s objectives and desired deliverables. Like a focus group discussion guide, much attention must be paid to the flow of the meeting and to the activities that will generate robust discussions.
Many of the exercises you utilize for qualitative can also be used for facilitating meetings. These might include:
- Developing lists and gathering, sorting, and ranking ideas
- Breaking out in small groups
- Mind mapping
- Perceptual mapping
Facilitated meetings are usually longer – a half day – and therefore benefit from energizing exercises interspersed throughout the session.
The Differences: Participants, Output, and Achieving Consensus
One key difference between moderating and facilitating is the participants. Focus group participants have no stake in the outcome; you will never see them again. Participants in a facilitated meeting have to work with each other. A facilitated meeting may have colleagues from different functions (R&D, marketing, sales) and at different levels of authority (managers to vice presidents). When the meeting is over, they have to work together to achieve common goals. Their strategies and tactics – their jobs – might be affected by the outcome. I ask to conduct a few brief interviews with participants from different areas prior to the meeting to get a feel for the situation, personalities, motives, and issues that might arise.
Another difference is the output: in a focus group, the outputs are insights, their implications, and the recommendations they support. In a facilitated work group or work team meeting, the output is often an action plan that determines what, who, how much, and when. The action plan should be understood and agreed upon by all participants. In other words: achieve consensus, which means participants can live with the decisions that created the action plan and will support them.
Two issues you’ll address more often when facilitating a work team than when moderating consumers are resolving conflicts and achieving consensus. Marketing wants that new ice cream now; manufacturing can’t make it until next year. Your approach is somewhat intuitive and not much different than when moderating – but it requires more attention and care. Things you’ll need to spend more time on include clarifying the issue, understanding its root causes, ensuring everyone understands the issue, brainstorming pros and cons, and ultimately utilizing techniques to rank or prioritize.
Projects and Meetings Where Facilitators Add Value
Facilitators can add value to a lot of different projects and meetings, but common types are:
- Innovation: Brainstorming to create new product ideas.
- Process improvement: This might include flowcharting a process, identifying roadblocks, and developing solutions to clear those roadblocks.
- Strategic planning: This might include a market situation assessment, SWOT analysis, and developing the outlines of a new strategic plan.
- Data analysis: Sharing, analyzing, and assessing a lot of data from a variety of sources.
- Planning and executing a new product launch: The output is often an action plan.
Implications for You
If you are a good moderator, should you seek out facilitation opportunities? Yes! You already have many of the skills, resources, and experiences to successfully facilitate work group meetings. To pump up your confidence before jumping in, seek out some formal facilitation training. On your first projects, get a partner to help plan and execute.
Put together a one- or two-page brochure (PDF is fine) highlighting your capabilities – and this can include moderating. Then network your way to generate awareness. Many of you work with research managers at large companies. Let them know and ask them to share your capabilities with their colleagues in other functions, such as marketing, sales, R&D, or HR.
About the Author:
Prior to becoming a qualitative consultant, Bruce Peoples worked in brand management, channel, and customer marketing for several well-known brands in different industries including Hanes and Jack Daniel’s. Bruce has been a QRCA member for about a decade now and utilizes a variety of methods to help his clients solve their marketing problems, whether they be consumer or business-to-business related. Bruce was trained in moderating at RIVA and in facilitation at Leadership Strategies.
Posted By Laurie Tema-Lyn,
Tuesday, March 19, 2019
Bring the POWER of Theater Games to Your Next Session!
Let me start by saying I am not an actor, although I’ve had some theater training. I earn my living as a researcher, consultant, and innovation catalyst, and I’ve been doing that for decades.
I like to bring PLAY into my work because the results are well worth it: ramping up the energy of a flagging team, developing empathy, encouraging candid, uncensored conversations, and triggering or evaluating new ideas.
Using theater games builds on fundamentals that all face-to-face researchers/facilitators should have in their arsenal. They include:
- The ability to build rapport and have fun;
- Creating a “safe place” so people feel comfortable expressing themselves;
- Being able to read your group through attentive listening and observation;
- Being willing to take a risk, knowing that there are no failures — risks lead to opportunities.
Here are tips and techniques to add to your repertoire:
- Start with an easy game; I call this one Word Salad. It’s a new twist on the tried-and-true technique of Mind Mapping by adding a pulse — a finger snap — as you capture each participant’s words on a flip chart pad. Breathe and repeat each word or phrase that you are given as you chart. It can be a bit hypnotic. Participants stop self-censoring and by pausing a moment as you repeat the words they listen, reflect and connect. A variation is to use a Nerf ball and throw it to participants to respond. Less time for “thinking,” just gut level responses.
- Experiment with improvs to illuminate brand perceptions, product or service use, or to inform creative strategy or positioning. It’s good to do a bit of pre-planning to identify some people, places, things, or situations that you might want to see “acted out” in your work session. Position the exercise as an experiment. Ask for volunteers and give basic improv guidelines, including the use of “Yes, and…” to accept or build on a partner’s offers. Remind participants that you are not looking for them to be funny or clever, just authentic to the character or situation. After you conduct a couple of improvs, it’s important to review what everyone has learned.
- Theater of Exaggeration. Try this out to spice up a concept review. You might begin in your typical fashion and then encourage participants to push the boundaries. What are the Most Outrageous Plusses or Benefits to this concept? Conversely, what are the Most Outrageous Negatives to this idea? You just might end up with some new ideas or identify problems that participants had been too polite to suggest earlier.
- Mouthfeel: Try this out to help evaluate a name and positioning. This is an improv where participants stand up and have a conversation using a new name or positioning. I recently ran a naming session with a colleague for a social services agency. We had six names in the top tier and were trying to evaluate which were the best. One of the name candidates looked great on paper, but when I asked for two volunteers to improv it (one in the role of a crisis hotline operator, the other a client calling for help) we realized it was a bear… too cumbersome to speak when used in context. We nixed that one from the list.
- Spontaneity based on solid preparation. These games work when you mentally prepare yourself as facilitator, prepare your respondent team by providing clear guidelines on what you are asking them to do, and prepare your client team in advance so that they won’t be shocked or worried if you add a theater game to your discussion guide or agenda.
These are just a small sampling of theater games and activities you might bring to your next gig. I encourage you to try them out and make up your own, and feel free to get in contact with me.
Links to more articles on this topic:
Practical Imagination Enterprises
Posted By Kendall Nash,
Tuesday, March 5, 2019
Updated: Wednesday, February 27, 2019
Practical and thoughtful, but a walking contradiction. She made it clear that every decision she made had a purpose, and every item she bought met well-defined criteria. As she described her grocery store trips, she recalled the price associated with each and every item. In order to even make it into her cart, the items on her shopping list had to fall within an acceptable and narrow margin. And yet, her eyes lit up and you could see her lost in her memories as she described the unique metal bracelet on her wrist that she had bought on a whim for 250 euro during a trip to Barcelona. She smiled again and told me about how it was made.
Scratching Our Heads
That moment when the consumer tells you something totally incongruent with the story you’ve crafted in your mind of who they are and how they live…
Those comments that seem to contradict each other within a span of minutes…
We formulate clear pictures in our own minds of who a person is and what matters to them, only for them to turn around and tell us something that leaves us scratching our heads.
In my early years as a Qualitative Researcher, I’d find myself frustrated. Seeking patterns and convergence of themes, I was always challenged when things didn’t line up. Sure, I understood things would vary from person to person, but I was caught off guard and perplexed by the number of things that didn’t add up within the perspective of one individual.
Humans Are Messy
Of course, it didn’t take me long to realize what many before me had contemplated: that humans are, in fact, messy. We don’t follow a logical path down the road. There’s not always a reason, or at least not a consistent or “good” one. We don’t always make linear decisions. Sometimes we struggle with opposing internal forces that shape our mindsets and behaviors.
But then something beautiful happened.
When I looked more closely at those incongruencies within a single person, there were valuable opportunities for my client to step in and meet the consumer in the midst of the messiness. We identified opportunities for innovative products and delivery, discovered more meaningful ways to connect with those not yet using their brand, and found unique ways to give someone a great customer experience worth talking about. It was actually in those messy places we were finding our most disruptive learning – you know, the insights that make your team say “whoa, yes.” It’s exhilarating to experience those moments when you are onto something that you know will significantly and positively impact your business.
Unveiling the Mess with Qualitative Research
As a fan of both quantitative and qualitative research, I respect the ways both deliver the information we need to make good decisions. Yes, plenty of people will tell you that quantitative tells you the what and qualitative tells you the why, but for me it’s so much more. Quantitative offers us sound decisions, confidence in direction before we set sail, and a big, delicious slice of the world. The beauty of qualitative is our ability to get into the nooks and crannies. To discover the mess and bring things into the light that just might unlock something truly magical for the brand. The rapport we build with consumers allows us a richer glimpse into what matters to them, so we can become brands that matter to them.
Embrace the Mess
Knowing that the messiness of the human heart and mind can be where the greatest potential lies for brands, we can see those moments through an entirely different lens. The next time you find yourself in research with a consumer who doesn’t seem to fit into a perfectly shaped box in your mind, celebrate! When things don’t add up exactly the way you expect them to, celebrate! You are probably onto something really good. And we go after good things.
What about you? Where have you found gold in the messiness of incongruent, inconsistent, yet beautiful human beings?
Kendall Nash is a Vice President at Burke, Inc. in Cincinnati, Ohio. She is an instructor for the Burke Institute and a past president of QRCA. Kendall’s curiosity drives her closer to consumers and their experiences. Her thrills come from uncovering what people truly want and need, and translating that so brands can win.