Posted By Isabel Aneyba,
Tuesday, June 25, 2019
Updated: Monday, June 24, 2019
Let’s Work Together: The Consumer Co-Creation Camp
While focus groups have long been a part of the innovation process, many clients have voiced their frustration about the limitations of traditional focus groups. To respond to this and other client needs, we created a methodology called the Consumer Co-creation Camp. It is designed to expedite the research process while making it fun and providing a more direct connection between the client and consumers.
We had a client who decided it was time for his company to start an innovation process. This is how he requested the research: “I do not want boring focus groups; I want a fun process like a reality show, where we are looking to discover new things. I do not want to listen to top-of-mind responses; I want a deeper understanding. We want to achieve a year’s worth of research in one comprehensive study: understand the target, create product/brand concepts and evaluate those concepts.”
To address this client’s broad request, we facilitated three groups simultaneously in three days to create products and brands with consumers. This process involved multiple stakeholders: the client team, the advertising agency and the consumers. We called this engaging process: The Consumer Co-creation Camp.
At the end of the fieldwork, the client stated: “We clearly know what we need to know to make this product a success in the marketplace.” How did this project provide such clarity and confidence to the client team and agency? In my view, it was the co-creation of compelling consumer-ready ideas. Three successive stages led them there:
We wanted the participants to get to know one another first, so we asked Millennial participants to introduce themselves using a collage they created prior to the Camp. This set the stage that this process was about the Millennials and about being together. They felt appreciated while they found new friends and were free to use their own colloquial language.
During this process, our clients moved from feeling “I want to hear this and that” to “These people are interesting” to “This is going to be big”. There was a perception shift because it was the first time clients had a chance to see how these Millennials saw themselves.
Millennials created new concepts after testing the product. Collages helped participants articulate their feelings, because participants often do not know how to put their feelings and emotions into words. The collages were a springboard for expressing those feelings and a great equalizer, giving everyone the opportunity to adapt the product and the brand to themselves. Our clients witnessed how the brand concepts matched Millennials’ needs and personal styles.
This stage motivated the clients the most. The Millennials presented their ideas directly to them, in the same room. The client team and Millennial teams had a vigorous conversation. There was ‘one voice in the room’. Consumers and clients worked in tandem focused on the unifying goal, with no barriers, mirrors or attitudes. After the final presentation, all the clients knew what the final output of the research was!
At the end of the process, three key outcomes emerged that would significantly impact product management, the brand vision, and consumer engagement.
Product Management. The global R&D and Marketing team became aligned and felt empowered to make necessary product and packaging changes.
Brand Vision. The client and ad agency gained a deeper understanding of Millennials and their needs, and shared this understanding with the entire corporation. It inspired them to create a new brand vision.
Engagement. The marketing teams learned how Millennials made friends, and this insight helped them to better engage with this target – utilizing a relevant marketing platform.
Even after the camp, the participants’ ideas were referred to constantly by the clients and the agency. Their vivid experiences allowed for crisper memories. The co-creation experience anchored the clients’ understanding of this target audience through a human connection. It was clear how the Co-Creation Camp streamlined the research process and, in the end, saved the client money and time while enhancing their understanding.
Do you believe your corporate clients would value working together with the consumers in a fun, engaging process that yields high quality insights and speedier outcomes?
If so, how can you streamline your next research project to generate compelling consumer-ready ideas? The Consumer Co-creation Camp is a great alternative. Our experience has shown that, when empowered and enabled by the research process, Millennials and clients are happy to embrace the challenge of creating new products and services.
Isabel Aneyba is president and chief insight generator of COMARKA, an Austin, Texas research firm. COMARKA empowers marketers to develop meaningful product and brand ideas with their customers through dialogue. www.comarka.com
Posted By Janet Standen,
Tuesday, June 11, 2019
Updated: Friday, June 7, 2019
Qualitative Research with Employees – Why You Should Be Doing it
If you’re an experienced qualitative researcher and you haven’t done any employee experience research yet, perhaps it’s time you did!
First Things First
Why do I find employee qualitative so rewarding? We spend about a quarter of our life working, so if I can unearth an insight or two that can really make a difference in bettering the experience employees have in the workplace, I’m making a real contribution. By elevating the happiness of people on the job – where they spend so much time – my job seems more worthwhile and – in turn – makes me happier!
When and Why Is Qual Needed?
As is often the case, qualitative is a comfortable (and critical) companion to quantitative research. Most medium to large companies have a comprehensive annual Employee Engagement or Employee Satisfaction Survey – and each year they review the resulting data. Survey questions vary depending on the company, but they are usually around enjoyment, pride, understanding of and fit with the company vision, diversity, management performance, rewards, work/life balance, career development, and so on.
But what happens when the data show a shift or a trend, and no one is quite sure why it’s happening? Or, there’s an unexplained exodus of people; there may be gossip and rumors but no firsthand, deeper understanding about its cause. Exit interviews may or may not get to answers; by this point, the individuals leaving often don’t care enough to help make a difference for the people left behind.
And what about all those subtle nuances of day-to-day life at work that don’t actually get captured in the big-bucket questions? In order to have one standard survey that can be applied across all roles and levels in a company, the survey questions can become quite bland – and we all know how risky it can feel to give honest, open-ended answers if we think they might be tracked back to us. This is why qualitative is needed.
Different Roles for Qualitative
To ensure a quantitative survey will provide the greatest value, its design is as critical as understanding its output. Qualitative should be used to inform the right questions to include in the survey. Qualitative can be invaluable in ensuring the questions are being asked in the right way. Following the survey results, a third use of qualitative is to help explain the reasons for any negative shifts in data, ideally before employees start walking out the door.
Steering Qual. This should come first to ensure that the right questions are asked. Mini-groups are a good way to include a greater number of employees than IDIs, even if it requires some travel. Participants can be cross-functional and cross-level, at least to some degree. The conversations should be informal around key areas that matter to people. The discussion guide should be loosely designed with input from key stakeholders like Directors of Human Resources or Insights, but the moderator should be guided most by the natural direction that each group’s discussion takes. Specifically: what impacts their working environment and success every day; what really matters to people, how much it matters, why it matters. A comprehensive list of topics that impact employee happiness needs to emerge with a good understanding of the right way to think about the topic. Only then should an analysis of the output and learnings translate into a draft survey.
Tune-up Qual. This should come second. Once you have your draft survey, conduct a series of UX IDIs – where respondents think out loud as they take the survey. They need to take place with a range of people in different roles and levels. The survey can iterate throughout the interviews—but once you think it has evolved and been polished to a state of readiness for primetime, do a final 6-8 interviews to ensure it is as good as it needs to be, i.e. has comprehensive and relevant questions for all, that are asked in a clear, easy-to-understand way.
Directions Research. In this next stage, the learning becomes actionable given the benefit of a deeper understanding of the reasons behind shifts in data or behavior. IDIs or mini-groups can be considered, but ideally it’s a combination of the two. The topics up for discussion are driven by the data from the quant. Usually 5-6 topics can be covered in a 90-minute mini-group or a 30-minute IDI.
A great structure once the topics have been carefully introduced is:
What’s working well?
What’s not working so well?
What fears do you have?
What do you wish for?
There is a huge benefit in having the research recruited by an independent recruiter, to allow for anonymity and to avoid the manager bias of “favorites” being put forward. An independent moderator encourages openness and honesty from participants. Employees should be guaranteed anonymity during these sessions, and ideally, a note taker should scribe the session rather than recording it. Reassure individuals that their opinions and experiences will feed into an overview report of themes and that nothing will be attributed to any individual.
Suggestions for change resulting from the research can be grouped into “quick wins” ensuring employees experience the impact of their input, and “longer term challenges” to make sure some of the deeper challenges can be prioritized and tackled by management.
If you haven’t brought your qualitative skills to bear on employee happiness yet, you may want to consider adding it to your “to do” list for the future. You won’t regret it!
Janet Standen is Co-Founder of Scoot Insights, a qualitative provider specializing in helping decision-makers choose a better direction, effectively and efficiently. Her background is in innovation, business strategy and brand positioning.
Posted By Jeff Walkowski,
Tuesday, May 28, 2019
Updated: Friday, May 24, 2019
How To Create Effective Screeners
Whether you’re experienced or just breaking into qualitative research, it never hurts to review what makes a screener effective in finding just the right people for a research project. A screener is the questionnaire that recruiters use to find qualified participants for a study. It is called a “screener” because the process is like panning for gold—we have to sift through many people to find the nuggets (qualified people) to invite to participate. Screeners may be administered by telephone recruiters or programmed as online surveys to automate the recruitment process. Automation helps reduce expense by eliminating the human effort of dialing phones and talking to potential participants. Keep in mind that automated screeners still have costs associated with them – most notably programming costs, which may include quota control, skip patterns, and conditional questions (all of which are typical of any online survey).
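The programming features mentioned above (quota control, skip patterns, and conditional questions) boil down to a few lines of logic. Here is a minimal sketch in Python, with the questions, qualification rules, and quota invented purely for illustration:

```python
# Hypothetical automated-screener logic: skip patterns, one conditional
# question, and quota control. All questions and limits are invented.

QUOTA = {"female_millennial": 6}   # target number of invited participants
counts = {"female_millennial": 0}  # running tally of invitations

def screen(answers):
    """Return 'invite' or 'terminate' for one respondent's answers."""
    # Easy knock-out questions come first (skip pattern: fail fast).
    if answers["gender"] != "female":
        return "terminate"
    if not (1981 <= answers["birth_year"] <= 1996):
        return "terminate"
    # Conditional question: only category users are asked about the brand.
    if answers["drinks_coffee"] and answers.get("rejects_brand", False):
        return "terminate"
    # Quota control: stop inviting once the cell is full.
    if counts["female_millennial"] >= QUOTA["female_millennial"]:
        return "terminate"
    counts["female_millennial"] += 1
    return "invite"
```

A survey platform wraps the same logic in a visual editor, but this is why ordering cheap knock-out questions first (as recommended below) makes recruiting faster and less expensive.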
All the rules/guidelines about questionnaire construction apply to qualitative research screeners. The most effective screeners have the following characteristics:
They Are Short
If a screener is too long, participants may hang up the phone with a recruiter or simply decide to discontinue completing an online survey. Ideally, screeners have no more than 10-15 questions, or they take no longer than 5 minutes to administer (online or offline).
They Are Clear about the Purpose at the Beginning
Tell participants that it is not a sales call. Explain that we are looking for people to participate in a market research interview, but we must spend some time asking some questions to determine if they qualify.
They Do Not Provide Hints that Encourage Cheating
They include an intentionally general description of the nature of the research so as to not tip off participants to answer a particular way so that they can be invited. For example, say, “We are putting together a focus group on beverages,” instead of “We are putting together a focus group to determine what consumers think of Starbucks.”
They Include Questions Up Front that Are Easy to Answer and that Quickly Eliminate People Without Taking too Much Time
For example, if we are looking for millennial females, we will first ask about gender and age so that non-millennial males are quickly excused.
They Include Need-to-Know Questions – Not Nice-to-Know Questions
Asking nice-to-know questions lengthens the screener, can be frustrating to potential participants going through the screening process, and makes the recruitment process less efficient and possibly more expensive. Keeping the focus on questions that help determine whether a person should be invited or not is best.
They Include Intriguing Questions
Interesting questions keep survey-takers engaged. The objective is to not lose them along the way due to boredom.
They Feature Mostly Closed-End Questions
Again, this is designed to help the prospective recruit move through the process as quickly as possible. Closed-end responses also make the task easier for the recruiter (no judgment required).
They Often Include One or More of the Following Question Types
Product/service category use
If they are not users of a particular product or service, they are unlikely to be useful.
Brand(s) used more often and/or brands they would never use
If the project is about a particular brand, we probably do not want individuals who reject the brand outright (unless, of course, the purpose is to attract those who reject the brand).
Past participation in market research surveys, focus groups, and interviews
Preference is given to those who are not considered “professional” participants, so that they approach the research experience with a fresh attitude.
Employment in certain industries
We typically do not want those who are employed in advertising, public relations, or market research. In addition, we tend to rule out those who are employed in the industry that the project is about, because they may “know too much” and not represent the typical customer for the product/service.
They May Include an “Articulation” Question
Such open-end questions are used to help ensure that a participant will be able to make a meaningful contribution to the discussion. Sometimes questions are asked that pose a creativity challenge to the potential participant (e.g., “List 10 ways in which rubber bands might be used”). Ideally, however, a question that is related to the product category will be more relevant (e.g., in a study of high-end golfing equipment, potential participants might be asked to demonstrate some core knowledge of current equipment). In markets where participants may have differing levels of proficiency with the language to be used in the group (e.g., English), the recruiter may be asked to judge the ability of the potential participant to be clearly understood. This serves as an additional articulation assessment.
Jeff Walkowski is the principal of QualCore.com Inc., a consulting firm providing traditional and online qualitative research services to a wide range of industries including health care, financial services, automotive, and information services. He was schooled as a quantitative specialist and entered the industry in the 1980s as a statistician. He later discovered his talents as a moderator and evolved into a qualitative specialist by the mid-1990s.
Posted By Maria Virobik,
Tuesday, May 14, 2019
Updated: Monday, May 13, 2019
Data Visualization: 3 Ways to Make Your Qualitative Reports Pop
What Can Data Visualization Do for Us?
Data visualization—the graphical representation of information and data—can be a powerful tool in qualitative reporting. While we certainly can’t completely escape text-centric pages in our qualitative reports, graphics add visual interest and help break up the monotony of pages (or slides) of text. Done well, graphics help support qualitative findings and enable us to communicate in more interesting ways beyond words on paper (or a screen). Effective data visualization can also help readers understand concepts more quickly and easily and make information more memorable.
All the Cool Kids Are Doing it
Newspapers and other media outlets have jumped on board the data visualization bandwagon. Publications like The Washington Post, The New York Times and the Los Angeles Times employ full-time data journalists to augment their reporting. These folks take an enormous trove of data on a particular topic—for instance, the earlier start of spring in some parts of the U.S. or the confirmed U.S. measles cases by county in 2019—and expertly slice, dice and manipulate the information into interactive graphics that communicate big ideas in an accessible and elegant way.
Data Visualization and Qual: Not a Linear Journey
Visualizing quantitative data is relatively easy. Hard numbers and percentages naturally lend themselves to visual representation. Charts, graphs and their modern equivalent—infographics—are easy to create from quant data.
Qualitative data can be harder to visualize; transforming qual data into graphics isn't as straightforward or simple. A search for “infographics and qualitative data” reveals that some people even argue that qual data can’t be turned into infographics. Take heart, however. An equal number argue that it can and provide examples to back this assertion.
But it’s not a linear journey from qualitative data to data visualization. Many of us have heard from end clients who want hard numbers or percentages included in a final report to quantify how various concepts or ideas stacked up against each other. We can explain that “qual isn't quant” until the cows come home, but clients persist in making such requests.
Instead of giving in to these requests (or refusing them outright), there is another option. We can take this as the opportunity to develop data visualization approaches that give our clients the detail they want and expect without compromising the qualitative nature of the report. A few examples follow.
Word Clouds – an Oldie but Goodie
Word clouds are a common data visualization technique in qualitative reports. Using font size (and often color), they convey the magnitude of various responses, thoughts or ideas: larger words = more popular/frequent/common. This approach works well because it provides granular detail without showing the actual numbers behind the information.
While word clouds aren't the answer for every situation, they are a great tool, and websites for creating them abound. The PollEverywhere blog lists nine favorite word cloud generators, including Wordle and Tagxedo. A Google search for “word cloud generator” will point you to others.
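Under the hood, a word cloud is just a mapping from word frequency to font size. A minimal sketch of that sizing logic, using only the Python standard library (the responses and point sizes here are invented):

```python
from collections import Counter

def word_sizes(responses, min_pt=12, max_pt=48):
    """Map each word's frequency to a font size: larger = more frequent."""
    words = " ".join(responses).lower().split()
    freq = Counter(words)
    lo, hi = min(freq.values()), max(freq.values())
    span = (hi - lo) or 1  # avoid dividing by zero when all counts are equal
    return {
        word: round(min_pt + (count - lo) / span * (max_pt - min_pt))
        for word, count in freq.items()
    }

# Hypothetical open-end responses about a product:
sizes = word_sizes(["fresh bold", "fresh smooth", "fresh"])
# "fresh" appears most often, so it gets the largest font size (48 pt).
```

Generators like Wordle and Tagxedo add layout and color on top, but the magnitude-by-size principle is the same.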
Customer Journey Maps: Timelines in Disguise
Customer journey maps are another way to employ data visualization in qualitative reports. These maps are essentially timelines; a quick Google search on this term turns up many great examples that can be easily adapted to fit your particular purpose.
Here’s one example: a timeline detailing milestones in the 21st Century Conservation Service Corps history from 2010 to 2014.
The example above is organized by year, but the general format can be adapted to visualize a customer journey. Year markers become phases in the purchase journey: research, comparison, selection, purchase. The linear format allows room above and below the line for details on the individual steps consumers undertake in each phase.
Venngage is one great resource for infographic templates and tools, including many for timelines (such as the one below). They offer a couple of different subscription plans, but you can peruse the templates for free, and that might be all the inspiration you need to create your own.
Bubble Graphs – Borrowed from Data Journalism
Bubble graphs are another idea we can borrow from data journalism. During the 2012 London Olympics, The New York Times kept a running medal count by country and visualized the data in a simple table (below). The information is clear, but the table doesn't do a great job conveying the magnitude of differences among countries.
The Times formatted the same information into a bubble graph. This approach does a much better job conveying magnitude. You can easily identify the countries that led the medal counts. Readers could hover over any circle for more detailed information, including a country’s medal count by type (gold/silver/bronze). (Visit the link below the graphic and try it for yourself!)
The same idea—sans numbers, of course—could be employed in qualitative reporting. For example, we could use a bubble graph to report the characteristics that participants want in a dog.
Readers can immediately see which characteristics were most important and which were mentioned by fewer participants. By keeping numbers out of it, the graphic remains faithful to the spirit of qualitative research.
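One way to preserve magnitude while keeping the numbers out of the graphic is to scale each bubble's area (not its radius) to the mention count. A stdlib-only sketch, with the tallies invented for illustration:

```python
import math

def bubble_radii(mentions, max_radius=100):
    """Scale bubble AREA (not radius) to mention counts so visual size
    is proportional to frequency; raw counts never appear in the graphic."""
    top = max(mentions.values())
    return {
        theme: round(max_radius * math.sqrt(count / top), 1)
        for theme, count in mentions.items()
    }

# Hypothetical tallies of the characteristics participants want in a dog:
radii = bubble_radii({"friendly": 16, "low-shedding": 4, "quiet": 1})
# friendly -> 100.0, low-shedding -> 50.0, quiet -> 25.0 (area ratio 16:4:1)
```

The square root matters: doubling a radius quadruples the visual area, so scaling radius directly to counts would exaggerate the differences readers perceive.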
Sky’s the Limit
These are just a few examples of how data visualization techniques can be employed to make qualitative reports more engaging and communicate findings and implications more effectively.
Here are several links to more examples; many additional resources can be found by searching data visualization:
What are your go-to data visualization techniques and tools? What works? What doesn't? If you have advice or a favorite resource to share, please leave a comment.
Maria Virobik joined QRCA in 2018 but has worked in qualitative research since 1997. After early dalliances in the advertising world, she came to her senses and has been devoted to qualitative analysis and reporting ever since. Originally from Southern California, she and her husband sold their house last year and now live a nomadic lifestyle with their two marginally obedient dogs, Lucy and Ginger Snap.
Posted By Lisa Horwich,
Tuesday, April 30, 2019
Updated: Tuesday, April 30, 2019
Why Quallies Should Care about Marketing Technology (MarTech)
The “Rise of the Machines” and How We Got Here
When I graduated from business school back in the late ‘90s I never dreamed I would become a total tech geek…in fact, I really thought I was going to be a high-powered consultant (think McKinsey, Bain, BCG). Instead, somehow, I found myself implementing large-scale computer systems (fears of Y2K!) and then became a product manager for a small software company. My journey to tech geekdom had begun without me knowing it.
Fast forward to today. After spending much of my time working on quantitative and qualitative research for large tech companies, I can honestly say that I really love learning and studying technology.
With this in mind, about 2 years ago, a prediction from Gartner (the big technology industry research firm) caught my eye – their analyst Laura McLellan predicted that by 2017 CMOs would spend more on technology than CIOs. She was almost correct – it happened in 2016, a year ahead of schedule.
Think about it. Marketing departments are now spending more on information technology than the department that is responsible for a company’s technology infrastructure. Crazy, I know!
This has led to a proliferation of companies clamoring for a piece of this MarTech pie. In 2011, about 150 companies offered MarTech solutions; in 2019, over 7,000 companies compete in this space.
What is the aim of all these solutions? More importantly, what has changed with CMOs to prompt this massive investment in technology? It boils down to three main factors:
Most CMOs now share P&L responsibility. Instead of just being a “cost center,” marketing is looked on as a fundamental part of revenue generation.
Marketing funds and designs the entire cross-functional customer experience (CX). If you think of CX holistically from generating awareness through post-sales feedback, it makes sense that marketing is in charge.
Finally – and arguably most importantly – with the soaring costs involved in attracting, maintaining, and growing the customer base, marketing now has to justify the ROI of their activities.
CMOs are turning to data-driven solutions that help them deeply understand every phase of the customer journey – tracking and quantifying the ROI of all marketing activities along this journey. They are also investing heavily into solutions that personalize the customer’s experience with the hope of converting these interactions into greater sales opportunities.
Technology Solutions and Their Uses
As researchers, we need to know the types of technologies where our clients are spending significant portions of their overall budgets (~30%) so we can recognize where we fit as human insight professionals. We don’t have to be experts in tech, just conversant — so when we walk in the door and our clients say they are using a new “Artificial Intelligence email optimization tool,” we understand what that is and can talk about how our services complement and augment this tool.
I’ve put together a few charts and tables outlining some of the fundamental building blocks of these solutions. Most MarTech offerings are powered by technologies such as Artificial Intelligence, Machine Learning, Business Intelligence, and Real-Time Analytics. I find it useful to see the interaction of these technologies with a chart:
To understand definitions of these technologies and common uses, this table is a quick reference (CAUTION: Tech speak ahead):
Artificial Intelligence. Common uses: unified customer data platforms, predictive analytics, and contextual customer journey interactions.
Machine Learning. Any system that learns from past data to make judgments about previously unseen new data. Common uses: optimizing ad campaigns and other metrics, and predicting churn.
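That definition of machine learning – a system that learns from past data to make judgments about previously unseen data – can be made concrete with a toy example: a one-nearest-neighbor churn predictor. The features and records below are invented, and real systems use far richer models; this only illustrates the learn-from-past-data idea:

```python
def predict_churn(history, new_customer):
    """One-nearest-neighbor: judge an unseen customer by the most
    similar customer in the past data (squared Euclidean distance)."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, churned = min(history, key=lambda rec: distance(rec[0], new_customer))
    return churned

# Past data (invented): (monthly_logins, support_tickets) -> churned?
history = [((30, 0), False), ((25, 1), False), ((2, 5), True), ((1, 8), True)]

at_risk = predict_churn(history, (3, 6))   # resembles past churners -> True
loyal = predict_churn(history, (28, 0))    # resembles loyal customers -> False
```

Notice that the "judgment" is only as good as the past data it was given, which is exactly the limitation the next section turns into opportunities for qualitative researchers.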
Opportunities for Quallies
Many of the technologies outlined above have inherent limitations – which I like to think of as “opportunities” for qualitative researchers. Most of the limitations center around the data – quality (how good is your data) and quantity (do you have enough of the right type of data). In addition, the other major limitation is having enough marketing content – a major bottleneck in the quest for personalized customer engagement.
Limitation: Decisions are made solely on data – past and present. Opportunity: Use the data as a launching point for deeper qualitative analysis.
Limitation: Existing data is not predictive enough for decision-making. Opportunity: Create and maintain communities focused on pinpointing predictive behavior.
Limitation: Exponentially more messaging content is needed for personalization. Opportunity: Assist in narrowing target messaging by identifying key characteristics valued by customers.
Limitation: There is insufficient data to train the machine/AI. Opportunity: Provide personas and other descriptive metrics to help “train” algorithms.
Limitation: “Industry-specific” attributes are lacking. Opportunity: Create detailed feature lists to describe the unique features inherent to that industry.
While the ideas above are great tactical opportunities, strategically, our most important job as qualitative researchers is to remind our clients that, in a world of automation, humanizing the experience of individual customers is key to authenticity.
Lisa Horwich is the founder of Pallas Research Associates, a B2B-focused research and consulting firm located in Seattle, WA. She is a self-ascribed tech geek and loves talking to developers, IT decision-makers, and CIOs. She also co-chairs the QRCA B2B SIG.
Posted By Liza Carroll,
Wednesday, April 17, 2019
Updated: Tuesday, April 16, 2019
Design Thinking – Beyond the Breakers
Depending upon the source, Design Thinking (DT) is key to innovation in everything from consumer goods to complex social systems, or it’s an overhyped workshop package. Having first been introduced to the concept at QRCA’s 2019 annual conference, and with the idea that others reading this blog might also be new to Design Thinking, I wanted to share more about it. Design Thinking is meant to place those who seek to engage in innovation – often diverse stakeholders – into an uncomfortable space. It should move people past their own biases so they can understand customers’ real needs, and design solutions that work.
The five steps of the process are most often introduced graphically on brightly colored hexagons: Empathize, Define, Ideate, Prototype, Test. Activities in the first two steps live in the problem space, and the last three are in the solutions space. People who understand the ego-threatening implications of these steps point out that practitioners must be willing to manage controlled chaos in seeking the path to making something great.
Design Thinking is demanding. Yet, it is often sold as a quick fix and its core essential stages skimmed. This is why it is disparaged by some designers and others close to it. Consultancies and companies seeking commercial success without committing to authenticity may champion superficial workshops. Some using the process try to make Design Thinking overly linear, misunderstanding the untamed nature of the creativity that lives within its DNA.
The first step – Empathize – has the most relevance to qualitative researchers — but can also be the most often snorkeled-over by those who don’t have the training or the gear to dive deep. “Empathy is hard!” notes Annette Smith in Is Design Thinking a Silver Bullet for Consumer Research. She explains what we all know better than most: “The ability to empathize without imposing your own cultural values and preconceived notions on a consumer is just not easy to do.” Add cultural difference to the equation, and empathizing is, of course, exponentially more difficult.
Jon Kolko addresses criticism of DT in his article, The Divisiveness of Design Thinking. He asserts that the real work required during the Empathize step might conceivably be exchanged for 2-hour ‘subject matter expert’ interviews; but in taking such an approach, you may only gain a scratch-the-surface understanding of the business needs at hand. Kolko also examines breakdowns that happen in the other Design Thinking steps. In summary, anyone planning to take on the enormous job of leading others through the process must have the ability and experience to guide people toward dramatically reframing a problem by asking more interesting questions, and to facilitate rich, meaningful collaboration. I recommend reading Kolko’s article to gain a much deeper introduction to the topic than provided in most introductory articles that stick to defining the steps.
Circling back and thinking about Design Thinking’s qualitative heart, it’s interesting that just this month there was a post on the Qual Power Blog by Patricia Sunderland titled When Ethnography Becomes a Joke. In her post, she explains the difference between valuable and degraded ethnographic fieldwork – the methodology that is, as it happens, key to Design Thinking’s first step, Empathize. Sofia Costa Alves, in her presentation Discover and Deploy Design Thinking, described the careful ethnographic work that underpinned the Design Thinking activities she facilitated in South America with participants holding diverse roles in a corporation.
Being introduced to Design Thinking, what it can yield when done courageously, and also the ways in which it can be used when thinking “out of the box”, has been a wonderful learning experience. If you would like a list of resources I found valuable for gaining some understanding of Design Thinking, feel free to let me know in the comments or email me at email@example.com.
Liza Carroll is Consumer Insights Manager at RDTeam, Inc.
Posted By Patricia Sunderland,
Tuesday, April 2, 2019
Updated: Tuesday, April 2, 2019
When Ethnography Becomes a Joke
It may or may not be news for readers of this blog — but for at least some clients, ethnography has turned into a joke.
For a number of years, we have witnessed a diminishing appetite for ethnographic work among commercial clients. Competition and challenges from new methodologies are understandable and to be expected. Yet an undercurrent of “we do not want to do ethnography because we tried it and we did not get anything out of it” has been unsettling. More troubling, a few months ago a client put it more bluntly: “No. Ethnography no way. It’s a joke around here when you mention it.”
Ugh. How could the methodology that I learned as an anthropologist and built my career around in the world of qualitative research have become a joke? And even more importantly, what must we do to retrieve ethnography from that dustbin of bad jokes?
Rejuvenate the Basics
Without simply sounding a conservative cry, one thing we must do is go back and ensure that we always deliver on the basics of solid ethnographic work. Ethnographic work seems to have been undergoing a process of lightening in which observation alone, a person alone, or even the word alone will suffice.
Observation and Conversation
Ethnographic fieldwork – as imagined and pioneered by founders such as Bronislaw Malinowski – was never simply about observation. The observational component was coupled with participation, as in participant observation, as well as linked with conversation, interviews, and quite simply put, talk. Observation without any window into what is going on in a person’s mind and heart while they are doing whatever they are doing is anemic at best. Frequently it is also off-base. A key to comprehension in ethnography, as in much qualitative work, is understanding a person’s point of view.
In January 2019, Rachael Lawes provided an outstanding webinar, “Honing Your Ethnographic Eye”. Drawing from discourse analysis, one of the key points of her presentation was the importance of attending to defensively designed statements in speech, for instance, when a person frames what they are saying as “simply stating a fact.” A pre-emptive defense such as this may indicate that the person may feel insecure about the point they are making and/or they may feel that others are likely to argue with what they are saying. Obviously, it is important that we listen – carefully – and not only observe.
Persons and Contexts
Also, while it is an ethnographic basic to understand a person’s point of view, the assumption is not that a person stands alone. When we do our ethnographic work, one of the strengths we can bring to the qualitative research table is to situate a person’s viewpoints and behaviors within a macro-societal as well as meso-social context. This can mean that rather than just studying the person, our unit of ethnographic analysis can and should be the household, the friendship group, the workplace, the family, and/or any social grouping that makes sense for the question and issue at hand.
Injecting Serious Analytic Soul
Beyond being sure to include both conversation and context as part of our ethnographic research, injecting serious analytic soul into the work is also definitely in order. One factor that seems to have fueled the jokes is the handoff of ethnographic work to junior and client DIY teams. Unfortunately, what can and often does go missing in this handoff is the analytic component.
In much current commercial ethnography, it is almost as if the importance of the analysis has been forgotten. There is a tendency to take ethnographic work as if it is a case of “what you see is what you get.” But, of course, what one sees is filtered by the mind. And while ethnographers must strive for an open mind in order to grasp the point of view of others, they also bring every bit of experience, theory, and knowledge to their encounters and their own mental processing of the data.
For instance, a number of years ago, colleague Rita Denny and I worked on a new product study centered around home organization. The company’s goal was to develop new home storage products. As I observed and talked with people about how they organized items in their homes, it became obvious that spatial orientation (e.g., up versus down; vertical vs. horizontal) was providing critical cues. Items that were “up” were considered more organized than those that were “down.” Items that were vertical were considered ready to use; horizontal or flat signaled “in use.” Items that got stacked were packed. The photos below help illustrate the point.
Vertical hanging on the door – an organized way to keep items that were ready to be taken out of the home.
Vertical files keep papers ready as a resource and what must be done next is kept in front.
A briefcase kept up off the floor was seen as neater and more organized than one on the floor. Also kept in vertical orientation.
Lying flat is a signal of “in use” as with a book lying flat on a surface next to the bed (vs. vertical on a shelf, which is “ready for use”). But flat also often leads to “stacked,” which then quickly leads to “packed.”
This spatial insight would not have been possible without the benefit of having once read Lakoff and Johnson’s Metaphors We Live By. Lakoff and Johnson examined the way linguistic metaphors organize the way we think about and experience the world. Good moods, for example, tend to be described as “up” and bad moods in terms of “down.” And for the purposes of this example, think about the phrases “picking up” and “cleaning up.”
We need to be ready to bring our analytic minds to the table as we perform ethnography. This is the real value of doing ethnography in business. When we make analysis central to the task, we are able to deliver serious and often breakthrough results. Inductive analytic insight provides ethnography its serious point of differentiation versus other methodologies. Analysis with attention to language and the larger social world (not only observation and the individual) has the power to move ethnography far beyond the realm of jokes.
Posted By Laurie Tema-Lyn,
Tuesday, March 19, 2019
Bring the POWER of Theater Games to Your Next Session!
Let me start off by saying I am not an actor, although I’ve had some theater training. I earn my living as a researcher, consultant and innovation catalyst, and I’ve been doing that for decades.
I like to bring PLAY into my work as I see the results are well worth it in terms of ramping up the energy of a flagging team, developing empathy, encouraging candid, uncensored conversations and triggering or evaluating new ideas.
Using theater games builds on fundamentals that all face-to-face researchers/facilitators should have in their arsenal. They include:
The ability to build rapport and have fun;
Creating a “safe place” so people feel comfortable expressing themselves;
Being able to read your group through attentive listening and observation;
Being willing to take a risk, knowing that there are no failures — risks lead to opportunities.
Here are tips and techniques to add to your repertoire:
Start with an easy game; I call this one Word Salad. It’s a new twist on the tried-and-true technique of Mind Mapping by adding a pulse — a finger snap — as you capture each participant’s words on a flip chart pad. Breathe and repeat each word or phrase that you are given as you chart. It can be a bit hypnotic. Participants stop self-censoring and by pausing a moment as you repeat the words they listen, reflect and connect. A variation is to use a Nerf ball and throw it to participants to respond. Less time for “thinking,” just gut level responses.
Experiment with Improvs to illuminate brand perceptions, product or service use, or to inform creative strategy or positioning. It’s good to do a bit of pre-planning to identify some people, places, things or situations that you might want to see “acted out” in your work session. Position the exercise as an experiment. Ask for volunteers and give basic improv guidelines, including the use of “Yes, And…” to accept or build on their partner’s offers. Remind participants that you are not looking to them to be funny or clever, just authentic to the character or situations. After you conduct a couple of improvs, it’s important to review what all have learned.
Theater of Exaggeration. Try this out to spice up a concept review. You might begin in your typical fashion and then encourage participants to push the boundaries. What are the Most Outrageous Plusses or Benefits to this concept? Conversely, what are the Most Outrageous Negatives to this idea? You just might end up with some new ideas or identify problems that participants had been too polite to suggest earlier.
Mouthfeel: Try this out to help evaluate a name and positioning. This is an improv where participants stand up and have a conversation using a new name or positioning. I recently ran a naming session with a colleague for a social services agency. We had six names in the top tier and were trying to evaluate which were the best. One of the name candidates looked great on paper, but when I asked for two volunteers to improv it (one in the role of a crisis hotline operator, the other a client calling for help) we realized it was a bear… too cumbersome to speak when used in context. We nixed that one from the list.
Spontaneity based on solid preparation. These games work when you mentally prepare yourself as facilitator, prepare your respondent team by providing clear guidelines for what you are asking them to do, and prepare your client team in advance so that they won’t be shocked or worried if you add a theater game to your discussion guide or agenda.
These are just a small sampling of theater games and activities you might bring to your next gig. I encourage you to try them out and make up your own, and feel free to get in contact with me.
Posted By Kendall Nash,
Tuesday, March 5, 2019
Updated: Wednesday, February 27, 2019
Practical and thoughtful, but a walking contradiction. She made it clear that every decision she made had a purpose, and every item she bought met well-defined criteria. As she described her grocery store trips, she recalled the price associated with each and every item. In order to even make it into her cart, the items on her shopping list had to fall within an acceptable and narrow margin. And yet, her eyes lit up and you could see her lost in her memories as she described the unique metal bracelet on her wrist that she had bought on a whim for 250 euro during a trip to Barcelona. She smiled again and told me about how it was made.
Scratching Our Heads
That moment when the consumer tells you something totally incongruent with the story you’ve crafted in your mind of who they are and how they live…
Those comments that seem to contradict each other within a span of minutes…
We formulate clear pictures in our own minds of who a person is and what matters to them, only for them to turn around and tell us something that leaves us scratching our heads.
In my early years as a Qualitative Researcher, I’d find myself frustrated. Seeking patterns and convergence of themes, I was always challenged when things didn’t line up. Sure, I understood things would vary from person to person, but I was caught off guard and perplexed by the number of things that didn’t add up within the perspective of one individual.
Humans Are Messy
Of course, it didn’t take me long to realize what many before me had contemplated – that humans are, in fact, messy. We don’t follow a logical path down the road. There’s not always a reason – or at least not a consistent, or “good”, one. We don’t always make linear decisions. Sometimes we struggle with opposing internal forces that shape our mindsets and behaviors.
But then something beautiful happened.
When I looked more closely at those incongruencies within a single person, there were valuable opportunities for my client to step in and meet the consumer in the midst of the messiness. We identified opportunities for innovative products and delivery, discovered more meaningful ways to connect with those not yet using their brand, and found unique ways to give someone a great customer experience worth talking about. It was actually in those messy places we were finding our most disruptive learning – you know, the insights that make your team say “whoa, yes.” It’s exhilarating to experience those moments when you are onto something that you know will significantly and positively impact your business.
Unveiling the Mess with Qualitative Research
As a fan of both quantitative and qualitative research, I respect the ways both serve in delivering the information we need to make good decisions. Yes, enough people will tell you that quantitative tells you the what and qualitative tells you the why, but it’s so much more for me. Quantitative offers us sound decisions, confidence in direction before we set sail, and a big, delicious slice of the world. The beauty of qualitative is our ability to get in the nooks and crannies. To discover the mess and bring things into the light that just might unlock something truly magical for the brand. The rapport we build with consumers allows us a richer glimpse into what matters to them, so we can become brands that matter to them.
Embrace the Mess
Knowing that the messiness of the human heart and mind can be where the greatest potential lies for brands, we can see those moments through an entirely different lens. The next time in research you find yourself with a consumer who doesn’t seem to fit into a perfectly shaped box in your mind, celebrate! When things don’t add up exactly the way you expect them to, celebrate! You are probably onto something really good. And we go after good things.
What about you? Where have you found gold in the messiness of incongruent, inconsistent, yet beautiful human beings?
Kendall Nash is a Vice President at Burke, Inc. in Cincinnati, Ohio. She is an instructor for the Burke Institute and a past president of QRCA. Kendall’s curiosity drives her closer to consumers and their experiences. Her thrills come from uncovering what people truly want and need, and translating that so brands can win.
Posted By Lauren Isaacson,
Tuesday, February 19, 2019
Updated: Friday, February 15, 2019
A friend of mine is a designer who has worked with various divisions of the government of Canada. She told me about working with one particular department. She would show them potential design improvements to existing websites based on qualitative usability tests and they would invariably come back with the question, "How do you know it's better?"
Indeed, how does one know for sure a new website is better than the existing version? As researchers, we know the answer — benchmarking data. However, what's the best way to benchmark the usability of a system? Two methods are commonly used by UX researchers:
System Usability Scale (SUS)
Single Ease Question (SEQ)
System Usability Scale (SUS)
SUS is the most widely used and documented of the two options, with references in over 1,300 articles and publications. It's also free and applicable to pretty much any piece of technology. SUS consists of 10 questions, all using the same 5-point agreement scale (1 = Strongly disagree, 5 = Strongly agree):
I think that I would like to use this system frequently.
I found the system unnecessarily complex.
I thought the system was easy to use.
I think that I would need the support of a technical person to be able to use this system.
I found the various functions in this system were well integrated.
I thought there was too much inconsistency in this system.
I would imagine that most people would learn to use this system very quickly.
I found the system very cumbersome to use.
I felt very confident using the system.
I needed to learn a lot of things before I could get going with this system.
The numbering of the questions is essential for calculating the overall score. For each odd-numbered question, subtract 1 from the response; for each even-numbered question, subtract the response from 5. Summing these adjusted values leaves you with a score between 0 and 40, which is then multiplied by 2.5 to stretch the range to 0 to 100. This final number is a score and should not be confused with a percentage.
Lucky for us, the good folks at Measuring U have analyzed the responses from 5,000 users evaluating 500 websites and have come up with a grading system to help interpret the scores:
~85+ = A
~75 - 84 = B
~65 - 74 = C, 68 is the average score
~55 - 64 = D
~45 or under = F
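The scoring arithmetic and approximate grade bands above can be sketched in a few lines of code. This is a minimal illustration, not Measuring U's official calculator; the function names are mine, and the grade cutoffs simply follow the approximate bands listed here:

```python
def sus_score(responses):
    """Compute a SUS score from ten 1-5 responses (item 1 first).

    Odd-numbered items contribute (response - 1); even-numbered
    items contribute (5 - response). The 0-40 sum is scaled by 2.5.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5


def sus_grade(score):
    """Map a SUS score to the approximate letter grades above."""
    if score >= 85:
        return "A"
    if score >= 75:
        return "B"
    if score >= 65:
        return "C"
    if score >= 55:
        return "D"
    return "F"


# A respondent who answers 4 on every odd item and 2 on every even
# item contributes 3 points per item: (3 * 10) * 2.5 = 75, a "B".
print(sus_score([4, 2] * 5))             # 75.0
print(sus_grade(sus_score([4, 2] * 5)))  # B
```

Note that because even-numbered items are reverse-coded, a straight average of the raw responses would be meaningless; the per-item adjustment has to happen first.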
If you would like a more official and accurate grading system, you can buy Measuring U's guide and calculator package.
Single Ease Question (SEQ)
The other method, the Single Ease Question (SEQ), is less commonly used and has no documented standard wording, but it has the advantage of being much shorter than the SUS. I am always in favor of making surveys shorter. The SEQ consists of one question rated on a 7-point scale covering the ease of completing a technology-enabled task. Like the SUS, it is also free and applicable to almost any piece of technology.
Overall, how difficult or easy did you find this task?
Because there is no documented standard wording of the SEQ, you can tailor the question to cover the metric your stakeholders are most concerned about — confidence, speed, usefulness, etc. The SEQ also pairs very well with unmoderated usability tests often used by researchers who need quick feedback on interfaces.
Measuring U found the average scores across multiple websites to be about 5 (Somewhat easy), but this system is less documented than SUS. Therefore, use it to compare the before and after of a redesign, but not against other sites as you can do with SUS. If you're looking for more than just benchmarking data, you can also add two open-ended questions to the SEQ without risking excessive length.
What would make this website/form/app/system better?
What is something you would fix on this website/form/app/system?
These voluntary open-ends give respondents the opportunity to offer their suggestions about what is wrong with the system and how they might make it better. It provides the potential to understand the “why” behind the data.
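Because the SEQ is best used to compare a system against itself before and after a redesign, the analysis can be as simple as comparing mean ratings across the two rounds. A minimal sketch, with entirely hypothetical ratings and a function name of my own choosing:

```python
from statistics import mean


def seq_change(before, after):
    """Compare mean SEQ ratings (1-7 scale) collected before and
    after a redesign; a positive change means tasks got easier."""
    return round(mean(after) - mean(before), 2)


# Hypothetical SEQ ratings for the same task, pre- and post-redesign
before = [3, 4, 4, 5, 3]
after = [5, 6, 5, 6, 5]
print(seq_change(before, after))  # 1.6
```

With real data you would also want enough respondents per round for the difference to be meaningful, but the core comparison is just this before/after delta.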
In the end, by fielding either of these UX survey question sets both before and after a system redesign launches, you will be able to tell your stakeholders whether the redesign is indeed an improvement over the old system, and how much better it is.
Lauren Isaacson is a UX and market research consultant living in Vancouver, British Columbia. Over her career she has consulted for various agencies and companies, such as Nissan/Infiniti, Microsoft, Blink UX, TELUS Digital, Applause, Mozilla, and more. You can reach her through her website, LinkedIn, and Twitter.