An average person is exposed to more than 10,000 image-based impressions a day. By contrast, the average person takes in fewer than a thousand words and a dozen numeric comparisons per day. We receive roughly ten to 100 times more image impressions than any other type of information. This balance of information types is consistent with how the human mind works.
The first decision that a human makes is Momma versus NOT Momma. In the first twenty-four hours after birth, a normal baby can tell its own mother’s face from all other faces. The distinction is made on visual cues: eye and hair color, the overall shape of the face, and the specific shapes of the nose, mouth, eyes, and other facial features. The recognition is a visual discrimination.
Momma vs Not Momma
Then think about when a child forms their first sentence, generally at two to three years old. This is over 500 times later than their first visual discrimination. Then consider how long it is before they can do addition, subtraction, multiplication, and division. It is thousands of times later than the first visual discrimination.
If you think this is a biased comparison, try this experiment. I have a granddaughter who is nine months old (though she was born two months premature). Nonetheless, she just said her first word (da-da) and began to crawl in the last two weeks.
I put her in her little play area and placed one cookie at one end of it and four cookies at the other. She went for the four without hesitation. She cannot say “four” or count to four. She does have a visual construct that lets her judge MORE versus LESS. We all know more cookies are better; even a nine-month-old does. (Note: my daughter only let her nibble on one cookie with her one tooth.)
Our minds are visual, first and foremost, and do not rely on words and numbers to navigate our worlds. If we did, we might not make it to adulthood. Our verbal and numeric skills would come too late. Our visual recognition of threats and opportunities begins far earlier.
Our Industry Has it Backward
The mind operates on visual information more than ninety percent of the time. According to research from the Massachusetts Institute of Technology (MIT), ninety percent of the information transmitted to the brain is visual, and the brain can process an image in as little as 13 milliseconds, 60,000 times faster than text.
More than 99 percent of the data the consumer insights industry gathers is verbal and numeric. It is even worse than that: we also ask our questions verbally and numerically, and of the three data types, those are the hardest for respondents to answer.
What Are Visual Data?
How can we turn the visual messages into measurable data? The answer is simple and complex at the same time. We need to receive and record the visual information in the same way the brain does.
In Thinking, Fast and Slow, Nobel Prize winner Prof. Daniel Kahneman identifies System 1 thinking as “the brain’s fast, automatic, intuitive approach…” and that “intuition is nothing more and nothing less than recognition.” BUT RECOGNITION OF WHAT?
Our Mind Operates on Visual Structure
Here is an image that represented a breakthrough insight in the appliance industry. The research was about stovetops. The study found that consumers wanted a stovetop that took full responsibility for their safety and protected them from heat and harm. An image of a stovetop surrounded by snow summarizes that result.
The image as we think we see it is only the conscious (System 2) view, which treats the content as image recognition. The brain first sees its underlying structure, a (partial) System 1 view of the same image.
Neuroscience is the fastest-growing segment of the industry, and it observes the brain’s System 1 recognition as it happens. Yet it does not tell us what is being recognized. Visual Semiotics does.
Visual Semiotics is the science of Visual Data. In the example, blue and white produce some of the brain signals that neuroscience monitors as secure and isolated/safe. Shapes produce some of the signals monitored as separate/protected. Physical context (such as distance, dominance, and proximity) produces some of the signals monitored as in charge/responsible. We are aware of four other symbol types that complete the System 1 decoding of images.
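To make this concrete, the stovetop example can be sketched as structured data: each symbol type paired with the visual feature that carries it and the meaning it signals. This is a minimal illustrative sketch, not an actual Visual Semiotics tool; the class and field names are assumptions invented for this example, and the feature descriptions are paraphrased from the stovetop study above.

```python
from dataclasses import dataclass

@dataclass
class VisualSignal:
    """One decoded System 1 signal (hypothetical structure):
    a symbol type, the visual feature carrying it, and the
    intuitive meaning that feature evokes."""
    symbol_type: str   # e.g. "color", "shape", "physical context"
    feature: str       # the observable cue in the image
    meaning: str       # the meaning it signals to System 1

# Hypothetical encoding of the stovetop/snow image described above.
stovetop_image = [
    VisualSignal("color", "blue and white palette", "secure, isolated/safe"),
    VisualSignal("shape", "enclosed burner outlines", "separate/protected"),
    VisualSignal("physical context", "stovetop dominant in the frame",
                 "in charge/responsible"),
]

def decode(signals):
    """Collapse the signals into a symbol-type -> meaning map,
    mimicking a System 1 'read' of the image."""
    return {s.symbol_type: s.meaning for s in signals}

print(decode(stovetop_image))
```

The point of the structure is that the measurable datum is not the pixel content (System 2 image recognition) but the symbol-type/meaning pairs, which is what the article means by recording visual information the way the brain does.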
This Is Not New!
The insight that visual constructs shape our thinking dates to the 1960s, when psychologists at the Tavistock Clinic in the U.K. were working to develop a better treatment for autism.
They discovered that autistic children broke the world down into fewer symbolic visual structures (“constructs”) than other people, and that some of their constructs became totally dominant (being able to see through a window did not differentiate it from a door). Learning how each child decoded the world visually was the key to communicating with them. It also taught the clinicians how the rest of us visually deconstruct the world.
What is relatively new is the language for describing this process: Systems 1 and 2. Dr. Kahneman coined the terms and showed their relative influence on decision-making and economics about fifty years after the process was discovered.
(Note that Construct Psychology is the basis for Behavioral Therapy, which is both the most widely used and most effective psychological therapeutic method in the world. It is, for example, the only therapeutic approach known to help with substance abuse.)
The Visual Future
Over ninety percent of the information that gets us through our daily lives is visual. Over ninety percent of the information on the Internet is visual. The visual data available on the Internet makes what we currently call Big Data look minuscule by comparison.
At present, we intuitively recognize the meaning of Visual Data. To read and write it, we need to learn a new language, i.e., Visual Semiotics. Visual Data is the future.