"Good afternoon, have you tortured any baby dogs today?"
A delightful lead-in to my marktforschung.de column, don't you think? The so-called "social desirability bias" pretty reliably ensures that on this question, the approval rate will tend toward zero.
Biases are cognitive distortions. With a less negative connotation, we also speak of heuristics, snap judgments, or mental shortcuts: many names for similar phenomena that significantly influence all of our perceptions, opinion formation, memories, and decision-making. Such thought patterns (psychology now describes well over a hundred of them!) are not per se good or bad, helpful or harmful. Above all, they are one thing: human.
Market researchers and dealing with cognitive biases: Absolutely everyday life.
As market researchers, in order to ascertain reality for our clients (or rather, the many subjective realities of our study participants), we need to know the full spectrum of these biases, understand them, and ultimately price them into our analyses. This matters not only in concrete survey situations, but even earlier, in the initial alignment with our clients. After all, only the least distorted possible analysis of where companies or brands actually stand (and where they want to go) promises a successful market research project.
And which heuristics are particularly relevant for us? In addition to the aforementioned "social desirability": the clustering illusion, for example, i.e. seeing patterns where there are none at all. Then there is, of course, the blockbuster among the "judgment biases": confirmation bias, the confirmation of one's own expectations. Or how about priming, framing, and the anchoring effect? Plus equating correlation with causation, the halo effect, and, not to forget, the availability heuristic!
All right, I'll stop already!
There are quite a few more biases; here is our own overview of them. Most of them influence not only the design of our questionnaires, the evaluation of the data, and the subsequent recommendations, but practically every process of market and opinion research. As trained experts, however, we nevertheless manage to depict human attitudes realistically. Besides important technical know-how, what helps us is a great deal of empirical knowledge about the error-proneness, seducibility, and convenience that are programmed into human thinking.
Programmed in? That rings a bell!
Isn't it widely speculated that AI-driven algorithms will soon take over one area after another in the research business as well? And not just the mass collection and processing of data. No, AI evangelists already claim that in the future even people's opinions and attitudes can be simulated, WITHOUT ever asking them!
In other words, by endlessly feeding programs with previous response behavior, they will learn how comparable participants are very likely to respond in upcoming surveys. Machine learning at its best!
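The basic idea can be sketched in a few lines of entirely hypothetical Python (all attribute names and answers are invented for illustration): a new participant's likely answer is predicted from the recorded answers of the most "comparable" past participants, a bare-bones nearest-neighbour approach.

```python
from collections import Counter

# Hypothetical toy data: past respondents described by a few attributes,
# plus the answer they gave to one survey question.
past_respondents = [
    {"age_group": "18-29", "urban": True,  "answer": "agree"},
    {"age_group": "18-29", "urban": True,  "answer": "agree"},
    {"age_group": "18-29", "urban": False, "answer": "neutral"},
    {"age_group": "50+",   "urban": False, "answer": "disagree"},
    {"age_group": "50+",   "urban": True,  "answer": "disagree"},
]

def similarity(a, b):
    """Count how many attributes two respondents share."""
    return sum(a[key] == b[key] for key in ("age_group", "urban"))

def predict_answer(new_respondent, history, k=3):
    """Predict an answer as the majority vote of the k most
    comparable past respondents."""
    ranked = sorted(history,
                    key=lambda r: similarity(new_respondent, r),
                    reverse=True)
    votes = Counter(r["answer"] for r in ranked[:k])
    return votes.most_common(1)[0][0]

# A new, never-surveyed participant is assigned the answer that
# similar past participants gave most often.
print(predict_answer({"age_group": "18-29", "urban": True},
                     past_respondents))  # → agree
```

Real systems would of course use far richer data and proper statistical models, but the principle is the same: past response behavior stands in for the question that is never actually asked.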
But is it really possible that, in the end, cleanly defined computer code simulates the "brain algorithms" of humans? Say, predicting their individual decision-making and thus their judgments and attitudes? From the historical data, the computer of course automatically picks up the effects of all the real, existing heuristics that human thinking obviously cannot do without. But does it also understand them in their inner logic? Or rather, in our very own inner illogic?!
Algorithms with intentional "bugs".
To me, it sounds a bit counterintuitive to deliberately write a lot of little flaws into a supposedly perfect AI. A programmer would probably speak of intentional "bugs" here. And such biases are not always committed, nor always to the same degree. People are subject to their biases depending on the individual context, sometimes on purpose, but more often unconsciously. In any case, not following any simple system.
Or do they? At least Daniel Kahneman introduced us all to System 1 and System 2 in his heuristics bible, "Thinking, Fast and Slow." According to his research, people much more often use the faster but error-prone System 1 in everyday life, and with it the mental shortcuts mentioned above. If a situation appears important enough, however, they switch almost automatically to System 2, a higher, more deliberate mode of thinking that avoids distortions as far as possible and (hopefully) produces rational results.
Back to market research and the question of whether an AI could actually replicate human thinking systems. Despite all the progress, don't we still need experienced flesh-and-blood professionals to handle, interpret, and apply opinion-based data? If only because humans are simply better at putting themselves in other people's shoes? Or will it really not be long before the first programs with intuition and empathy are available?
Currently everything seems to be in flux!
At this point, I'll be completely honest: precisely because of the latest spectacular leaps in the development of chatbots & co., I don't think it's very serious to predict how things will develop in this incredibly dynamic field of technology. Everything seems to be in flux, and for me, uncertainty is mixed with curiosity and a whole lot of excitement!
Speaking of "uncertainty": this reminds me of my very personal favorite bias, impostor syndrome, in German the "Hochstapler-Effekt." Those affected are plagued by great self-doubt about their own abilities and achievements. They keep thinking their real successes have nothing to do with themselves, but merely with luck and coincidence.
From Socrates to the impostor effect.
Sounds likeable at first glance; after all, it's better than narcissistically believing oneself to be the best at everything. I myself quite like this phenomenon at the moment, especially for its aspect of humility. So much is happening so quickly that the following attitude can hardly hurt: "The greater my knowledge, the more I know about my ignorance!" Socrates sends his regards here as well.
To me at least, this attitude keeps you awake and attentive, and effectively protects you from sticking your head in the sand. Which, by the way (sorry, I can't help myself right now), describes the ostrich effect. In technical jargon: the ostrich bias!
To err is human, and the distortions in our heads are human too!
Do we now seriously have to leave even THIS field entirely to the computers? No. Although the age of artificial intelligence has only just begun, we market researchers will not be handing over the reins of action (i.e. our clients!) any time soon. That is also thanks to another fundamental characteristic of our human brains: creativity.
Or when was the last time a machine wanted to know from you whether you beat cute little puppies?!
Asking sincerely, yours, Herbert Höckel
Herbert Höckel is a managing partner at moweb research GmbH. He has been a market researcher for more than 25 years. In 2004 he founded moweb GmbH, which he still owns today. Düsseldorf-based moweb operates internationally and was one of the first German market research institutes to specialize in digital processes.