AI in market research should help humans (not replace them!)

I know that’s not what you’re expecting to hear from an AI company focused squarely on market research, but hear me out…

Our industry is at a crossroads between two very different visions for the future – one in which AI systems conduct research from start to finish and largely replace the researcher, and another in which AI is woven throughout the research process to assist the researcher at each step.

This tension was on clear display at IIEX, where I listened to dozens of pitches and case studies on how AI can be used in pursuit of understanding other humans. It reminded me of a framework for the potential uses of AI created by one of our investors – and all-around thoughtful human – Roy Bahat.

Roy offers three metaphors for different categories of potential AI systems:

  1. “Looms”: AI that generally seeks to replace a person, just as a loom fully replaces a weaver. Examples include self-driving taxis and AI customer support agents.

  2. “Slide rules”: AI systems that assist or empower a person, just as a slide rule helps a person calculate (Roy’s showing some age here, but in his defense, slide rules got us to the moon). Examples include grammar checkers and coding copilots.

  3. “Cranes”: AI systems that unlock the possibility for a person to do something previously impossible for them to do on their own. Examples include language translation and discovering thousands of new proteins in a flash.

None of these three categories is intrinsically better or worse than another, but as AI rapidly transforms countless industries, we need to be thoughtful about which is the right approach to best solve different kinds of problems. 

A thoughtful approach to AI in market research

Historically, the market research industry has been dominated by “slide rules” – survey platforms and analytics tools that have enabled researchers to field research and analyze data more efficiently. Occasionally a “crane” would come along, like online sample markets, which unlocked the ability for countless researchers to collect data themselves.

Some at IIEX seemed most excited about the possibility of AI to create research tools that are “looms” – for the first time fully replacing the human researcher.

There were many flavors of this product paradigm, each featuring a different medium (text, video, voice, etc.) and promising the same magic: pose a general question to the AI oracle, let it handle all the messy details, and wait for it to return with the answer.

For example, I spoke with numerous companies offering “fully automated interview” platforms that use AI chatbots to conduct interviews autonomously and then use LLMs to summarize the resulting chats, without any human guidance along the way.

But between the curated demos and polished case studies, I heard a different question over and over from potential customers of these tools:

“Is removing the researcher from the research really a good thing?”

This question runs deeper than simple job-security anxiety. It asks us to consider what critical insights are missed and what business opportunities are overlooked in a world of research “looms.”


Is removing the researcher entirely from research really a good thing?

To answer this concretely, let’s consider one of the key techniques underlying these “looms,” called summarization. It does exactly what it sounds like: it takes a large amount of text and condenses it into a much shorter summary that is approximately representative of the original data set.
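To make that concrete, here is a minimal sketch of what such a summarization step can look like, using an off-the-shelf Hugging Face model. The model choice, length limits, and sample responses are all illustrative assumptions, not any vendor’s actual pipeline:

```python
# A minimal sketch of the summarization step inside a research "loom".
# The model and parameters below are illustrative assumptions, standing
# in for whatever proprietary LLM a given vendor actually uses.
from transformers import pipeline

# Load a general-purpose summarization model (illustrative choice).
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

# Imagine this is a pile of open-ended survey responses.
responses = """
The onboarding flow was confusing and I almost gave up.
Love the product, but the mobile app crashes constantly.
Support was friendly, but it took three days to get a reply.
"""

# Condense the raw text into a short summary. Note what the machine
# optimizes for: brevity and coverage, not strategic relevance.
result = summarizer(responses, max_length=40, min_length=10, do_sample=False)
print(result[0]["summary_text"])
```

Nothing in this step knows your company, your strategy, or the question you are actually trying to answer.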

AI summarization is an incredibly powerful tool with many valuable applications. It also differs in important ways from how a human researcher works. Fully exploring these differences could (and should) occupy a dozen more pages, so I’ll focus on just two things that the average human researcher has in spades and that AI summarization is neither designed for nor capable of.

Humans have strategic intuition for what signals matter most (and AI doesn’t).

Human researchers possess vast amounts of highly specific contextual knowledge about their industry, company, products and the strategic considerations they are facing at any given moment – all of which shape the research questions they are pursuing. As a result, they have a uniquely fine-tuned intuition of what signals in the data are meaningful and actionable.

Unsupervised AI summarization systems lack this deep, specific contextual knowledge, and as a result they are prone to skip over the signals that could be the most meaningful while highlighting those that are less so.
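As a rough sketch of what that missing context looks like in practice, compare the instruction an unsupervised “loom” sends to its LLM with one a researcher could steer. The prompt wording and the build_prompt helper below are hypothetical illustrations, not any product’s API:

```python
# A sketch contrasting an unsupervised summarization prompt with one
# steered by a researcher's strategic context. Both prompts and the
# build_prompt helper are hypothetical illustrations.

GENERIC_PROMPT = "Summarize the following customer feedback:\n\n{responses}"

STEERED_PROMPT = (
    "You are assisting a researcher at a subscription software company "
    "that has raised prices twice this year and is weighing a third "
    "increase.\n"
    "Summarize the following customer feedback, paying special attention "
    "to anything that distinguishes fatigue with repeated price changes "
    "from complaints that the price level exceeds the product's value:\n\n"
    "{responses}"
)

def build_prompt(responses: str, with_context: bool) -> str:
    """Return the instruction an LLM would receive. An unsupervised
    'loom' only ever sends the generic version; the strategic framing
    has to come from a human."""
    template = STEERED_PROMPT if with_context else GENERIC_PROMPT
    return template.format(responses=responses)
```

The steered version isn’t smarter AI – it’s the researcher’s context doing the work.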

Humans are intuitively good at subtlety and nuance in language (and AI isn’t).

Good researchers are great at finding subtle differences in language that unlock critical insights. A keen researcher will know that a respondent saying, “I’m frustrated because you keep raising prices” is very different from one saying, “I’m frustrated because prices are too high.” The former is price-change fatigue, while the latter is a price-to-value problem. And each has very different strategic solutions.

Fully automated summarization, on the other hand, performs the opposite task: combining similar concepts for greater simplicity and brevity. These systems are all about lumping together similarities, not parsing subtle but important differences.

So when summarization alone is used as the primary tool for research analysis, it will, by its very nature, collapse nuance into simplicity and obscure potentially key insights along the way.
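You can see the lumping mechanism at work in a small experiment. Assuming a typical embedding-plus-similarity design for theme grouping – a common pattern, though not a claim about any specific product – the two price complaints above land close enough together that most clustering thresholds would merge them:

```python
# A sketch of why similarity-based grouping collapses the distinction
# between the two price complaints. Assumes theme extraction built on
# sentence embeddings and cosine similarity -- a common design, though
# real products vary. Model choice and threshold are illustrative.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

a = "I'm frustrated because you keep raising prices."  # price-change fatigue
b = "I'm frustrated because prices are too high."      # price-to-value problem

emb_a, emb_b = model.encode([a, b])
score = util.cos_sim(emb_a, emb_b).item()
print(f"cosine similarity: {score:.2f}")

# The two sentences share most of their wording, so they typically score
# well above a common merge threshold and end up in a single "price
# frustration" theme -- erasing exactly the nuance a researcher acts on.
MERGE_THRESHOLD = 0.7  # hypothetical clustering threshold
print("merged into one theme" if score > MERGE_THRESHOLD else "kept separate")
```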

Taking just these two differences, it’s clear that in a world of research “looms” built on unsupervised summarization, we would be substituting machine computation for human intuition, and abstract summaries for nuanced detail.

We might gain speed (though that’s not certain), and we would certainly lose many critically valuable insights in exchange. 

Unsupervised AI oracles, by their nature, obscure what they miss.

Perhaps the most troubling aspect of fully automated research tools is that most of us wouldn’t know what we’re missing.

When we adopt “loom” technologies, we ask the AI questions and it returns polished answers. We accept them, because the effort required to find what was missed would defeat the purpose of using a “loom” in the first place – and so we run with them, never knowing that critically valuable insights sit below the surface of the summary.


Human-in-the-loop is the AI solution for market research

This future of research “looms” doesn’t sound like a better world to me. And judging from the chatter I was hearing at IIEX, many researchers – the potential buyers of these tools – aren’t convinced either.


This is why Fathom is focused squarely on being both a “slide rule” and a “crane”: unlocking the ability for researchers to understand people in their own words, at scale, with unparalleled nuance and accuracy, by weaving human intuition and strategic judgment together with the power of AI at every step of the process.

While others are working to automate the researcher away, we put the researcher at the center of our AI platform, empowering their strategic intuition and innate curiosity to shape every step of the research process.


Reflecting on the competing visions for the future of research that I saw at IIEX last week, it’s clear that our industry is at a crossroads between a future of research “looms,” in which we automate the core of the research process, and a future of “cranes” that empower human researchers at every step.

At Fathom, we’re betting everything on building a world where AI is a force multiplier for the unique power of human curiosity, intuition, and empathy. I hope you’ll join us.

Interested in seeing Fathom in action? Book a demo to unlock your limited free trial!

 


