In 2006, a year before launching the iPhone, Steve Jobs was running focus groups at Apple to understand what kind of smartphone people wanted. Asked that question, most people would answer: “I want a smartphone with a big screen and a large keyboard.” No one asked for a touch-screen phone without a keyboard; everyone asked for a bigger, better version of the BlackBerry. That is feedback which, had it been followed, would not have led Apple to innovation.
The duty of innovators and leaders is not to respond to customer requests and wishes, but to anticipate what customers will need before they are even able to express it themselves.
How can we do that? Through sensemaking, a practice of dealing with data that we will explore in this article.
In 2010, even though the iPhone had launched back in 2007, BlackBerry was still the market leader, holding 40% of the US market and 20% of the world market, which was really impressive. And from 2007 to 2012, the number of BlackBerry users grew almost tenfold, from 8 million devices to 77 million. The problem was that over the same period its market share dropped dramatically, from over 50% to less than 10%. So what happened? By looking only at internal metrics, which were growing steadily year on year, the leadership did not realize that the company was growing far more slowly than the market itself, driven by the widespread adoption of the iPhone.
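The figures above make the trap concrete. A quick back-of-the-envelope check, using the article's numbers and the simplifying assumption that market share equals BlackBerry devices divided by total market devices, shows just how differently "our growth" and "the market's growth" behaved:

```python
# Figures from the article; the implied total market size is derived
# from each year's share (assumption: share = BB devices / market devices).

bb_2007_devices = 8_000_000    # BlackBerry devices, 2007
bb_2012_devices = 77_000_000   # BlackBerry devices, 2012
share_2007 = 0.50              # market share in 2007 ("over 50%")
share_2012 = 0.10              # market share in 2012 ("less than 10%")

market_2007 = bb_2007_devices / share_2007   # implied total market, 2007
market_2012 = bb_2012_devices / share_2012   # implied total market, 2012

bb_growth = bb_2012_devices / bb_2007_devices      # how fast BlackBerry grew
market_growth = market_2012 / market_2007          # how fast the market grew

print(f"BlackBerry grew {bb_growth:.1f}x, the market grew {market_growth:.1f}x")
```

BlackBerry's internal dashboard showed roughly 10x growth, yet the implied market grew close to 50x over the same window: a company can grow impressively and still be losing badly.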
But Andrea, why data and “sensemaking”? Why not just ask our customers what they want? Well, exactly for the reason I explained at the beginning of this article!
Now we need a better understanding of what sensemaking is. By definition, it is the process through which people make sense of data: the ability to extract insights from it. Data is just the raw material and does not, by itself, help us make better decisions; sensemaking turns it into insights, the starting point of a good decision-making process.
Sensemaking, a term introduced by Karl Weick, refers to how we structure the unknown so as to be able to act in it. Sensemaking involves coming up with a plausible understanding—a map—of a shifting world; testing this map with others through data collection, action, and conversation; and then refining, or abandoning, the map depending on how credible it is.
The move to the complex occurs as new information is collected and new actions are taken. Then, as patterns are identified and new information is labeled and categorized, the complex becomes simple once again, albeit at a higher level of understanding.
Now, when we put that into the context of Big Data, since we are living so much of our lives online, it’s never been easier for companies to access real behavioral data about what people do, when they do it, how they do it, and how often they do it. Big, quantitative data can answer these questions in real-time based on observed (not interview-based) data and has revolutionized market research in the past decade.
However, there remains one hugely important question that is largely outside the reach of big data approaches: the question of Why? Why do people behave in a certain way, buy a certain product or service, or hold a certain belief? As we all know, this is where the “thick data” that qualitative research generates still has a very important role to play from an insight and decision-making perspective.
Having said that, qualitative research naturally comes with its own strengths and weaknesses. Among the latter, the most important is perhaps the lack of breadth, or scale. There is, after all, a limit to how much you can safely generalize about the wider target group from 10-15 in-depth interviews, 2-3 focus groups, or even an online community.
For that reason, the best research designs have always sought to integrate qualitative and quantitative sources and perspectives, either sequentially or in a mixed-method design.
One such mixed-method approach that we think deserves much more attention in the research community is a method coming to be known as ‘active sensemaking’.
But in the current world of Big Data there is a problem: there is simply too much data! And this is where the power of AI comes into play.
Let’s look, as an example, at how AI is revolutionizing market research. Imagine if a brand could not only monitor current consumer sentiment, but also predict future sentiment. Imagine if a brand could gather all the consumer data available today and filter it in a way that revealed patterns and predicted future trends. The advantages for strategy and product development would be enormous.
The reality is, this isn’t too far-fetched of an idea. The rapid adoption of emerging technologies such as artificial intelligence (AI) and machine learning in market research has completely changed the way we collect and analyze data. So as these technologies develop, it’s entirely possible that they’ll have the capacity to predict consumer trends in the not-so-distant future. Market research has accelerated at incredible speed over the past several years. It used to be the norm for research to take weeks or even months to complete – the results of which would inform the next 6-12 months’ planning. But in the world of a 24/7 news cycle and round-the-clock social media consumption, businesses no longer have that kind of time at their disposal.
When it comes to data volume, you might think the more data, the better. But too much data is simply impossible to analyze by hand. With millions of consumers, each leaving traces across countless data sources, the sheer volume of data that can be mined is astounding. But if you can’t make any sense of it, you might as well have no data at all. This is where AI comes into play.
AI completely transforms the speed at which data can be analyzed. It breaks data down into easily digestible information, analyzing it and presenting it in a way that delivers rich, detailed, and useful insights. AI can pull together millions of data points in mere seconds — a job that would be impossible for a human to do manually. And it’s particularly useful for sentiment analysis of open-ended questions — one of the most time-consuming aspects of reviewing data.
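To make "sentiment analysis of open-ended answers" tangible, here is a deliberately tiny sketch: a lexicon-based scorer that turns free-text survey answers into numbers. Production systems use trained language models rather than hand-picked word lists; the lexicons and answers below are invented purely for illustration.

```python
# Toy lexicon-based sentiment scoring: count positive words minus
# negative words in each open-ended answer. Purely illustrative.

POSITIVE = {"love", "great", "excellent", "easy", "fast"}
NEGATIVE = {"hate", "slow", "broken", "confusing", "expensive"}

def sentiment_score(answer: str) -> int:
    """Return (#positive words - #negative words) for one answer."""
    words = answer.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

answers = [
    "I love how fast the app is",
    "The checkout flow is confusing and slow",
    "Great design but a bit expensive",
]
scores = [sentiment_score(a) for a in answers]
print(scores)  # → [2, -2, 0]
```

The point is not the scoring rule itself but the shape of the task: thousands of free-text answers become a column of numbers that can be aggregated, tracked over time, and compared across segments in seconds.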
Using AI in this way is quickly becoming industry standard. But the real potential lies in how we can harness AI to predict the future. When combined with machine learning technology, AI learns as it goes, getting smarter and smarter, day after day. As it analyzes the millions of data points from millions of people across the globe, it starts to notice patterns. And it’s those patterns which may hold the key to predicting future insights.
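The "noticing patterns and projecting them forward" idea can be sketched in a few lines. The example below fits a straight-line trend to hypothetical weekly sentiment averages using ordinary least squares and extrapolates one week ahead; real trend-prediction systems use far richer models, and the numbers here are made up.

```python
# Fit a linear trend y = slope * x + intercept to weekly sentiment
# averages (ordinary least squares), then project one week forward.
# The data points are invented for illustration.

weeks = [0, 1, 2, 3, 4]
sentiment = [0.10, 0.18, 0.26, 0.34, 0.42]  # hypothetical weekly averages

n = len(weeks)
mean_x = sum(weeks) / n
mean_y = sum(sentiment) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(weeks, sentiment))
         / sum((x - mean_x) ** 2 for x in weeks))
intercept = mean_y - slope * mean_x

next_week = slope * 5 + intercept  # projected sentiment for week 5
print(round(next_week, 2))  # → 0.5
```

A pattern learned from past observations becomes a forecast, which is exactly the leap from describing what consumers felt to anticipating what they will feel.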
It is because of these advantages that we see more and more investment in AI. Nevertheless, 65% of executives cannot explain how their AI models make decisions. And that is worrying. The figure comes from a new survey by global analytics firm FICO and Corinium, which polled 100 C-level analytics and data executives to understand how organizations are deploying AI and whether they are ensuring it is used ethically.
The findings echo a recent Boston Consulting Group survey of 1,000 enterprises, which found that fewer than half of those achieving AI at scale had fully mature, “responsible” AI implementations. The lagging adoption of responsible AI belies the value these practices can bring: a Capgemini study found that customers and employees reward organizations practicing ethical AI with greater loyalty, more business, and even a willingness to advocate for them, and punish those that don’t.
This is a big gap that needs to be filled: it is not enough to extract great insights from AI; we also need to understand how the model arrived at them, and how biased they may, or may not, be.
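One concrete way to see what "explaining how a model decides" means is to look at a model class where explanation is trivial. In a linear scoring model, each feature's contribution to a prediction is simply its weight times its value, which is exactly the kind of attribution that black-box models make hard. The weights and customer features below are invented for illustration.

```python
# Toy interpretable scoring model: with linear weights, every feature's
# contribution to the final score can be read off directly.
# All names and numbers are hypothetical.

weights = {"visits_per_week": 0.6, "support_tickets": -0.9, "tenure_years": 0.3}
customer = {"visits_per_week": 3, "support_tickets": 2, "tenure_years": 4}

contributions = {f: weights[f] * customer[f] for f in weights}
score = sum(contributions.values())

# Report features ranked by how strongly they pulled the score.
for feature, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"{feature}: {c:+.1f}")
print(f"total score: {score:.1f}")
```

An executive can point at this breakdown and say exactly why the model scored a customer the way it did; the 65% who cannot do so for their deployed models are working with systems that offer no equivalent of this readout.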