
New research uncovers the hidden trade-offs of AI-enabled consumer experiences
AI is already making our lives exponentially easier in many ways. From robots that do the work of teams to fitness trackers that tell us how much weight to lose; from apps that monitor our circadian rhythms to dating algorithms that can assess the compatibility of a potential partner, artificial intelligence has the speed, accuracy and personalisation capability to dramatically enhance the way we work, play and live. Little wonder it has become ubiquitous.
The stuff of science fiction only a few decades ago, AI today is reshaping organisational culture everywhere, as well as the consumer experiences that organisations deliver.
We know about its advantages. We’ve also heard about the risks. Plenty has been written about the potential for huge job losses as machines replace humans, the growing threat of sophisticated cyberattacks, and even the possibility of supersystems going rogue.
But how much do we understand about the day-to-day risks and costs that the use of AI imposes on us as consumers? What are the trade-offs we experience when machine intelligence is embedded in the products and services we use every day?
Working with colleagues from Erasmus University, Ohio State University and York University, we analysed a comprehensive body of research that explores AI-consumer interactions through two lenses – the psychological and the sociological – to get a better sense of what we gain and lose from AI. We identified four experiences that emerge from these interactions and, for each, examined the cost-benefit trade-offs that should be on the radars of managers and developers alike. We list these below, together with recommendations on how to mitigate negative outcomes.
AI or machine learning uses algorithms to process large amounts of past data and identify patterns in those data. It learns from these patterns, using them to make predictions about future behaviour that are generally accurate and incredibly quick.
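As a rough, hypothetical sketch of the idea described above – entirely invented data and function names, not anything from the research itself – a minimal "learn patterns from past behaviour, then predict" loop might look like this:

```python
from collections import Counter, defaultdict

def train(histories):
    """Count which item tends to follow each item in past purchase histories."""
    follows = defaultdict(Counter)
    for history in histories:
        for current, nxt in zip(history, history[1:]):
            follows[current][nxt] += 1
    return follows

def predict_next(follows, item):
    """Predict the most frequent follow-up to `item` seen in the training data."""
    if item not in follows:
        return None  # no pattern learned for this item
    return follows[item].most_common(1)[0][0]

# Toy "past data" the algorithm learns patterns from.
past = [
    ["running shoes", "socks", "water bottle"],
    ["running shoes", "socks", "headband"],
    ["yoga mat", "water bottle"],
]
model = train(past)
print(predict_next(model, "running shoes"))  # prints: socks
```

Real systems use far richer models, but the principle is the same: the predictions are only as good – and only as quick to generalise – as the past consumer data they are built on.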
But the data that AI captures belongs to us consumers. The information parsed is ours – our personal choices, our preferences and our decisions. And this is where the tensions and trade-offs emerge.
AI captures our data all the time. And it uses this information about us and our environment to create pleasing experiences – personalised or customised services, information and entertainment. Google’s Photos app, for example, allows Google to capture our memories, but in return offers to take all of the cognitive legwork out of related decision-making: how we manage, store or look for our photos and albums. We get a personalised service without incurring any mental or affective fatigue.
But the research shows that data capture can also drive feelings of exploitation – a sense that we are monitored or controlled by systems we don’t understand, and that we have somehow lost ownership of our personal information.
What can managers do to mitigate this effect?
Anyone with a Netflix or Amazon account will routinely receive recommendations about which films to watch or what products to buy. To produce these ultra-customised recommendations, AI uses individual and contextual data to classify individuals into specific consumer types. The danger here is that classification can leave a customer feeling misunderstood rather than understood. Consumers’ perceptions of being classified can reduce the value of recommendations that are supposed to be highly personal. Classification can also lump users together in ways that are incorrect and/or discriminatory. When algorithms classify consumers on the basis of certain traits or features, the fallout can be catastrophic. Apple discovered this to its cost in 2019 when some consumers noted that the Apple Card’s credit terms appeared biased against women.
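To make the "lumping together" risk concrete, here is a deliberately crude, hypothetical sketch (invented segment rules and data, not any real recommender): two quite different consumers fall into the same coarse segment and receive identical recommendations.

```python
def classify(consumer):
    """Assign a segment label from a few coarse traits (illustrative rules only)."""
    if consumer["age"] < 30 and "action" in consumer["watched"]:
        return "young-action-fan"
    return "general"

RECOMMENDATIONS = {
    "young-action-fan": "Latest blockbuster",
    "general": "Popular documentary",
}

alice = {"age": 25, "watched": ["action", "romance", "drama"]}
bob = {"age": 25, "watched": ["action"]}

# Alice mostly watches romance and drama, yet the coarse rule lumps her
# in with Bob, and both receive the same recommendation.
assert classify(alice) == classify(bob) == "young-action-fan"
print(RECOMMENDATIONS[classify(alice)])  # prints: Latest blockbuster
```

The narrower the set of traits used to segment consumers, the more easily a recommendation "supposed to be highly personal" ends up feeling generic or, worse, discriminatory.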
What can managers do?
‘Individuals like feeling that a positive outcome, however mundane, is a result of their own ability or creativity’
Apps such as Alexa, Siri and Google Assistant use AI to perform simple tasks that are time-consuming, such as booking a hair appointment, writing an email or consulting a map. But delegating even routine tasks can come at a cost to users. Delegation can feel threatening for various reasons.
Individuals like feeling that a positive outcome, however mundane, is a result of their own ability or creativity; thus, delegating a choice or decision can actually leave individuals feeling unsatisfied. In addition, outsourcing a task can lead to actual or perceived loss of control, mastery and skills.
Three unfortunate students holidaying in Australia made the headlines in 2012 when they drove their car into the Pacific Ocean attempting to reach North Stradbroke Island. Photos of the car fully submerged in the ocean were accompanied by interviews in which the students explained that their GPS had “told us we could drive down there.”
What can managers do?
The film Her gave us a fictionalised glimpse into the curious area of AI-human social interaction. Apps such as Siri and Alexa integrate certain anthropomorphic or humanised features that lend a social dimension to how we use them. This social dynamic can enhance our feelings of engagement with the product, service and the organisation behind them – or not. AI social interaction again treads a fine line between users feeling engaged, or feeling unsettled and even alienated. Take this discombobulating exchange, reported by BusinessNewsDaily in 2020:
Bot: “How would you describe the term ‘bot’ to your grandma?”
User: “My grandma is dead”.
Bot: “Alright! Thanks for your feedback.” (Thumbs-up emoji.)
What can managers do?
AI-enabled products and services promise to make consumers happier, healthier and more efficient. They are often heralded as forces for good – tools to tackle not only the common but even the biggest problems facing humanity.
The potential of AI is undeniable. But so, too, are the dangers of oversimplification – of AI’s tendency to efface the intersectional complexities of human psychology and sociology and to ignore issues such as gender, race, class, orientation and more.
The challenge for managers and developers is to design and deploy AI critically and with care; to remain aware and informed, recognising that AI can be impaired by our own biases and flaws. AI is only as good as the humans who create it.