Artificial intelligence for marketers

New research uncovers the hidden trade-offs of AI-enabled consumer experiences



  • We gain some things and lose others from our interactions with AI
  • Managers and developers should track the cost-benefit trade-offs that arise from these interactions
  • Overhumanising AI systems can reinforce harmful stereotypes and alienate the consumer
  • AI systems must be carefully designed and deployed to take account of human biases

AI is already making our lives exponentially easier in many ways. From robots that do the work of teams to fitness trackers that tell us how much weight to lose; from apps that monitor our circadian rhythms to dating algorithms that can assess the compatibility of a potential partner, artificial intelligence has the speed, accuracy and personalisation capability to dramatically enhance the way we work, play and live. Little wonder it has become ubiquitous.

The stuff of science fiction only a few decades ago, AI today is reshaping organisational culture everywhere, as well as the consumer experiences that organisations deliver.

We know about its advantages. We’ve also heard about the risks. Plenty has been written about the potential for huge job losses as machines replace humans, the growing threat of sophisticated cyberattacks and even the possibility of supersystems going rogue.

But how much do we understand about the day-to-day risks and costs that the use of AI imposes on us as consumers? What are the trade-offs we experience when machine intelligence is embedded in the products and services we use every day?

Working with colleagues from Erasmus University, Ohio State University and York University, we have analysed a comprehensive body of relevant research that explores AI-consumer interactions through two lenses – the psychological and the sociological – to get a better sense of what we gain and lose from AI. We identified four experiences that emerge from these interactions, and for each experience we examined the cost-benefit trade-offs that should be on the radars of managers and developers alike. We list these below, together with recommendations on how to mitigate negative outcomes.


1 Data capture

AI or machine learning uses algorithms to process large amounts of past data and identify patterns in those data. It learns from these patterns, using them to make predictions about future behaviour that are generally accurate and incredibly quick.
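The pattern-learning idea can be illustrated with a deliberately tiny sketch. All of the data below are invented for illustration; the "model" simply counts which items past shoppers bought alongside a given product and predicts the most frequent companion purchase – a toy stand-in for the far larger statistical models real systems use.

```python
from collections import Counter

# Invented purchase histories for illustration only.
past_baskets = [
    ["coffee", "milk"],
    ["coffee", "milk"],
    ["coffee", "sugar"],
    ["tea", "honey"],
]

def predict_next(item, history):
    """Predict the item most often bought alongside `item` in past baskets."""
    follow_ons = Counter(
        other
        for basket in history if item in basket
        for other in basket if other != item
    )
    most_common = follow_ons.most_common(1)
    return most_common[0][0] if most_common else None

print(predict_next("coffee", past_baskets))  # milk (bought in 2 of 3 coffee baskets)
```

Even this trivial predictor shows the core mechanic: accuracy comes entirely from past behavioural data – which is exactly why the question of who owns that data matters.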

But the data that AI captures belongs to us consumers. The information parsed is ours – our personal choices, our preferences and our decisions. And this is where the tensions and trade-offs emerge.

AI captures our data all the time. And it uses this information about us and our environment to create pleasing experiences – personalised or customised services, information and entertainment. Google’s Photos app, for example, allows Google to capture our memories, but in return offers to take all of the cognitive legwork out of related decision-making: how we manage, store or look for our photos and albums. We get a personalised service without incurring any mental or affective fatigue.

But the research shows that data capture can also drive feelings of exploitation – a sense that we are monitored or controlled by systems we don’t understand, and that we have somehow lost ownership of our personal information.

What can managers do to mitigate this effect?

  • Be aware. It’s key to strive for greater organisational sensitivity around the issues of privacy and the asymmetry in control over personal data. Responsible organisations would also do well to listen to consumers, at scale and with empathy, and question their own firmly held beliefs.
  • Be proactive. Savvy firms are already working to improve AI data-capture experiences, giving consumers the choice to opt into specific data-collection processes and to ask for greater clarity on how the data are used.
  • Be transparent. Organisations can limit consumer exploitation by playing an active role in educating their customers about the costs and benefits entailed in AI data-capture experiences.


2 Classification

Anyone with a Netflix or Amazon account will routinely receive recommendations about which films to watch or what products to buy. To produce these ultra-customised recommendations, AI uses individual and contextual data to classify individuals into specific consumer types.

The danger here is that, rather than feeling understood, a customer can easily end up feeling misunderstood. The very perception of being classified can reduce the value of recommendations that are supposed to be highly personal. Classification can also lump users together in ways that are incorrect and/or discriminatory. When algorithms classify consumers on the basis of certain traits or features, the fallout can be catastrophic. Apple discovered this to its cost in 2019 when some consumers noted that the Apple Card’s credit terms were biased against women.

What can managers do?

  • Be rigorous. Organisations should not assume that their algorithms and processes are bias-free and they should not wait to be told. No matter what’s going on at the policy or legislative level, organisations need to be proactive by collaborating with tech experts, computer scientists, sociologists and psychologists to audit algorithms and root biases out.
  • Be different. Organisations can reboot the classification experience by bursting the bubble and avoiding recommendations that are exclusively based on past choices, which may not reflect present or future predilections and may not provide consumers with optimal variety.
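The kind of audit the "Be rigorous" point calls for can start very simply: compare an automated decision's outcomes across groups. The sketch below uses invented decision records and a common rough screen – flagging the system for review if one group's approval rate falls below 80% of another's – as one illustrative threshold, not a definitive fairness test.

```python
# Invented credit decisions for illustration: (group, approved?)
decisions = [
    ("woman", True), ("woman", False), ("woman", False), ("woman", False),
    ("man", True), ("man", True), ("man", True), ("man", False),
]

def approval_rate(group, records):
    """Share of records in `group` that were approved."""
    outcomes = [approved for g, approved in records if g == group]
    return sum(outcomes) / len(outcomes)

# Rough "four-fifths" screen: a ratio below 0.8 warrants a closer audit.
ratio = approval_rate("woman", decisions) / approval_rate("man", decisions)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.25 / 0.75 -> 0.33
```

A real audit would go much further – examining proxy variables, intersectional groups and the training data itself – but even this simple check would have surfaced the kind of disparity that made headlines.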

3 Delegation

Apps such as Alexa, Siri and Google Assistant use AI to perform simple tasks that are time-consuming, such as booking a hair appointment, writing an email or consulting a map. But delegating even routine tasks can come at a cost to users. Delegation can feel threatening for various reasons.

Individuals like feeling that a positive outcome, however mundane, is a result of their own ability or creativity; thus, delegating a choice or decision can actually leave individuals feeling unsatisfied. In addition, outsourcing a task can lead to actual or perceived loss of control, mastery and skills.

Three unfortunate students holidaying in Australia made the headlines in 2012 when they drove their car into the Pacific Ocean attempting to reach North Stradbroke Island. Photos of the car fully submerged in the ocean were accompanied by interviews in which the students explained that their GPS had “told us we could drive down there.”

What can managers do?

  • Keep learning. Certain abilities are intrinsically human and depend on making nuanced judgments in unstructured environments. Savvy firms are increasingly collaborating with museums, theatres and university humanities departments to better understand how AI can preserve, rather than subvert, human values such as creativity, collaboration and community.
  • Keep innovating. A classic marketing research finding revealed that consumers preferred using a pre-prepared cake mix when they were required to crack fresh eggs as part of the process. Why? Because human agency, however seemingly insignificant, can reduce the threat of losing control and mastery and make delegation a more positive experience. In the same vein, enlightened firms are exploring how self-driving cars can be designed to avoid drivers feeling like they have no control over their driving experience.

4 Social

The film Her gave us a fictionalised glimpse into the curious area of AI-human social interaction. Apps such as Siri and Alexa integrate certain anthropomorphic or humanised features that lend a social dimension to how we use them. This social dynamic can enhance our feelings of engagement with the product, service and the organisation behind them – or not. AI social interaction again treads a fine line between users feeling engaged, or feeling unsettled and even alienated. Take this discombobulating exchange, reported by BusinessNewsDaily in 2020:

Bot: “How would you describe the term ‘bot’ to your grandma?”

User: “My grandma is dead.”

Bot: “Alright! Thanks for your feedback.” (Thumbs-up emoji.)

What can managers do?

  • Be informed. To avoid bot “fails”, firms are increasingly informing themselves about the dynamics of alienation. Not only can they collect information directly from consumers who have experienced alienation to gain valuable insights into how and why it occurs, they can also collaborate with experts such as psychologists, sociologists and gerontologists to discover more about the causes and consequences of alienation.
  • Be careful. Anthropomorphism is a double-edged sword. Many designers and marketing managers take it for granted that humanising AI fosters better relationships with consumers. But this is not necessarily the case. Human beings are characterised by a heterogeneity so nuanced and complex that the margin for error is immense. There is also massive scope to draw on and entrench harmful stereotypes – such as the use of passive or “subservient” female voices in many AI apps – a possibility that should be on organisations’ radar screens. Progressive firms are increasingly investigating how to make AI gender-neutral and, in some cases, less rather than more humanoid.

AI-enabled products and services promise to make consumers happier, healthier and more efficient. They are often heralded as forces for good – tools to tackle not only the common but even the biggest problems facing humanity.

The potential of AI is undeniable. But so, too, are the dangers of oversimplification: the tendency to efface the intersectional complexities of human psychology and sociology, and to ignore issues such as gender, race, class, orientation and more.

The challenge to managers and developers is to design and deploy AI critically and with care; to be aware, informed and careful that AI can be impaired by our own biases and flaws. AI is only as good as the humans who create it.
