AI has the potential to make our lives exponentially easier. From robots that do the work of teams to fitness trackers that tell us how much weight to lose; from apps that monitor our circadian rhythms to dating algorithms that can assess the compatibility of a potential partner, artificial intelligence has the precision, speed, accuracy and personalisation to dramatically enhance the way we work, live and play. Little wonder it has become so ubiquitous. Effectively the stuff of science fiction only a few decades ago, AI today is reshaping organisational culture – as well as the consumer experiences organisations deliver.
We know about its advantages. We’ve also heard about the risks. Plenty has been written about the potential for massive job losses as machines replace people, the growing threat of sophisticated cyber-attacks and even the possibility of super-systems going rogue. But how much do we understand about the more day-to-day risks or costs that the use of AI imposes on us as consumers? What are the trade-offs we experience when machine intelligence is embedded in the products and services we use every day?
Our team of researchers from Erasmus University in the Netherlands, Ohio State University, Canada’s York University, and London Business School analysed a comprehensive body of research exploring AI–consumer interactions through two lenses – psychological and sociological – to get a better sense of what we gain and lose from AI. We identified four experiences that emerge from these interactions, and for each we examined the cost–benefit trade-offs that should be on the radar of managers and developers alike. We list these below, together with suggestions and recommendations on how to mitigate negative outcomes.
1. Data capture
Artificial intelligence uses algorithms to process large amounts of past data and identify patterns or features in that data. It learns from these patterns, using them to make predictions about future behaviour that are generally accurate and incredibly quick.
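The learning loop described above – ingest past data, extract a pattern, predict future behaviour – can be illustrated with a deliberately minimal sketch. The data, function names and frequency-based "model" here are all hypothetical simplifications, not a description of any real system; production AI uses far richer statistical models, but the basic shape is the same.

```python
from collections import Counter

def learn_preferences(past_choices):
    """Identify a simple pattern: how often each option was chosen in the past."""
    counts = Counter(past_choices)
    total = len(past_choices)
    # Turn raw counts into estimated probabilities for each choice.
    return {choice: n / total for choice, n in counts.items()}

def predict_next(preferences):
    """Predict the most likely next choice from the learned pattern."""
    return max(preferences, key=preferences.get)

# A consumer's (hypothetical) past viewing history.
history = ["drama", "comedy", "drama", "drama", "documentary"]
model = learn_preferences(history)
print(predict_next(model))  # → drama, the most frequent past choice
```

Even at this toy scale, the trade-off discussed in this article is visible: the prediction is only possible because the consumer's history has been captured and retained.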
But the data AI captures belongs to us, to consumers. The information parsed is ours – our personal choices, our preferences and our decisions. This is where the tensions and trade-offs emerge.
AI captures our data all the time. And it uses this information about us and our environment to create pleasing experiences – personalised or customised services, information or entertainment. Google’s Photos app, for example, allows Google to capture our memories, but in return offers to take the cognitive legwork out of related decision-making: how we manage, store or search for our photos and albums. We get a personalised service without incurring any mental or affective fatigue. But the research shows that data capture can also drive feelings of exploitation – that we are somehow monitored or controlled by systems we don’t understand, and that we have in some way lost ownership of our personal information. This stems from both the intrusiveness of AI data aggregation and the lack of transparency and accountability that surrounds it.
So what can managers do to mitigate this effect?
- Be aware. It’s key to strive for greater organisational sensitivity around the issues of privacy and the asymmetry in control over personal data. Responsible organisations would also do well to listen to consumers, at scale and with empathy, and question their own deeply held beliefs.
- Be transparent. Savvy organisations are already working to improve AI data-capture experiences, giving consumers the option to opt in to specific data-collection processes and to ask for greater clarity on how these data are used. Organisations can limit consumer exploitation by playing an active role in educating their customers about the costs and benefits entailed in AI data-capture experiences.