Artificial Intelligence
Professor Keyvan Vakili shares what everyone needs to know about AI, and how to guard against its most convincing mistakes

Generative AI tools are not designed to guarantee accuracy, and they conceal errors by presenting them persuasively.
Those who rely most heavily on AI are often those most vulnerable to its flaws.
AI adoption may be high at an individual level, but it’s patchy across organisations.
Artificial intelligence has undeniably hit the mainstream, sparking conversations across the media, around boardroom tables, in political circles – even down your local pub. But how much do most of us really understand about AI? And what are some of the biggest risks posed by these technologies?
According to LBS’ Keyvan Vakili, Associate Professor of Strategy and Entrepreneurship, most people misunderstand the very nature of AI. He begins by breaking down the three main categories of predictive AI; a toy code sketch of all three follows the examples below:
Supervised learning algorithms: These are trained using labelled examples. Most traditional AI tools (such as spam filters) work this way.
“How do we teach a kid what an apple is?” Keyvan elaborates. “We tell them: that’s an apple, that’s not an apple, that’s an apple, that’s not an apple. And at some point, they know what an apple is – they can grab it and eat it!”
Unsupervised learning algorithms: These work without labels – the system independently identifies patterns or clusters in data.
“Give a bunch of balls with different colours to a child,” Keyvan explains, “and they will put the red ones in one corner and the blue ones in another corner. What they’re doing is clustering based on some salient characteristic. You didn’t need to say, ‘these are balls, this is red and this is blue.’ The child can identify which ones look like each other, as well as any anomalies.”
Reinforcement learning: This is where the model learns through trial and error, receiving feedback (rewards or penalties) as it tries to solve a problem. Think of a computer playing a video game over and over, learning which actions increase its score. “Over time,” Keyvan says, “the machine can play the game without any instruction. We use this kind of AI in various contexts including autonomous cars.”
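For readers who want to see the three categories side by side, here is a toy sketch in Python. Everything in it – the fruit measurements, the ball “colours”, the number-guessing game – is invented purely for illustration; real systems use far larger datasets and far richer models, but the learning logic has the same shape.

```python
# A toy sketch of the three categories above. All data is invented for illustration.
import random

# 1. Supervised learning: learn from labelled examples ("apple" / "not apple").
labelled = [((150, 0.90), "apple"), ((120, 0.40), "not apple"),
            ((160, 0.85), "apple"), ((30, 0.30), "not apple")]  # (weight_g, roundness), label

def classify(fruit):
    # Nearest neighbour: copy the label of the most similar labelled example.
    nearest = min(labelled,
                  key=lambda ex: (ex[0][0] - fruit[0]) ** 2 + (ex[0][1] - fruit[1]) ** 2)
    return nearest[1]

print(classify((155, 0.88)))           # -> "apple"

# 2. Unsupervised learning: no labels, the system finds the groups itself.
hues = [0.02, 0.05, 0.03, 0.61, 0.65, 0.60]   # "ball colours", unlabelled
centres = [hues[0], hues[-1]]
for _ in range(10):                            # a tiny one-dimensional k-means
    groups = [[], []]
    for h in hues:
        groups[0 if abs(h - centres[0]) < abs(h - centres[1]) else 1].append(h)
    centres = [sum(g) / len(g) for g in groups]
print(groups)                          # -> two clusters, discovered without any labels

# 3. Reinforcement learning: trial and error, guided only by rewards.
target, scores = 7, [0.0] * 10         # the hidden "rule" of the game; 10 possible actions
for _ in range(1000):
    action = random.randrange(10) if random.random() < 0.1 else scores.index(max(scores))
    reward = 1 if action == target else 0
    scores[action] += 0.1 * (reward - scores[action])   # learn from feedback, no instructions
print(scores.index(max(scores)))       # -> almost always 7, learned purely by playing
```

None of these toy versions would survive contact with real data, but they show the distinction Keyvan is drawing: learning from labels, finding structure without labels, and learning from feedback.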
Much of the recent hype around AI centres on large language models (LLMs) like ChatGPT, Gemini, Claude and DeepSeek. In fact, when many people nowadays refer generically to “AI”, they’re actually speaking specifically about LLMs.
Technically, LLMs are trained with a form of supervised learning – often described as self-supervised, because the “label” is simply the next word in vast quantities of internet text. But in practice, they’re treated as a separate category known as generative AI, because their primary purpose is to generate content, not to classify or predict outcomes.
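As a rough illustration of that “predict the next word” objective, here is a toy bigram model in Python. The corpus is made up for the example, and real LLMs use neural networks trained on vastly more text, but the objective has the same flavour: return a plausible continuation, with no check on whether it is true.

```python
# A toy "predict the next word" model: a bigram counter built from a tiny, made-up corpus.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word tends to follow which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # Return the most frequent continuation. Nothing here checks whether it is *true* -
    # only whether it is the most plausible next word given the training text.
    return following[word].most_common(1)[0][0]

print(predict_next("the"))   # -> "cat"
```

Scaled up by many orders of magnitude, that same “most plausible continuation” objective is what makes an LLM fluent – and also what leaves room for the hallucinations discussed below.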
“I expect AI to be capable of superhuman persuasion well before it is superhuman at general intelligence, which may lead to some very strange outcomes”
Sam Altman, CEO of OpenAI, on Twitter (now X), 25 October 2023
Keyvan has been working on some of the risks associated with LLMs. “A lot of people believe that these tools are becoming more reliable with every new generation that comes out,” he explains, “but that’s not always the case. These models are not designed primarily to be accurate. They are generating machines – there’s no engine for accuracy in them.”
“These models are generating machines – there’s no engine for accuracy in them”
LLMs are trained using internet data, which means they’re only as accurate as the content they have access to. “And what’s the main objective of people who produce content on the internet?” Keyvan continues. “It’s not to be accurate, it’s to persuade – to convince people to do something, click on something, consume something, watch, purchase, subscribe…”
The result, Keyvan explains, is that these models have become extremely persuasive, but also prone to making mistakes. And when they do make mistakes, they wrap them up in jargon – aping the language of an expert or professional – and they’re very convincing.
Keyvan tested this in a recent working paper with his colleague, Professor Bryan Stroube, in which they asked participants a simple maths question. Some participants, randomly selected, were then shown arguments generated by ChatGPT in favour of the correct answer; others were shown ChatGPT-generated arguments in favour of the wrong answer. Among those shown the arguments for the wrong answer, the researchers “managed to convince more than 50 percent of the people who answered correctly to change their answer,” Keyvan says.
Keyvan has also conducted research which highlights a critical paradox in how generative AI tools are being adopted. The individuals most likely to use tools like LLMs are often those with the least experience, or lower confidence in their own abilities. While such users might benefit the most from an AI productivity boost, they’re also the most vulnerable to its flaws – and less likely to pick up on the errors it creates.
Compounding the risk, many of these AI users behave like “secret cyborgs”: they rely heavily on AI, while concealing their use of it from colleagues and managers. This makes AI’s influence harder to monitor and manage within organisations, and increases the danger of inaccuracies going unchecked.
“Many people using AI are ‘secret cyborgs’ – they rely heavily on it, but don’t tell anyone”
To summarise, Keyvan distils what we need to know about AI into five key points, based on his research:
1. It’s a misconception that generative AI can do everything.
LLMs are designed for content creation. They sound like us because they’re designed to – but they are not designed to be accurate. They can, however, be very persuasive – even when they’re wrong.
2. We need to balance AI’s benefits with its risks.
LLMs can boost productivity and generate competitive advantage, but they can be inaccurate. “Hallucinations” are inherent to how LLMs work – generating plausible sequences of words, not verified facts.
“Hallucinations are inherent to how LLMs work – generating plausible sequences of words, not verified facts”
3. Those who use AI may do so secretively.
The people who gain the most productivity benefit from AI – often those with lower skills or less confidence – can also be the least likely to disclose they’re using it. They may therefore unwittingly be introducing errors into their work. The solution is to integrate AI fully into organisations and design incentives to encourage people to be open about when they’re using it.
4. Adoption will be extremely variable.
While AI adoption may be high at an individual level, it’s often low at an organisational level. This is something the big AI vendors often don’t appreciate, because they use these tools extensively themselves and focus mostly on the technology. But the reasons for technologies not getting adopted are often organisational and managerial – people can be anxious, they may not have the right incentives, or the culture or infrastructure may not exist within their organisation.
5. Human judgment is more important than ever.
In an AI-assisted world, human judgment becomes a critical skill. People need to know how to interrogate AI outputs. Organisations must rethink training to focus not just on technical skills, but on cultivating critical thinking and decision-making abilities to calibrate our use of these tools.
Get up to speed on AI with one of our executive education programmes.
Discover more