From paperclips to AI risk
Dr Linda Yueh on ethics, cyber resilience and why old technology is suddenly useful again

Measured words, serious risks
A light touch can still carry serious weight, and London Business School's Dr Linda Yueh demonstrated exactly that in a wide-ranging conversation on SiriusXM’s Business Briefing, where a relaxed exchange with host Janet Alvarez became an incisive tour of the real risks and responsibilities shaping artificial intelligence today.
Welcoming Yueh back to the programme with seasonal good cheer, Alvarez set a conversational tone, but what followed was not small talk. Yueh used the discussion to untangle some of the most pressing questions around AI governance, cybercrime and the growing gap between technological capability and institutional preparedness.
One of the central themes of the discussion was the absence of global standards for AI. While the technology itself moves seamlessly across borders, governance remains fragmented, national and often reactive. Yueh pointed out that the UK has taken an early lead in this area, establishing AI safety initiatives and convening international conversations that bring governments and private firms into the same room.
Yet safety, she argued, is only part of the story. Much public debate focuses on what AI produces, such as recommendations, predictions and automated decisions, while far less attention is paid to what goes into these systems in the first place. The ethics of how generative AI models are coded, trained and constrained still lags well behind the pace of their deployment.
From paperclips to systemic failure
That distinction matters. Popular culture is full of familiar cautionary tales, from superintelligent machines to runaway algorithms. Yueh referenced a recently published book bluntly titled If Anyone Builds It, Everyone Dies, which revives the classic thought experiment of an AI instructed to make paperclips, optimising the task until the world itself becomes raw material, quite literally a world full of paperclips. The example drew laughter on air, but the point was precise: poorly specified objectives can produce catastrophic outcomes, even without malicious intent.
The conversation then shifted from theory to practice. AI is not just a hypothetical future risk but a present-day tool in cybercrime. Yueh noted how convincingly deepfakes can now replicate a voice from only a few seconds of audio, making impersonation scams far harder to detect. The problem, she suggested, is not just technological sophistication but human trust.
Recent cyberattacks on major UK firms illustrated the scale of the challenge. In some cases, companies were forced to shut down entire IT systems for months. One supermarket chain reverted to paper invoices across its stores simply to calculate tax liabilities, a striking reminder that digital resilience still depends on analogue fallbacks.
Old tools for new threats
From these incidents, Yueh drew two practical lessons. First, businesses must build redundancy into their systems, planning not just for breaches but for how to keep operating once they occur. Second, verification protocols may need to become almost quaintly old-fashioned. If a CEO appears on a call demanding an urgent transfer, it may soon be prudent to ask for a company “safe word”, not because it is foolproof, but because it reintroduces friction into processes that criminals increasingly exploit for their speed.
As Yueh put it with dry understatement, what looks like old technology may turn out to be the most reliable safeguard. In an age of cloud storage and synthetic voices, a handwritten cheque or a spoken password might once again have a role to play.
The full discussion is available here: bit.ly/4p64G5Z

