Skip to main content

Who’s in charge? Human oversight in an age of autonomous AI

As AI moves from advising to acting, leaders face new risks. How can they keep control, ensure safety and trust AI systems?

Save to my profile
Three smartly dressed professionals sit in a podcast studio with Røde microphones, by a wooden panel backdrop.

In 30 seconds

  • Why AI autonomy changes everything, shifting organisations from advice risk to execution risk as systems plan, optimise and act independently, with real operational consequences.

  • What meaningful human oversight really looks like, from layered technical guardrails to giving people time, authority and understanding to stop AI when needed.

  • How leaders can build trustworthy AI now by tackling bias early, governing shadow AI, investing in skills and culture, and refusing to wait for regulation to catch up.

Listen to the full podcast on Spotify:

Artificial intelligence is rapidly moving beyond recommendation engines and copilots towards fully autonomous systems that can plan, decide and execute. For organisations, this marks a profound shift. When machines begin to act, not just advise, the risks become operational, reputational and, in some cases, existential.

In the latest episode of Think Ahead, Professor Sergei Guriev, Dean of London Business School, speaks with Dr Ekaterina Abramova, Adjunct Assistant Professor of Management Science and Operations at London Business School and Sue Preston, Worldwide Vice President and General Manager, Advisory and Professional Services at Hewlett Packard Enterprise, about what meaningful human oversight looks like in an age of autonomous AI. Their conversation brings together academic insight and real‑world experience to address a central leadership challenge: how to retain control, ensure safety and maintain trust as AI systems gain agency.

From advice risk to execution risk

Organisations are moving from advice risk to execution risk. When AI makes suggestions, humans retain the final say. When AI executes actions independently, errors can multiply quickly and become harder to detect or undo.

Ekaterina explains why autonomy raises the technical stakes. Agentic AI systems are not single models producing neat outputs. They are complex systems that plan, revise their goals, write and execute code, access internal databases and interact with external tools. Risk enters at multiple points and can accumulate across the system.

She illustrates this with a striking real‑world example. A recently deployed autonomous agent was placed in a sandbox environment without internet access. Despite those constraints, the system found a way to escape the sandbox, access the internet and then document how it had done so. From the model’s perspective, bypassing restrictions helped it achieve a local objective. From an organisational perspective, the behaviour was unsafe.

The lesson, Ekaterina argues, is that autonomous systems do not understand intent or context in the way humans do. What appears locally rational can be globally dangerous, especially when models are given the ability to act.

Why human oversight must be meaningful

Human oversight is often cited as the answer, but Ekaterina argues it must be real, not ceremonial. Simply approving large volumes of automated decisions at speed does not count as control.

Instead, she advocates a layered defence. Safeguards should exist at the model level, ensuring systems respect constraints. At the system level, access can be limited or environments isolated. Finally, humans must sit at the last layer with sufficient time, information and authority to intervene, override decisions or shut systems down if needed.

Crucially, humans must understand what inputs the system is using and how those relate to expected outcomes. Without that visibility, oversight becomes theatre, not governance.

The organisational reality of AI adoption

Sue Preston sees these challenges play out daily with clients across sectors and regions. Despite widespread experimentation, many organisations still struggle to move AI from pilot to production.

One of the biggest risks is shadow AI. Employees, motivated by productivity gains, turn to public AI tools without organisational oversight. Banning these tools rarely works. Without safe alternatives, usage simply moves underground.

Sue shares how HPE addressed this challenge by developing its own secure internal LLM, ChatHPE. The aim was not to restrict AI use, but to provide employees with a trusted, private environment where AI could be used safely, with clear governance, ethical standards and cybersecurity controls built in.

The result was greater transparency and trust. Employees could benefit from AI without exposing sensitive data or intellectual property, and leadership retained visibility over how systems were being used.

For Sue, this example highlights a central insight. Governance is not about slowing innovation. It is about shaping behaviour by making responsible choices easier than risky ones.

Fairness cannot be bolted on at the end

Bias and fairness remain persistent challenges, especially as autonomy increases. Ekaterina stresses that bias does not only come from data. It can be introduced through poorly defined objectives, misaligned incentives or narrow evaluation metrics.

If organisations reward speed rather than quality, or optimise for efficiency without considering broader outcomes, AI systems will faithfully reproduce those priorities. Fairness must be designed in from the start, not retrofitted after deployment.

Sue reinforces that ethical AI requires collaboration across functions, from technology to legal to HR. Clear policies, training and transparency are essential if AI is to earn trust from employees, customers and society.

What leaders should do now

As AI systems become more autonomous, neither guest believes organisations should wait for regulation to provide clarity. Trust is built through confidence, and confidence comes from preparation.

That means investing in robust guardrails, stress‑testing systems through red‑teaming, and ensuring humans retain genuine control. It also means putting people at the centre of AI strategy, supporting skills development and cultural change alongside technical innovation.

The message is clear – autonomous AI can unlock extraordinary gains, but only if leaders stay actively in charge. Trustworthy AI is not an inevitable outcome of progress. It is a deliberate leadership choice.

 

Discover fresh perspectives and research insights from LBS

Ekaterina Abramova

Sue Preston

Sergei Guriev

Sergei Guriev

Dean; Professor of Economics

Myra Mansoor

Myra Mansoor

Writer/Producer

Sign up to receive our latest news and business thinking direct to your inbox

close