Are we ready for a world shaped by AI?

Our Think Ahead panel considered what we know today about AI and how it will affect businesses and the choices leaders make.

In 30 seconds

  • Humans above the loop, not in the loop: to achieve productivity gains leaders must retain agency and control

  • More leadership responsibility, not less: every decision we make will now lead to a greater impact if it is made with the use of AI

  • Tasks, not jobs: as AI automates some tasks, other tasks that still need to be done become more valuable

Developments in artificial intelligence (AI) move so fast that it can be hard to keep up with them and focus on the most important issues at the same time. For our latest Think Ahead event, we gathered three expert panellists to help shed some light on this rapidly changing technology: Nicos Savva, Professor of Management Science and Operations and also the Academic Director of London Business School’s Data Science and AI initiative; Lilia Christofi, EMEA Financial Services Data, AI and Tech Partner at PwC Consulting; and Iryna Tsyganok EMBALS2023, CEO and Co-Founder of AI consultancy, Irysan. Their discussion was led by the journalist and author Stefan Stern.

Nicos set out clearly how he viewed the central challenge at this time. “The technology is evolving very fast in ways that I find exciting, but also a bit scary,” he said. “And what I find a bit scary is the gap I’m seeing opening between the technology’s capability on the one hand, and our ability to adopt it effectively in complex human-based systems. I see a problem in the short run, which revolves around how to best prepare and evolve our organisations and our skillsets to best use the technology. But I’m also worried about the long-term implication. How does society look in a world where we have an abundance of AI?”

Bringing her systems thinking disciplines to the discussion, Iryna Tsyganok observed: “AI really amplifies good systems, it amplifies bad systems, and it forces organisations to look at what they currently have. Is it good enough? Is it ready for AI?”

Lilia Christofi’s work with a broad range of financial services clients in different markets in EMEA led her to ask a practical question about the deployment of AI: “How do we make sure that we create humanistic experiences for consumers that are actually helpful and ethical, and help them grow their wealth? It’s quite a big challenge.”

The first of two audience polls revealed that 38% of those watching were “experimenting with pilots and proofs of concept”. Only 19% in this anonymous online poll said they were “unsure where to start”. Perhaps the expertise of the viewing audience explains the low score for this answer.

In terms of practical application of AI, Lilia felt there is a move away from proofs of concept to now “really getting into the core processes and starting to build out scalable solutions”. Progress is not easily achieved, but “there’s definitely a lot more ambition than there was last year, and investment behind it from a strategic top-down perspective to drive AI transformation.”

Nicos pointed to the challenge of regulating this emerging technology. He said: “The technology is evolving so fast that, if we are honest, we don’t really understand how to tackle these [regulatory] problems. So we need a lot more research on this. We need an open mindset. And the playbooks are being rewritten as fast as the technology is being created.”

There was also a call for balance and a sense of proportion in the discussion. Yes, AI “hallucinations” are a problem, but humans get things wrong too, Lilia observed. But Iryna warned that “the impact of AI is much greater than an impact of a single human.”

There has been much discussion of the need to have a “human in the loop” when it comes to decision making, but Lilia argued powerfully that this was an inadequate approach. Consider a contact centre, she said: “You are having thousands of interactions being pumped through. You can’t have a human in the loop for all of those decisions. It’s impossible to achieve that productivity gain you’re aiming for and have them in the loop. So you have to think about human above the loop, which means you are managing by escalation and by exception. It’s being able to define those boundaries in a good way, to be able to control that, which is really important.”

How should we think about AI? Nicos said we should not consider it as a system like a calculator that always provides the right answer if given the right input, but as a junior colleague who is eager, intelligent, but often lacks context and doesn’t necessarily have the judgment that a senior leader has through years of experience. “Therefore it needs to be guided carefully.”

Iryna flagged another issue worth thinking about. “One of the problems with AI, for me, is that it is very convincing. And it is almost correct. That ‘almost-correct’ is in many cases good enough, but in some cases it’s not good enough.”

Mistakes will be made by organisations if they expect to see big improvements simply by giving staff access to AI, Nicos said. “AI adoption is not giving a Copilot licence to the legal department, the customer service department, the sales department, and hoping that we’re going to see a big productivity change,” he said. “It comes out of thinking end-to-end how the value is created and trying to redesign the way we create this value now that we have this technology available to us.”

Lilia agreed: “We have this new capability that didn’t exist before, and let’s not plaster it into our existing processes to make them a little bit better, a little bit faster. Let’s try to think what else is possible now that this technology exists.”

Nicos added: “Assuming we know how we want the AI to behave, there’s a second problem, and that’s the alignment problem, which is: how do we make sure that it does what we want it to do? I don’t think we are investing in the safeguards and in understanding how to regulate and control and align the technology as much as we should. That should worry us all.”

A second audience poll found that 43% of the audience had difficulty translating AI ambition into real operational change. But for a significant 33%, redesigning work, skills and accountability was just as problematic.

So what will happen to human beings in this technological age? Iryna felt that humans will still be needed. “I don’t see the human factor disappearing at all,” she said. “I think if anything it will be much more pronounced, and much more responsibility will be shifted onto humans to make the right decisions too. Because every decision we make will now lead to a greater impact if it is made with the use of AI.”

Lastly, Nicos urged us to distinguish between the specific tasks AI can carry out and the jobs people will still do. “As AI automates some tasks, the rest of the tasks that still need to be done become more valuable,” he said. “That could well increase the number of professionals we need. So it’s not a foregone conclusion that AI makes jobs obsolete, but it’s a foregone conclusion that it changes how they work.”

The discussion encompassed the central challenge: the gap between how fast AI capability is advancing and how slowly our institutions are adapting. Transformative AI is on the horizon, with some arguing that artificial general intelligence (AGI) already exists, but AI governance is not there yet. How do organisations keep up when many are still struggling to make basic AI tools deliver meaningful returns? The honest first step is to admit we don’t have all the answers, and to build institutional capacity fast.

Discover fresh perspectives and research insights from LBS

Nicos Savva

Professor of Management Science and Operations; Academic Director, Data Science and AI Initiative

Lilia Christofi

Iryna Tsyganok

Stefan Stern
