
Why AI strategy is really a leadership design problem

Most organisations start their AI journey with the wrong questions. Evidence from a cement manufacturer and a financial analytics firm shows that AI success depends less on technology choices and more on deliberate leadership design.


In 30 seconds

  • AI advantage is a leadership design problem, not a technology choice. The most important decisions are how AI is embedded in real workflows, not which models or vendors are selected.

  • Successful organisations balance speed and control in AI experimentation. The right approach depends on the cost of error versus the cost of delay in the underlying business.

  • AI only creates value when users trust it. Adoption is driven by system design, guardrails, and human oversight—not by mandates or technical performance alone.

Artificial intelligence is often framed as a technology choice. Which model should we adopt? Which vendor should we work with? How quickly can we deploy?

Two recent London Business School case studies suggest that these are the wrong starting questions. Sustainable AI advantage rarely comes from acquiring superior models. It comes from designing how AI is used in real decisions—and under what conditions it can be trusted.

One case examines UltraTech Cement, a large-scale industrial company embedding AI into plant operations across a vast manufacturing network. The other explores Crisil, a global analytics firm whose reputation depends on the reliability of its client-facing insights.

The industries could not be more different. One optimises kilns and logistics networks; the other supports financial decision making. Their approaches differ on the surface, yet both converge on a common insight: AI success is guided by a shared set of leadership design choices.

Across these organisations, three leadership design choices stand out for executives and board members.

Design for AI experimentation: the cost of error versus delay

Both organisations started from the same constraint: limited internal AI capability. Yet they chose very different paths.

UltraTech leaned into external partnerships and rapid experimentation. By working with startups and testing solutions in real plant environments, it accelerated learning while keeping failures small and contained. Internal teams focused on integration, adoption, and governance, while external partners provided specialised innovation capacity. This approach aligned with the organisation’s operational discipline, enabling experimentation within a culture of prudent decision-making.

Crisil took the opposite route. Given the reputational and regulatory consequences of analytical errors, it prioritised internal development and careful iteration. The organisation chose to experiment iteratively with internal teams, ensuring that outputs met the standards required for client-facing financial workflows.

In industrial settings, errors are often operational, contained, and reversible, while the cost of slowing experimentation can be high. In financial analytics, the cost of error is visible, consequential, and difficult to undo, while slower experimentation carries far less risk. UltraTech therefore optimised for learning speed, while Crisil optimised for learning control.

AI experimentation is not optional. The leadership task is to balance the cost of being wrong with the cost of moving too slowly.

AI impact beyond ROI

A second barrier to AI progress is the overemphasis on immediate financial ROI. When every initiative must justify itself through near-term returns, organisations slow experimentation and filter out the very ideas that build long-term capability.

UltraTech addressed this by broadening how it defined impact. Rather than relying solely on financial metrics, it framed AI initiatives around three outcomes: cost reduction, improved decision-making, and improved process effectiveness. Use cases that could be replicated across multiple plants were prioritised, allowing early successes to spread quickly and build organisational confidence.

Crisil applied the same logic in a different context. Instead of aiming for full automation, it focused on augmenting analyst productivity in a core business service. Even partial gains across tasks, such as data compilation and a faster initial draft, freed up analyst time for higher-quality judgement, which ultimately drives client value.

Seen together, the cases highlight a broader insight: early AI value is organisational before it is financial.

Initial gains show up as productivity improvements, better decision processes, and growing confidence in AI systems. These enable learning, which in turn makes future experimentation more effective.

Leaders who insist on immediate ROI risk stalling this learning cycle. Those who broaden their definition of impact create the conditions for AI innovation to emerge.

AI is a trust problem

A common assumption in AI adoption is that better models will automatically lead to better outcomes. Both cases show that this is incomplete.

The real challenge is not building AI systems. It is ensuring that end users trust those systems enough to rely on them in their own decisions.

This is where AI differs fundamentally from traditional digital transformation. In most systems, adoption means learning how to use a tool. In AI systems, it means deciding when to trust its judgement.

Both organisations recognised this early and involved end users from the outset. Crisil embedded analysts directly into the development process, ensuring outputs aligned with real analytical expectations. It designed its AI solutions for reliability through a combination of guardrails and human-in-the-loop workflows.

UltraTech worked with plant-level champions who voluntarily tested early solutions and validated them in real operating conditions. Adoption was not mandated. Solutions were trusted only after they demonstrated consistent performance in real operations and confidence spread through visible results, not training.

In both cases, adoption was not imposed. It was driven by trust built through experience.

Guardrails and human-in-the-loop design were not seen as constraints, but as means to bridge the trust gap.

AI systems are adopted when users trust them in their own workflow—not when organisations deploy them at scale.

What boards should be asking now

As AI investments grow, oversight must evolve. Three questions can anchor a productive board conversation.

  1. How are we fostering AI experimentation? Where can we afford to move fast, and where must we prioritise control?

  2. How are we defining AI impact in the early stages? Are we enabling learning and adoption or constraining innovation through premature ROI expectations?

  3. What are we doing to ensure users trust AI in their daily decisions? How are we reducing the cost of relying on error-prone AI outputs through system design, guardrails, and human oversight?

The experiences of UltraTech and Crisil demonstrate that AI success is neither accidental nor incidental. It emerges from deliberate leadership and executive choices about how organisations experiment, define impact, and build trust.

In an era where models are improving rapidly, the enduring source of advantage will not be who adopts AI first. It will be who designs it best.



Nitish Jain

Associate Professor of Management Science and Operations

S. Alex Yang

Professor of Management Science and Operations; Chair, Management Science and Operations Faculty
