The governance gap at the heart of the AI boom
We built governance for environmental harm over two decades. The equivalent for technology’s social footprint is still missing.

In 30 seconds
We now have a sophisticated governance architecture for environmental impact, but there is no equivalent yet for AI’s social footprint.
Developing this governance will be challenging because it’s unclear who to hold to account, and the harms are complex and often invisible.
Companies could conduct risk assessments; invest in real-time tracking and allocate resources for mitigation; and share governance at industry level.
According to Gartner, global spending on artificial intelligence will reach $2.5 trillion in 2026. That figure captures infrastructure, software, services, and devices. How much of that investment is accompanied by serious efforts to understand or govern the social consequences of what is being built is anyone's guess.
Over the past two decades, companies have constructed a sophisticated governance architecture for environmental impact. Reporting standards such as CSRD, ISSB, and TCFD give investors and regulators a shared language for evaluating how firms interact with the natural environment. We have, in other words, the institutional infrastructure to hold companies accountable for their environmental footprint. But the equivalent for technology’s social footprint appears to be almost entirely absent. That gap is about to matter a great deal.
“Technology tends to generate enormous social value and serious social damage simultaneously, and often from the same source.”
Part of the reason this gap has persisted is that technology’s social externalities are different from environmental ones in a way that makes them harder to govern. Carbon emissions are a pure negative, so the governance objective is straightforward: reduce them. Technology, by contrast, tends to generate enormous social value and serious social damage simultaneously, and often from the same source. The AI system that helps a doctor diagnose a rare condition faster is the same class of system that could displace thousands of radiologists. The governance challenge, then, is calibration: how to preserve the value while mitigating the harm. That is a much harder design problem than elimination.
And the difficulty runs deeper still. Two structural features of technology’s social externalities seem to make them especially resistant to the accountability frameworks we have built for other domains.
The attribution problem
The first of these is the attribution problem. Environmental harm follows a traceable causal chain: a factory emits pollutants, they enter a river, a community downstream is affected. With technology, the causal picture is muddier. The harms tend to be emergent, arising from the interaction of a technology with millions of individual decisions, existing institutional structures, and cultural contexts.
Consider professional services, where AI is reshaping how work gets done. McKinsey now runs 20,000 AI agents alongside 40,000 human employees, saving an estimated 1.5 million working hours last year. The productivity benefits are tangible. But if, over time, this contributes to fewer entry-level hires, a narrowing of professional skills pipelines, or a hollowing out of the apprenticeship model that has sustained these industries for decades, who bears responsibility?
“Agency is distributed across the entire ecosystem, and that makes it genuinely difficult to assign accountability.”
The technology company that built the tool? The consulting firm that deployed it? The clients who demanded the efficiency gains? The junior professionals who are now competing with software for tasks that were once their training ground? Agency is distributed across the entire ecosystem, and that makes it genuinely difficult to assign accountability in the way we have learned to do for environmental harm.
The visibility problem
The second is the visibility problem. Environmental harms eventually made themselves visible in ways that were hard to ignore: smog over cities, oil-slicked coastlines, rising temperatures. These physical manifestations created political urgency. Technology’s social harms seem to operate differently. They unfold slowly, beneath everyday perception, entangled with other social and economic forces.
“Technology’s social harms unfold slowly, beneath everyday perception, entangled with other social and economic forces.”
IMF data from January 2026 suggests that automation disproportionately affects entry-level workers, at two to three times the rate of their managerial counterparts. The World Economic Forum projects that 40% of current workforce skills could become obsolete within five years. These are large numbers. Yet the transformation they describe is gradual enough, and diffuse enough, that it rarely triggers the kind of acute public response that an oil spill or a heatwave produces. By the time the effects become legible, the technology is so embedded in economic life that separating its contribution from everything else becomes a puzzle of its own.
If this diagnosis is broadly right, it suggests that conventional accountability tools will need to be rethought for this new context. Three directions seem worth exploring.
Three directions for companies to explore
One is anticipatory assessment. If tracing harm backward is structurally difficult, there may be value in mapping it forward. A company preparing to deploy AI across its operations could, before doing so, conduct a structured assessment of the foreseeable social consequences: what might this mean for the junior hiring pipeline in five years? How could it reshape the distribution of expertise within the industry? What happens to the professionals whose current tasks the system is designed to absorb?
This would be modelled on the environmental impact assessments that are now routine in infrastructure projects, adapted for the particular complexity of social outcomes. It would require intellectual honesty and a willingness to sit with uncomfortable answers. But even an imperfect assessment would represent a significant advance over the current default, which in most organisations amounts to no structured consideration at all.
A second direction could involve ongoing monitoring paired with financial provisioning. If the harms are slow-moving and hard to detect, companies could invest in systems that track social outcomes over time, and commit resources, proportionate to the scale of deployment, to mitigation as impacts materialise. There is a useful analogy here with how banks provision for expected loan losses: they set aside capital today against risks they expect to crystallise tomorrow. Something similar could apply to the social disruptions that transformative technologies are likely to produce. This would represent a shift from after-the-fact corporate social responsibility to forward-looking risk management.
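To make the banking analogy concrete, here is a minimal illustrative sketch of how such a provision might be computed, mirroring the expected-loss logic that underpins loan-loss provisioning (exposure × probability × severity). Every name, category and figure below is hypothetical, invented purely to show the shape of the calculation.

```python
# Illustrative sketch only: repurposes the expected-credit-loss formula
# (expected loss = exposure x probability x severity) as a hypothetical
# social-impact provision. All names and figures are invented.
from dataclasses import dataclass


@dataclass
class SocialRisk:
    name: str            # e.g. "entry-level hiring contraction"
    exposure: float      # scale of deployment at risk, in currency terms
    probability: float   # estimated likelihood the harm materialises (0-1)
    severity: float      # estimated share of exposure lost if it does (0-1)

    def provision(self) -> float:
        """Capital set aside today against a harm expected tomorrow."""
        return self.exposure * self.probability * self.severity


risks = [
    SocialRisk("entry-level hiring contraction",
               exposure=50_000_000, probability=0.40, severity=0.20),
    SocialRisk("skills-pipeline narrowing",
               exposure=30_000_000, probability=0.25, severity=0.30),
]

total = sum(r.provision() for r in risks)
print(f"Total social-impact provision: ${total:,.0f}")  # $6,250,000
```

The value of such an exercise would lie less in the precision of the numbers, which will inevitably be rough, than in the discipline of stating exposure, likelihood and severity explicitly, so that the provision scales with the deployment itself.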
A third possibility is shared governance at the industry level. Gartner projects that 80% of organisations will have formalised AI policies by 2026, but in most cases these are internal guidelines with no external verification and no shared standards. Given that the social consequences of technology tend to be systemic rather than firm-specific, there may be a case for accountability structures that operate across companies: shared measurement frameworks, mutual audit commitments, collective funding for impact research. The financial sector’s experience with prudential regulation after successive crises offers a rough model, imperfect but instructive, of what industry-level governance can look like when the risks are too large and too interconnected for any single firm to manage alone.
“The companies that begin building this infrastructure now will likely have a meaningful say in how accountability is defined.”
None of this will happen through goodwill alone. The history of environmental accountability is worth remembering: the reporting standards, the emissions frameworks, the disclosure requirements we now take for granted all emerged under regulatory pressure. Companies that engaged early helped shape the architecture in ways that reflected commercial realities. Those that waited had blunter, more prescriptive rules imposed on them.
There are good reasons to believe that technology’s social accountability will follow a similar path. The companies that begin building this infrastructure now, however imperfectly, will likely have a meaningful say in how that accountability is defined. The rest will spend the next decade explaining to regulators, shareholders and the public why they did not.