Think at London Business School: fresh ideas and opinions from LBS faculty and other experts direct to your inbox
Whatever we think about them, we can all agree on one thing: computers are brilliant. They can process vast amounts of data. They can do so at an astonishing speed. (How long would it take you to work out the cube root of 36,264,691? Ages – if at all. Yet within a split second, a computer gives you the answer – 331.) And computers are becoming ever smarter. It is now decades since they showed their ability to do humdrum calculations such as finding a cube root. Now they can plan the quickest route to drive from A to B, avoiding traffic jams, and tell us our arrival time. They can predict pretty accurately how many cartons of milk a shop is likely to sell in two days’ time. They can recognise faces. They can steer a car through heavy traffic.
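As a toy illustration of that opening point (a sketch of my own, not anything from the article – the function name and approach are simply illustrative), even a naive integer cube-root routine in Python answers in microseconds:

```python
# Toy sketch: a computer finds an integer cube root effectively instantly.
# Self-contained; round() plus the two while-loops guard against the small
# floating-point error that n ** (1/3) can introduce.

def cube_root(n: int) -> int:
    """Return the integer cube root of a perfect cube n."""
    r = round(n ** (1 / 3))
    # Nudge r down or up if floating-point rounding overshot or undershot.
    while r ** 3 > n:
        r -= 1
    while (r + 1) ** 3 <= n:
        r += 1
    return r

print(cube_root(36_264_691))  # the article's example: prints 331
```

The point is not the algorithm but the speed: what would take a person ages is, for a machine, a triviality.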
This exponential growth in computing power will not stop. Robots will become increasingly deft at performing tasks we currently see as the unique preserve of humans. They will become more and more skilled at interrogating the behaviour patterns of individuals and organisations and then suggesting better ways to accomplish tasks. Artificial intelligence (AI) will surround us even more than it does today. Within a couple of decades, the tech evangelists maintain, it will be able to replicate everything a human is capable of.
A computer will be able to hold a genuinely stimulating conversation. It will be able to devise and perform a seriously funny comedy routine. It will be able to choose the right clothes for your day ahead, iron them and lay them out ready for when you step out of the shower. It will be able to make you feel loved.
Well, maybe. But let’s be clear about what AI can and cannot do. Its “intelligence” is essentially an ability to process and build upon what has gone before. Certainly, computers already have a capacity for “deep learning” – spotting patterns, interpreting them and coming up with something new. (In the board game Go, for example, computers have already shown they can devise genuinely new strategies.)
Call that creativity if you like, but it is creativity that is confined within a narrow set of boundaries: it is about drawing an inference from past experience.
So how does all this relate to the future of work, the future of organisations and the future of the way those organisations are managed? It is already commonplace for accounts to be drawn up with only the smallest human intervention: AI does the basics. AI can scour thousands of documents to find relevant precedents in putting together a legal case. Increasingly, such work and a myriad of other tasks will be executed by AI: the humans who hitherto earned a living doing these things will be redundant.
Those performing more creative, less mechanistic tasks at what we can loosely call the top end of the employment scale should escape this cull of their jobs. Those who work in areas such as providing care for the sick and elderly or serving in restaurants should also continue to see demand for their skills: empathy still counts for something. Hence, as the well-worn argument goes, the increasing application of AI will lead to a hollowing-out of the middle in the jobs market, while those at the top and bottom should see their roles change but endure.
But what does the increasing application of AI imply for companies? Certainly, for a company to survive, it will have no option but to adopt labour-saving, cost-cutting technologies. It will have to strive to match the operational efficiency of its competitors, who will all be doing the same thing. And of course, it will also need fewer employees. But all this does no more than get a company onto the starting grid.
To win the race, it needs to make decisions about which customers to target and what new products or services might be devised to attract them. Here, AI’s limitations are revealed. Decisions such as these require intuition, imagination – and, crucially, an ability to pull together items of information from many different sources. Lateral thinking involves far more than computing power, however vast. No computer ever dreamed up a cool new brand.
Compare John Lewis, the venerable British high street institution, with Amazon. John Lewis is a retailer. So is Amazon. John Lewis uses computing power for routine tasks such as invoicing and stock control: it strives for operational efficiency. But given Amazon’s vast resources, John Lewis will never be able to do more than play catch-up in terms of delivering a given product at a lower cost. In terms of sheer efficiency, it cannot beat Amazon.
So, what to do? The crucial point is that John Lewis and Amazon are not mirror images of one another. Amazon delivers goods to your door after you have ordered them online; the whole process has virtually no human involvement.
(If drones ever become viable for carrying packages, even the person delivering things to your home may be taken out of the equation.) John Lewis, however, majors on the human – offering advice and allowing a product to be inspected, tried out and compared, before it is bought, all within an agreeable environment.
All these characteristics make John Lewis distinctive, and arguably unique. And if it is to survive and prosper, it has to emphasise these distinctive aspects. Trying to ape Amazon is futile. Instead, John Lewis must identify all its unique qualities – qualities that involve the human touch – and concentrate on developing those. The broad point is this: to survive, a firm needs to focus on the things it does uniquely well – and, just as importantly, look for new things that it doesn’t do at the moment but where its unique skills could give it a comparative advantage over others. But that involves judgement; no computer can yet provide that. I am sceptical that it ever will.
The most important decisions that a firm makes will be about where to allocate its resources. And while AI can provide vast amounts of data about what has happened in the past, its predictive powers are limited and do not extend to making strategic decisions.
The evidence for this? Look no further than Facebook. Its algorithms gaily continue to feed its billions of users with material calculated to keep them engaged, providing huge audiences for advertisers looking to reach carefully segmented groups of individuals. AI achieved that brilliantly well. But what it failed to do was spot the potential damage of users waking up to the reality that their personal details were being distributed far and wide; neither did it foresee that Facebook would finally have to confront the fact that it was being used as a medium for spreading untruths (the misleadingly named “fake news”) and the consequent cost of controlling it, if not eradicating it altogether.
Facebook’s AI systems could not and did not see the threats. It’s not simply that AI failed to spot the elephant in the room; AI was in a completely different room. The threats were real enough, but it took the individuals at the head of the organisation – human to a man and woman – to realise, albeit belatedly, that these were issues that had to be tackled. The consequence of their belated acknowledgement? Facebook’s stock market value fell by US$120 billion (£92.5 billion) in a day. And it is now recruiting thousands more people to weed out the untruths and the fake accounts.
The lesson is clear: Facebook’s decision to tackle these crucial issues – and, indeed, its failure to do so much earlier – was a strategic one where AI had nothing useful to offer. AI was and is central to Facebook’s operations, but the most important strategic decisions are taken by humans.
And that brings us back to the key point: deciding where to invest energy and resources requires lateral thinking, intuition and creativity – areas where humans trump machines.
All this is central to the question of how to equip individuals to manage and work within firms that want to survive and prosper. As AI takes over more and more internal functions within the organisation, business managers will have to devote an increasing amount of their time and energy to exploiting their creativity and intuition. In short, they will have to concentrate on what AI cannot do.
On top of this, it will be essential for them to understand what AI is capable of doing – and its limitations. This is emphatically not to say that a good manager will have to be a programmer. But she or he will need to have sufficient understanding of any AI system at least to be able to evaluate the information coming from it: to what extent can an answer be relied upon?
To use a somewhat trite example, if my SatNav tells me to use a particular route for a journey, I want to know whether it is taking into account traffic jams and roadworks. Similarly, if a computer program tells me that a particular company’s stock is undervalued and therefore worth buying, I want to know on what basis it’s making that judgement. And consider this. If I’m a fund manager and I have a piece of software that indicates when a stock is cheap or expensive, it’s inevitable that I won’t be the only fund manager using the software.
Thousands of my competitors will be doing the same. If that’s the case, any potential profit from following the software’s advice is likely to disappear in an instant. The only way to show an above-average return will be by being a contrarian and taking investment decisions that go against the AI grain. As Terry Pratchett said, “Real stupidity beats artificial intelligence every time.”
The ability to evaluate the output of AI, creativity, imagination, a knack for drawing strands of inspiration from disparate sources and a willingness to challenge orthodoxy – these are all things that managers will need to ensure that their organisation thrives in a world where AI becomes increasingly widespread. But no less important will be the manager’s efforts to encourage colleagues to give expression to these quintessentially human talents.
That will mean creating a corporate environment in which radical thinking and experimentation are fostered and nourished: mavericks should be given freedom to come up with new and sometimes crazy ideas. Some experiments will fail, but that has to be seen as simply part of the cost of exploiting creativity. In a static state, AI may give sound guidance on where to allocate resources in the short term: its output is rational. But the really important decisions – about how much to devote to research and development or to training, and in which areas – demand very human attributes.
Quantitative information is useful, but don’t ignore the value of qualitative judgement.
And, as more and more information becomes available to an ever-expanding cohort of individuals in a firm, the role of managers will have to evolve. For generations, managers’ status was bolstered by being the conduit through which information was disseminated, and by their exercise of control. No longer. The value that managers can add will increasingly come from using “softer” attributes to motivate and get the most from their employees.
These human qualities will increasingly be at a premium within a firm. And, as with the John Lewis example, they will also become more important in dealing with customers, clients and other stakeholders.
Take the case of the GP. When you visit your doctor, she or he will have details of your medical history, as well as access to a wealth of information about people of a similar age, with a similar lifestyle, and so on. From that, with a few keystrokes at a computer, the doctor can infer the likelihood of your having or avoiding a range of medical conditions over the next five, 10 or 15 years.
Does that mean the GP is becoming an increasingly unnecessary intermediary? Not at all. Certainly, access to all that data allows the doctor to make quick and well-informed judgements about your health prospects. And crucially, it frees up time to build a relationship with the patient. Don’t underestimate the importance of this. Evidence suggests that people who sustain a one-to-one relationship with an individual doctor over time are likely to live longer than people who see a different GP each time they visit a surgery.
The march of AI – in medicine, in education, in public administration, in charities, in organisations of all types – will not stop. It presents threats, but it also brings countless opportunities. Humans and all their distinctive qualities will become ever more important in the quest for success.