
AI developers: don't forget ethics

Lords AI report urges ethics to become a priority




Human biases can become part of the technology people create, according to Nicos Savva, Associate Professor of Management Science and Operations at London Business School. 

A recent report from the House of Lords Select Committee on Artificial Intelligence (AI), “AI in the UK: Ready, Willing and Able?”, urged people using and developing AI to put ethics centre stage. 

The committee suggested a cross-sector AI Code with five principles that could be applied globally, including that artificial intelligence should be developed “for the common good and benefit of humanity” and should “operate on principles of intelligibility and fairness”.

The committee’s chairman, Lord Clement-Jones, said in a statement: “The UK has a unique opportunity to shape AI positively for the public’s benefit and to lead the international community in AI’s ethical development, rather than passively accept its consequences.” He added that “AI is not without its risks”.  

Professor Savva believes that algorithms can generate biased outcomes that, if left unchecked, can be amplified over time. Writing in London Business School Review, he said: “They do the job they’ve been designed to do. 

“If we crave objectivity and consistency, let’s put the onus back on people to improve the design, use and audit of algorithms.”
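
His amplification point can be made concrete with a small simulation. The sketch below is an illustration of my own, not a model from the article or from Professor Savva’s work: two hypothetical districts have identical true incident rates, but a tiny initial skew in the records steers extra patrols to one district, which then records more incidents and attracts even more patrols the next round.

```python
# Feedback-loop sketch (illustrative assumptions throughout): decisions
# driven by past data generate the very data future decisions rely on.

TRUE_RATE = 0.1           # identical true incident rate in both districts
BASE, BONUS = 100, 50     # baseline patrols plus a data-driven bonus

recorded = {"A": 11.0, "B": 10.0}   # a tiny initial skew in the records

for round_no in range(1, 6):
    # The "algorithm": send the bonus patrols wherever the data says
    # incidents are highest. More patrols there means more incidents seen.
    hotspot = max(recorded, key=recorded.get)
    for district in recorded:
        patrols = BASE + (BONUS if district == hotspot else 0)
        recorded[district] += patrols * TRUE_RATE
    gap = recorded["A"] - recorded["B"]
    print(f"round {round_no}: A={recorded['A']:.0f} "
          f"B={recorded['B']:.0f} gap={gap:.0f}")
```

Although both districts behave identically, the recorded gap widens every round: the data ends up confirming the very bias that produced it.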


Tech giants such as Google are beginning to educate people about bias, but still have a way to go, he said. 

Professor Savva believes there are three limitations when decision-makers rely on algorithms. “It’s important we understand what they are,” he said. “First is transparency: algorithms are a black box – it’s difficult to know if they’re fit for purpose if we don’t know how they work. Second is bias: algorithms are trained to make recommendations based on data that’s not always representative – systematic biases can go unnoticed and these biases can proliferate over time. Accuracy is third: we treat algorithms as infallible – in reality, they’re only designed to work well on average.” 
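
The accuracy limitation in particular is easy to quantify. The numbers below are assumptions of mine for illustration, not figures from the report: a model that is 95% accurate on a majority group but no better than chance on a 10% minority still posts a headline accuracy above 90%.

```python
import random

random.seed(0)

# Assumed accuracies for illustration: strong on the majority group the
# model was mostly trained on, coin-flip on an under-represented minority.
ACCURACY = {"majority": 0.95, "minority": 0.50}

counts = {"majority": 0, "minority": 0}
correct = {"majority": 0, "minority": 0}

for _ in range(100_000):
    group = "majority" if random.random() < 0.9 else "minority"
    counts[group] += 1
    if random.random() < ACCURACY[group]:
        correct[group] += 1

overall = sum(correct.values()) / 100_000
print(f"headline accuracy: {overall:.1%}")   # roughly 90.5% -- looks fine
for group in ACCURACY:
    # the per-group breakdown reveals what the average hides
    print(f"{group} accuracy: {correct[group] / counts[group]:.1%}")
```

A single average conceals that the model is useless for one person in ten, which is why Professor Savva argues its output should be audited rather than treated as infallible.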

Big data, machine learning and data science methods are the best way to deliver faster, cheaper and more personalised customer experiences, Professor Savva explained. “The existence of large datasets, cheaper computational power and better methods to analyse them makes it much easier to implement at scale now than before. 

“And if you don’t, your competitors and disruptors will.” 

The Lords report also warned of the danger of AI becoming dominated by a few big firms. It named Google, Microsoft and IBM as the current leaders in the field and said there was a danger that the fast-growing industry would be monopolised by a favoured few.


Tech titans such as Google have amassed the huge amounts of data needed to train AI-based systems and “must be prevented from becoming overly powerful”, the report said.

So when should people rely on algorithms? “When mistakes can be observed and feedback is reliable, fast and actionable,” said Professor Savva. 

“Use the algorithm as one part of the decision-making process, understand the algorithm’s limitations and present algorithmic output in a way that reflects these limitations, be prepared to audit and promote the right for human review. Place value on human judgement and a test-and-learn approach.”
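
One lightweight way to act on that advice is to audit the algorithm’s output before accepting it. The check below is a hypothetical sketch of mine, with made-up group labels and an arbitrary fairness threshold, not a procedure from the article: it compares approval rates across groups and escalates large gaps for human review.

```python
def audit(decisions, max_gap=0.05):
    """decisions: list of (group, approved) pairs.
    Returns per-group approval rates and whether to escalate to a human."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    rates = {g: approved[g] / totals[g] for g in totals}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap > max_gap   # True = flag for human review

# Hypothetical algorithmic output: group B approved far less often.
sample = ([("A", True)] * 80 + [("A", False)] * 20
          + [("B", True)] * 55 + [("B", False)] * 45)

rates, needs_review = audit(sample)
print(rates, "escalate:", needs_review)   # {'A': 0.8, 'B': 0.55} escalate: True
```

The five-percentage-point threshold is arbitrary; the point is the workflow Professor Savva describes: treat the algorithm’s output as one input, measure it, and keep a human in the loop when it looks off.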
