{"id":4135,"date":"2018-12-04T13:00:19","date_gmt":"2018-12-04T13:00:19","guid":{"rendered":"https:\/\/www.microsoft.com\/en-gb\/industry\/blog\/?p=4135"},"modified":"2019-03-21T17:50:02","modified_gmt":"2019-03-21T17:50:02","slug":"how-to-implement-an-ethical-framework-in-ai","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/en-gb\/industry\/blog\/cross-industry\/2018\/12\/04\/how-to-implement-an-ethical-framework-in-ai\/","title":{"rendered":"How to implement an ethical framework in AI"},"content":{"rendered":"

\"Blogger<\/p>\n

AI is already showing its potential for good causes. It's predicting weather impact, optimising transport in intelligent ways, and projecting when operational machines in manufacturing will need maintenance and parts. It's revolutionising healthcare through genomics and microbiome R&D, and supporting small businesses with smart, fast access to capital. It's even helping to prevent blindness, assisting deaf and hard of hearing students, and aiding cancer research. Incredibly, we're also seeing AI used in our mission to save endangered species and to understand climate change.

Yet it's not without challenges. To ensure AI is used only for good, we must first understand the risks and the host of ethical issues that come with creating thinking machines and relying on them to make important decisions that affect humans and society. We must ask ourselves not what AI can do, but what it should do.

'Should' companies have been shown to outperform 'can' companies by 9%.

– Maximising the AI opportunity, Microsoft UK.

\"\"<\/p>\n

Ethics is key to the future success of AI

The ability to explain how an AI system reaches its conclusions, the predictability of its behaviour, our trust in building and operationalising it, and a robust legal framework supported by adequate legislation are all key to future-proofing the success of AI.

Satya Nadella rightly says, "Unfortunately the corpus of human data is full of biases". At Build 2018 he also explained that one of the jobs of Microsoft's internal AI ethics team is to ensure that the company's work with cutting-edge techniques, such as deep learning, doesn't unintentionally perpetuate societal biases in its products.

AI has not changed some of the fundamentals of computer science, such as "garbage in, garbage out". Machine learning and deep learning, which power many AI systems, learn from large data sets, and in most situations the more data, the better the predictions and the quality of the results. But if the data used to train a model carries a bias, the model's output is likely to be biased too.
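To make that concrete, here is a minimal, hypothetical sketch (not from the article): a model trained on biased historical lending decisions, where the protected attribute and all the numbers are invented for illustration, learns and repeats the disparity present in its training data.

```python
# A minimal, hypothetical sketch of "garbage in, garbage out": a model trained
# on biased historical decisions reproduces that bias in its own predictions.
# The scenario, feature names, numbers and threshold are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic loan data: 'group' is a protected attribute (0 or 1),
# 'income' is in thousands of pounds.
group = rng.integers(0, 2, size=n)
income = rng.normal(50.0, 15.0, size=n)

# Biased historical labels: at the same income, group 1 was approved less often.
approved = (income + rng.normal(0.0, 5.0, size=n) - 10.0 * group) > 45.0

# Train on the biased history (the protected attribute leaks into the features).
X = np.column_stack([income, group])
model = LogisticRegression(max_iter=1000).fit(X, approved)
pred = model.predict(X)

# A simple fairness check: compare predicted approval rates per group.
for g in (0, 1):
    print(f"predicted approval rate for group {g}: {pred[group == g].mean():.2f}")
```

In this toy example the model is never given an explicit rule to discriminate; it simply learns the pattern present in the historical data, which is why curating and auditing training data is an ethical task as much as a technical one.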

So, how do we build an ethical framework for AI?

Let's look at the key elements required for an ethical framework in AI.

Fairness

You need to ensure AI is built and executed with a fairness lens by considering the following: