{"id":7135,"date":"2019-02-21T11:54:32","date_gmt":"2019-02-21T11:54:32","guid":{"rendered":"https:\/\/www.microsoft.com\/en-gb\/industry\/blog\/?p=7135"},"modified":"2019-03-18T13:40:38","modified_gmt":"2019-03-18T13:40:38","slug":"6-learnings-ethics-ai-meetup","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/en-gb\/industry\/blog\/cross-industry\/2019\/02\/21\/6-learnings-ethics-ai-meetup\/","title":{"rendered":"6 learnings from our ethics in AI meetup"},"content":{"rendered":"
Almost every single day, stories about AI dominate the news headlines, from driverless cars to workplace transformation to helping teams achieve more by working smarter together. But for us, it's not just about the technology.
AI does some amazing things. However, it's essential that we don't fall into the trap of making things better and faster without considering the consequences of our developments. After all, if we do nothing to ensure that AI is safe, aligned to human values, and free from bias, then it has the potential to do more harm than good.
It only seemed logical for us, as a leading technology company, to hold a meetup to share some guidance on how to implement bias-free AI and why it's so important. Pratim Das, Head of Solutions Architecture, Data, and AI at Microsoft's Customer Success Unit, and Ben Gilburt, Digital Horizon Lead at Sopra Steria, led some fascinating discussions about the ethics of AI and technology.

I wanted to share some of the learnings I took away from the event.

1. Biased data = biased results

Pratim talked about the importance of diverse and inclusive data. His key message was that training machine learning applications on biased data produces biased results.

He also listed six key factors to consider when designing AI. To go deeper into each of them and get some practical advice on how to build an ethical framework for AI, I'd strongly recommend reading Pratim's blog.

2. Program AI to behave as we want it to, not how we tell it to

Ben continued this train of thought. He mentioned a few examples where technology failed to be neutral, such as inappropriate 'recommended products' generated by online shopping algorithms. He talked about building AI with indirect normativity and coherent extrapolated volition, approaches that use AI to deliver outcomes we may not be able to foresee ourselves. In simple terms, rather than building AI around our own desires, which can be driven by selfish motivations, we should program it to behave how we would want it to behave.

Ben explains it like this: "Do what we would do if we were the type of people we wanted to be; if we had grown up together, and had convergent values."

3. The need for diversity and inclusion in technology development

An algorithm is only as good as the data it has. Dr Allison Gardner, co-founder of Women Leading in AI, took us through the history of women in programming. Women were at the forefront of programming until it became a well-paid and attractive career. The lack of diversity among the people creating our machine learning models and algorithms means that unconscious bias is present, producing biased models.

Dr Gardner talked about how a lack of diversity and inclusion at every stage of technology development leads to unconscious bias. These biases then risk exacerbating societal biases and embedding inequality in our systems.

"We need to be really honest about why the lack of diversity, particularly with women, has occurred. If we don't, we are not going to change it," she says.

It's hugely important to change the culture around how computer science is taught and recruited.

"We also need to regulate the algorithm. We're coming in with regulation, GDPR, and algorithmic impact assessments which will ensure that we have diversity," she says.

Dr Gardner's session showed me that if we don't actively think about diversity, our models have the potential to exacerbate bias in society.

4. Ethics built in by design

"We should work with people with different backgrounds and skills. This will give us a good chance of preempting any bias," says Amy Boyd. "Do the proof of concept early, and test as widely as you can." This was the advice Amy offered us. As a Cloud Developer Advocate in AI and Machine Learning, Amy has a lot of experience in dealing with data.

Amy talked about one of her projects, in which she analysed tweets to predict the winner of The X Factor each week, gauging whether tweets were positive or negative from the emojis they contained. What she found, however, was that if she didn't keep an eye on the data and monitor it, it would often produce biased results. If you concentrate on building unbiased models, you will keep your data ethical and produce better results for all.
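Amy's approach can be sketched roughly as follows. This is an illustrative example only, not her actual project code: the emoji sets, the sample tweets, and the volume-imbalance check are assumptions added here to show the idea, including the monitoring step she stressed.

```python
# A rough, illustrative sketch of emoji-based sentiment scoring for tweets.
# The emoji sets, sample tweets, and the imbalance check are assumptions made
# for this example; they are not Amy's actual project code.

POSITIVE_EMOJIS = {"😍", "😀", "👏", "🎉"}
NEGATIVE_EMOJIS = {"😡", "👎", "😢", "🙄"}

def score_tweet(text):
    """Score a tweet: +1 per positive emoji, -1 per negative emoji."""
    positives = sum(text.count(e) for e in POSITIVE_EMOJIS)
    negatives = sum(text.count(e) for e in NEGATIVE_EMOJIS)
    return positives - negatives

def weekly_favourite(tweets_by_act):
    """Predict the favourite act from the average emoji sentiment of their tweets."""
    averages = {
        act: sum(score_tweet(t) for t in tweets) / max(len(tweets), 1)
        for act, tweets in tweets_by_act.items()
    }
    return max(averages, key=averages.get)

def volume_imbalance(tweets_by_act):
    """The monitoring step: how unevenly are tweets spread across acts?

    A large ratio means one act dominates the data, so raw tweet volume
    (rather than genuine sentiment) may be skewing the prediction.
    """
    counts = [len(tweets) for tweets in tweets_by_act.values()]
    return max(counts) / max(min(counts), 1)

if __name__ == "__main__":
    sample = {
        "Act A": ["Loved that performance 😍🎉", "So good 👏"],
        "Act B": ["Not for me 🙄", "Shaky start 😢", "Brilliant 😀", "Wow 😍"],
    }
    print("Predicted favourite:", weekly_favourite(sample))
    print("Tweet-volume imbalance ratio:", volume_imbalance(sample))
```

The point of the imbalance check is simply that watching the incoming data is as important as the scoring itself: a skewed stream of tweets produces a skewed prediction, however sound the scoring logic.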
5. Using Shakespeare to explore bias

Richard Potter, CTO of Microsoft Services, took the session down a more theatrical route, using Shakespeare's plays to demonstrate the different types of bias in AI.

Sounds weird, right? And why Shakespeare? Different stories represent different types of bias, which connect to real-world examples of bias in AI and data. To really bring this to life, Richard got volunteers from the audience to act out scenes from the Bard's most famous plays.

Twelfth Night represents pre-existing bias: gender stereotyping and a narrow world view, which we still see today. What we learn from Twelfth Night is that if we addressed this pre-existing bias through inclusive design and impact evaluations, we'd understand the whole picture better, reach our aims quicker, and have better data as a result.

Technical bias comes from incorrect or incomplete data, or from a malfunctioning algorithm. And what better play than Hamlet to represent this madness? Shakespeare shows us unsoundness of mind causing all sorts of chaos. Mix in a failure to learn from mistakes and you have a perfect example of technical bias. We can address this by ensuring our AI is well tested and transparent.

The Bard's final play, The Tempest, shows us emergent bias. We see characters being manipulated and then drifting into poor outcomes. In the same way, AI can be manipulated by the very people it's supposed to help, like a chatbot that learns bad language from its audience. We can address this by ensuring we have ongoing measurement and operational accountability.

Richard's presentation confirmed to me that bias is everywhere, even in Shakespeare. The end goal is 'AI for all': fair and free of bias.

6. AI applied to specific industries

AI is affecting every organisation, so ethics needs to be part of the conversation in every industry.

Udai Chilamkurthi, Lead Architect at Sainsbury's, showed some of the latest AI technology being used in the retail industry. AI can be used to give customers a great personalised, omni-channel experience. However, it's important that this is done with ongoing measurement and careful consideration. AI doesn't have the social and emotional intelligence we have; for example, it might recommend an inappropriate product to a customer that a human would know not to suggest.

Last, but certainly not least, was Chiara Garattini, Senior User Researcher at Public Health England, who spoke about AI in medical engineering. It's incredibly important that AI stays ethical in healthcare. It has great capacity to help diagnose and treat patients accurately, but it's up to us to ensure it is bias-free and works for everyone.

It all comes back to the quality of the data

Interestingly, though none of the speakers had met before, they all came to the same conclusion: AI itself is not the problem; the data we feed it is the problem. Data reflects the real world, and if society itself is biased, how can we hope for truly unbiased data?
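To make that conclusion a little more concrete, even a very simple check on a training set can surface this kind of skew before a model is ever built. The sketch below is illustrative only; the groups, records, and outcomes are invented for the example.

```python
# A minimal sketch of the kind of pre-training data check the speakers argued
# for: look at who is represented in the data, and how historical outcomes are
# distributed across groups. The groups, rows, and outcomes are invented.

from collections import Counter

# Each record: (group, historical_outcome), e.g. past decisions recorded as 1/0.
records = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1), ("group_a", 1),
    ("group_b", 0), ("group_b", 0), ("group_b", 1),
]

def representation(rows):
    """Share of the dataset that belongs to each group."""
    counts = Counter(group for group, _ in rows)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

def positive_rate(rows):
    """Rate of positive historical outcomes within each group."""
    totals, positives = Counter(), Counter()
    for group, outcome in rows:
        totals[group] += 1
        positives[group] += outcome
    return {group: positives[group] / totals[group] for group in totals}

print("Representation:", representation(records))
print("Positive outcome rate:", positive_rate(records))
# If one group dominates the dataset, or historical outcomes already favour one
# group, a model trained on these records will tend to learn and repeat that
# pattern - which is exactly the "biased data = biased results" point.
```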
Richard sums up that conclusion pretty well: "In the end, it's all about us. If we can only talk about AI in a technical language, we'll never achieve what we need to achieve in this space. We need to go beyond our usual narrative forms and find new ways of telling stories to engage everybody in the development of the technology."

This meetup was co-hosted in London by Microsoft Data & AI (Pratim Das) and AI Ethics London (Ben Gilburt). To stay up to date on future events, please join these groups.