{"id":48881,"date":"2021-05-19T14:29:09","date_gmt":"2021-05-19T13:29:09","guid":{"rendered":"https:\/\/www.microsoft.com\/en-gb\/industry\/blog\/?p=48881"},"modified":"2021-05-24T15:06:05","modified_gmt":"2021-05-24T14:06:05","slug":"build-responsible-ai-and-data-systems","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/en-gb\/industry\/blog\/cross-industry\/2021\/05\/19\/build-responsible-ai-and-data-systems\/","title":{"rendered":"6 ways leaders can build responsible AI and data systems and the tools that can help"},"content":{"rendered":"
The power of AI and data to help us solve some of the world's biggest problems is undeniable. It helps organisations deliver better customer experiences, drive innovation, and free up employees to focus on value-driven work. However, responsible AI is an essential condition for trust and innovation. According to Capgemini, nearly nine out of 10 organisations have experienced an ethical issue around AI. We've all seen the media reports about biased algorithms in employment, criminal justice and more.

To build and maintain trust with citizens we, as a data community, have an obligation to address these ethical issues. Previously, I've talked about how to build an effective data strategy and culture. A critical aspect of both strategy and culture is ensuring the ethical and responsible use of AI and data. We need to empower organisations to use data with a sense of responsibility. The EU recently released its Artificial Intelligence Act, the first legal framework for AI. It takes a risk-based approach to protecting EU citizens' rights while still fostering innovation. As we saw with GDPR, the AI Act includes fines for infringements of up to four percent of global annual turnover (or €20M, if greater). It is therefore more important than ever to focus on the responsible use of data and AI.

## Build your responsible AI strategy with the right question

Are you using AI technology to do the right things? Is it solving the right problems in the right way? AI shouldn't be implemented because it's a shiny new piece of technology; it should be used to help solve a problem. And to work properly, it needs to reflect the community you serve. To do this, you need to build your data and AI solutions on ethical principles that put people first.

At Microsoft, one of my focuses as Chief Data Officer (CDO) is to ensure our use of data and AI remains ethical and responsible. What I have found is that this is as much a culture shift as a technological process. When I spoke with other data leaders across the industry in a recent webinar, they agreed.

What was clear across the board is that organisations need to take a very practical approach to responsible data and AI principles. Below are six principles that organisations can use to build their own responsible AI governance, each with tools to help put it into practice.
## 1. Fairness

Although our society is diverse, it is unfortunately unfair and biased. It is our role to ensure that the systems we develop and deploy reduce this unfairness. However, fairness doesn't just relate to the technical components of a system; it is also about the societal context in which it is used.

"Ensuring the biases are taken care of is important. We think about how data is being increasingly used across platforms and avoiding any disproportional impact as a result," says Sudip Trivedi, Head of Data and Analytics at London Borough of Camden.

How can leaders ensure fairness? We need diverse teams that question the data and models we are using at every step of the journey. We need to think critically about the broader implications and unintended consequences. Having checklists to continually monitor data and AI processes is a great way to ensure we stay fair. Leverage tools and learnings to validate fairness regularly.
### Fairness tools:

- AI fairness checklist
- Datasheet fairness checklist
- Fairlearn open-source toolkit
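As a concrete starting point, the Fairlearn toolkit can quantify how a trained model performs across sensitive groups rather than on average. The sketch below is a minimal example; the synthetic dataset and the `sex` sensitive feature are illustrative placeholders for your own data, model and metrics.

```python
# Minimal sketch: auditing a classifier across groups with Fairlearn.
# The synthetic data and the `sex` column are illustrative placeholders.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, recall_score
from fairlearn.metrics import MetricFrame, demographic_parity_difference

rng = np.random.default_rng(0)
n = 1_000
df = pd.DataFrame({
    "income": rng.normal(30_000, 8_000, n),
    "tenure": rng.integers(0, 20, n),
    "sex": rng.choice(["F", "M"], n),
})
df["label"] = (df["income"] + rng.normal(0, 5_000, n) > 32_000).astype(int)

X, y, sensitive = df[["income", "tenure"]], df["label"], df["sex"]
pred = LogisticRegression(max_iter=1_000).fit(X, y).predict(X)

# Break headline metrics down by group rather than reporting one average.
frame = MetricFrame(
    metrics={"accuracy": accuracy_score, "recall": recall_score},
    y_true=y, y_pred=pred, sensitive_features=sensitive,
)
print(frame.by_group)       # per-group performance
print(frame.difference())   # largest gap between groups, per metric

# One summary number: the gap in selection rates between groups.
print(demographic_parity_difference(y, pred, sensitive_features=sensitive))
```

Running a check like this at every retrain, not just at launch, is one way to make the fairness checklist a continuous habit rather than a one-off review.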
## 2. Inclusiveness

Our aim at Microsoft is to empower *everyone* to achieve more. We are intentionally inclusive and intentionally diverse in the paths we take. AI needs to be built with everyone in mind, because when you design solutions that everyone can access, the data you collect will be fairer.

This is where your diverse organisation becomes a huge benefit to you. By ensuring that your data and AI teams are diverse, you will be building for everyone. And don't forget to include a diverse audience in your testing to ensure that your systems remain accessible to all.

"It takes having that diversity within your organisation or stakeholder group to spot issues," says Nina Monckton, Head of Data Strategy, Advancing Analytics & Data Science at AXA Health.
### Inclusive tools:

- Inclusive design guidelines
- Design with accessibility in mind
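One practical habit that supports inclusiveness is checking whether your training data actually reflects the population you serve. The sketch below is hypothetical: the `age_band` column, baseline shares and tolerance are illustrative placeholders, standing in for your own demographic attributes and census-style reference figures.

```python
import pandas as pd

# Population baseline shares (illustrative, e.g. drawn from census data).
baseline = {"18-30": 0.22, "31-50": 0.34, "51-70": 0.30, "70+": 0.14}

def representation_report(df, column, baseline, tolerance=0.05):
    """Flag groups whose share of the data drifts from the population baseline."""
    observed = df[column].value_counts(normalize=True)
    for group, expected in baseline.items():
        share = float(observed.get(group, 0.0))
        gap = share - expected
        status = ("UNDER-represented" if gap < -tolerance
                  else "over-represented" if gap > tolerance else "ok")
        print(f"{group}: data {share:.1%} vs population {expected:.1%} -> {status}")

# Illustrative training set skewed towards younger, digitally active users.
training_df = pd.DataFrame({"age_band": ["18-30"] * 45 + ["31-50"] * 35
                                        + ["51-70"] * 15 + ["70+"] * 5})
representation_report(training_df, "age_band", baseline)
```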
## 3. Reliable and safe

Our data and AI processes need to be consistent with our values and principles. As owners of these models, we need to continuously check that they're not causing harm to society, and if they are, we need processes in place to fix them. We are also transparent with our users about these issues.

Building reliable and safe AI isn't limited to physical systems that affect human life, such as self-driving cars or AI in healthcare. It's also about ensuring that every model you create stays reliable and safe, no matter how big it gets or how many people work on it.
### Reliable and safe tools:

- Accelerate the pace of machine learning while meeting governance and control objectives with MLOps
- Preserve privacy with Project Laplace
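In MLOps terms, one way to keep a growing model reliable is a release gate: a candidate model only replaces the production model if it clears agreed quality thresholds on held-out data. The sketch below is a generic, hypothetical gate expressed with scikit-learn metrics; it is not a specific Azure ML API, and the thresholds are illustrative.

```python
# Hypothetical MLOps-style release gate: a candidate model is promoted
# only if it clears an agreed safety floor and does not regress against
# the current production model. Thresholds are illustrative.
from sklearn.metrics import accuracy_score

MIN_ACCURACY = 0.90          # agreed safety floor for this use case
REQUIRED_IMPROVEMENT = 0.0   # candidate must at least match production

def promote_candidate(candidate, production, X_holdout, y_holdout) -> bool:
    """Return True only if the candidate is safe to deploy."""
    cand_acc = accuracy_score(y_holdout, candidate.predict(X_holdout))
    prod_acc = accuracy_score(y_holdout, production.predict(X_holdout))
    print(f"candidate={cand_acc:.3f} production={prod_acc:.3f}")
    if cand_acc < MIN_ACCURACY:
        return False  # fails the absolute safety bar
    return cand_acc >= prod_acc + REQUIRED_IMPROVEMENT

# In a pipeline, a check like this runs in CI/CD before deployment;
# a failed gate blocks the release and alerts the owning team.
```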
## 4. Transparency

Transparency can help us reduce unfairness in AI systems; it can help developers debug systems; and it helps us build trust with our customers.

Those who create AI systems should be transparent about how and why they're using AI, and open about the limitations of their systems. People should also be able to understand the behaviour of AI systems.

"Being transparent is critical to doing good data work. If you don't have the transparency, it's very difficult to know if it's doing its job well," says Daniel Gilbert, Director of Data at News UK.

To truly understand AI, we need to democratise it through digital skilling, not just within your organisation but within society too. We need to work together to encourage skills growth across our communities with digital skilling programmes. This will also help increase diversity in our organisations as we introduce people to the opportunities of technology careers.

"A lot of the data we are collecting and using are from people who are digitally literate. There's a real hard question: is the data we're collecting really representative of the people we're trying to provide services for?" says Nina.
### Transparency tools:

- Microsoft Learn
- Improve digital skills
- Bridging the digital divide
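Transparency also means being able to explain what drives a model's predictions. A minimal, model-agnostic sketch using scikit-learn's permutation importance is shown below; the dataset and model are stand-ins chosen so the example runs end to end.

```python
# Minimal sketch: a model-agnostic view of which inputs drive predictions,
# using scikit-learn's permutation importance on a held-out set.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature and measure how much performance degrades:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X_test.columns[idx]}: {result.importances_mean[idx]:.3f}"
          f" +/- {result.importances_std[idx]:.3f}")
```

Publishing this kind of summary alongside a model is one simple way to be open about what the system relies on and where its limitations lie.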
## 5. Privacy and security

Privacy is a fundamental right, and it must be built into all our systems and products. With AI, machine learning, and the reliance on data, we add new complexities to those systems. This adds new requirements to keep systems secure and to ensure data is governed and protected.

You must think about where and how the data is being sourced. Is it coming from a user or a public source? How can your organisation prevent corruption and keep the data secure?
### Privacy and security tools:

- Learn about confidential computing
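The Project Laplace reference above points at differential privacy, where calibrated noise lets you publish aggregate statistics without exposing any individual. The sketch below shows the classic Laplace mechanism in plain NumPy as an illustration of the underlying idea; it is not the project's actual API.

```python
# Illustrative sketch of the Laplace mechanism behind differential
# privacy: add noise calibrated to the query's sensitivity and a chosen
# privacy budget (epsilon) before releasing an aggregate statistic.
import numpy as np

rng = np.random.default_rng(seed=42)

def laplace_release(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release a noisy statistic satisfying epsilon-differential privacy."""
    scale = sensitivity / epsilon  # lower epsilon = stronger privacy, more noise
    return true_value + rng.laplace(loc=0.0, scale=scale)

# Example: a counting query ("how many patients opted in?") has
# sensitivity 1, because one person changes the count by at most 1.
true_count = 1_284
print(laplace_release(true_count, sensitivity=1.0, epsilon=0.5))
```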
## 6. Accountability

As leaders, we are accountable for how our systems impact the world. Take facial recognition: there are many good uses for it, but only if we stick to principles that guide how we develop it, how we sell it, and how we advocate for its regulation.

Accountability includes internal and external factors. We need to keep key stakeholders informed across the whole lifecycle of AI systems, and we need to stay accountable to society.

Mahesh Bharadhwaj, Head of Europe Analytics at Funding Circle, talks about asking the right questions at the right time: "Are we using the AI to do the right things? Do we check the models are being built correctly? Are we making sure the model is being deployed in the context it was built for?"
### Accountability tools:

- Explore interaction guidelines
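One lightweight way to make Mahesh's questions enforceable is to attach a simple model card to every model, recording what it was built for, and to check that record at deployment time. The structure below is a hypothetical illustration, not a standard schema; all names and contexts are placeholders.

```python
# Hypothetical sketch: a minimal model card recording the context a model
# was built for, checked before the model is deployed anywhere else.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    name: str
    intended_use: str
    approved_contexts: set = field(default_factory=set)
    owner: str = ""
    known_limitations: list = field(default_factory=list)

    def check_deployment(self, context: str) -> None:
        """Block deployments outside the context the model was built for."""
        if context not in self.approved_contexts:
            raise ValueError(
                f"{self.name} is not approved for '{context}'. "
                f"Contact {self.owner} for a review."
            )

card = ModelCard(
    name="loan-triage-v3",
    intended_use="Prioritise loan applications for human review",
    approved_contexts={"uk-retail-lending"},
    owner="data-governance@example.com",
    known_limitations=["Not validated on business lending data"],
)
card.check_deployment("uk-retail-lending")    # passes
# card.check_deployment("insurance-pricing")  # raises ValueError
```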
## Responsible AI builds trust

To build trust, a balance between culture and data capabilities is key. We need to make sure we are encouraging people to use data in ethical and responsible ways. These six principles should help you build AI systems while building a diverse and inclusive culture. By doing this, we will ensure we're serving our community in the best way possible.
## Find out more

Discover our approach to responsible and ethical AI
## Resources to empower your development team

Build a modern data strategy