{"id":20931,"date":"2020-01-08T09:00:58","date_gmt":"2020-01-08T08:00:58","guid":{"rendered":"https:\/\/www.microsoft.com\/en-gb\/industry\/blog\/?p=20931"},"modified":"2019-12-23T12:54:20","modified_gmt":"2019-12-23T11:54:20","slug":"3-ways-organisations-can-use-ai-in-a-responsible-way","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/en-gb\/industry\/blog\/cross-industry\/2020\/01\/08\/3-ways-organisations-can-use-ai-in-a-responsible-way\/","title":{"rendered":"3 ways organisations can use AI in a responsible way"},"content":{"rendered":"
Since I spoke at techUK\u2019s Digital Ethics 2018 conference, the conversation on AI has continued to grow. Research we recently conducted shows that UK organisations have increased their adoption of AI technologies over the past year. The number of companies that now state they have an AI strategy in place has more than doubled \u2013 from 11% in 2018 to 24% today \u2013 and over half of organisations report using AI to some degree, indicating that AI is becoming increasingly accessible.<\/p>\n
The rise in AI technologies creates greater urgency for organisations to understand the implications of AI-empowered decision making and how to ensure AI is used responsibly. However, many UK leaders lack an understanding of how AI can be used in a fair, responsible and effective way, with nearly two-thirds (63%) not knowing how AI systems reach conclusions.<\/p>\n
As AI expands and embeds itself further into daily life, nearly every organisation will need to address this question: how to create responsible AI systems that their staff and customers have confidence in. Framing this as \u201cResponsible AI\u201d rather than \u201cEthical AI\u201d captures the wider concepts and approaches that can drive shared responsibility across people, society, industry and government.<\/p>\n
The public debate on the societal impact of AI cannot be ignored by those developing or implementing AI solutions. A new approach to this conversation is required to ensure that AI technologies are aligned with societal values, and that a regulatory regime that both protects citizens and encourages innovation is in place.<\/p>\n
We should not approach the need for policy and regulation with fear, or with the view that it will hinder technological development. In 1982 the then UK government appointed the leading ethicist Baroness Warnock to chair the Committee of Inquiry into Human Fertilisation and Embryology. The Committee brought together ethicists, scientists, religious and lay leaders and, crucially, the public, to consider what the rules should be around in vitro fertilisation. The work of the Warnock Committee culminated in the Human Fertilisation and Embryology Act 1990, which governs human fertility treatment and experiments using human embryos in the UK, but it has also shaped much of the global conversation and made the UK a world-leading centre for fertility research. This demonstrates the importance of creating the right rules for technology to maintain public confidence and support innovation.<\/p>\n
When it comes to AI, we have seen an increase in appetite among business leaders to be at the forefront of pioneering AI technologies \u2013 from only 14% in 2018 to 28% in 2019. This underscores the urgency of getting ahead with AI to enable successful business outcomes, but it also creates an increased urgency to get things right with AI. For all the truly remarkable progress made in AI over the past year, it is important to remember that we are still at a very early stage of understanding the magnitude of this technology\u2019s impact on our global society should it remain unchecked.<\/p>\n
To ensure ongoing public trust in their brand, organisations must consider the long-term reputational and cultural benefits of moving beyond discussing high-level principles on the ethical use of AI and focusing on what these mean in practice when they implement and deploy AI. Regulators have an important role to play here as well: a risk-based approach focused on outcomes rather than on specific technologies supports innovation \u2013 for example, anti-discriminatory regulation that is technology agnostic.<\/p>\n
So, where do we go from here to ensure that AI serves our society in a healthy and responsible way? Organisations must think of AI technology in a holistic way \u2013 understanding where AI sits in the value chain and creating the right structures to ensure long-term governance by:<\/p>\n
Download the report: Accelerating competitive advantage with AI<\/a><\/p>\n