Kate Rosenshine, Author at Microsoft Industry Blogs - United Kingdom
http://approjects.co.za/?big=en-gb/industry/blog

How to build an AI-ready culture: 5 steps to success
http://approjects.co.za/?big=en-gb/industry/blog/cross-industry/2020/09/09/how-to-build-an-ai-ready-culture-5-steps-to-success/
Wed, 09 Sep 2020
UK organisations embracing AI are outperforming the competition. Find out how to build an AI-ready culture to be more resilient and innovative.

Two women, Di Mayze and Kate Rosenshine sit facing each other in armchairs. There are bookshelves, plants, and paintings behind them.

AI has slowly become part of many organisations’ cloud strategies. And it makes business sense – our AI Skills in the UK report found that UK organisations embracing AI-ready cultures are outperforming the competition by 11.5 percent. Organisations that haven’t yet leveraged AI have started to invest in the technology to become more resilient and innovative.

To be successful in implementing AI, organisations need to address their skills gaps and culture. Our report found over a third of UK leaders believe there will be an AI skills gap in the next two years. 28 percent believe we are already experiencing one.

Successful journey to an AI-ready culture

We have learned from our experiences at Microsoft and WPP and are sharing them to help organisations achieve their goals. We believe the organisations ready for successful change are those in which strategy, outcomes, and democratisation are all present and working symbiotically in their use of AI:

Strategy: What differentiates you from competitors?

Outcomes: What is your end-goal?

Democratisation: How will you ensure everyone can access AI, including skills?

We strongly feel that without democratisation, you can’t have the other two successfully. Focus on the skills of the people using the technology instead of the technology itself. This will empower a culture change towards greater innovation and agility. Take a look at our learnings on how to build an AI-ready culture.

1 – Assess your business

What does your skills mix look like? Compare your business to others at a similar stage and to those further ahead in AI maturity. According to our AI Skills in the UK report, 93 percent of senior executives at AI-leading firms globally say they are actively building the skills of their workers or have plans to. Nearly 70 percent of employees are confident their employers are preparing them for an AI-enabled world.

2 – An AI-ready culture puts people first

A man in a Teams online workshop about building an AI-ready culture.

It’s all about the people! Incentivise and empower staff at all levels to learn about AI. Our research finds that those firms that gain the most from AI have also invested in skilling their employees and building a positive, innovation-oriented culture. Share your plans to implement AI and how you will help give employees the skills they need. Listen to feedback and act on it.

3 – Identify ‘champions for change’ genuinely invested in AI

Champions are the front line of change management, your eyes and ears, your feedback loop. Seek people who are naturally interested and enthusiastic. Give them the support they need.

4 – Develop a flexible learning and development programme

Give employees the time and freedom to choose how to upskill in AI. Build a programme with a mixture of formal and experience-based training.

WPP are running two programmes: the AI Academy where they’ve committed to upskill 5,000 data scientists by the end of the year, and the Demystifying AI programme which plans to upskill 50,000 people. Microsoft also runs AI skills programmes, which are available for everyone. The AI Business School helps business leaders understand the value of AI for their industry and the AI School lets you build your own learning path to ensure you develop the skills you need, including no-code and low-code paths.

5 – Create an ongoing culture of experimentation

A female developer works on new projects.

Encourage employees to try new things without judgement and learn from the results. Share knowledge, successes, and even failures with the whole organisation. It’s also important that AI is built ethically, with guidelines to ensure people are creating AI that is responsible towards society. Take a people-centred approach to research, development, and deployment.

Human ingenuity is at the heart of an AI-ready culture

AI disruption is inevitable, and if handled properly it can lead to performance gains and more agile organisations. Keep people at the heart of your AI strategies, investing in their skills and keeping curiosity and creativity as core values. Then you can start realising your full potential and build a successful AI-ready culture.

Find out more

Uncover the value of AI with the AI Business School

Create a learning path to develop your AI skills

Download the report: AI Skills in the UK

Find out more about WPP and their skilling programmes

Join the conversation at Envision

Digital technology is changing not just how organisations operate but how leaders lead. Join us at Envision, where executives across industries come together to discuss the challenges and opportunities in this era of digital disruption. You’ll hear diverse perspectives from a worldwide audience and gain fresh insights you can apply immediately in your organisation.

Connect with leaders across industries to get relevant insights on leadership in the digital era.


About the authors

Di Mayze, a smiling woman with blonde hair and glasses.

Di is WPP’s resident data geek. She has over 20 years of technology and data experience across media, FMCG, finance, and retail, consulting for companies such as Hearst UK, dunnhumby, and Walgreens Boots Alliance.

Di joined WPP in 2014 as MD of Acceleration (part of Wunderman Thompson) and left in 2017 to become a freelance Data Strategy Consultant. She didn’t go far, however: having consulted for Wavemaker, VML, Geometry, Wunderman Thompson, and MediaCom, Di joined the WPP CTO team and became Global Head of Data and AI in January 2020.

Di is a creative thinker with proven success in finding new solutions and revenue streams for traditional companies.   She is particularly passionate about getting non-analysts excited about the possibilities of data.​

Di has an MBA from Cranfield School of Management and is a qualified Project Manager and a Neuro-Linguistic Programming Practitioner. This year she joined Gartner’s Chief Data Officer Group as a Governing Body member and was delighted to be included in the DataIQ Top 100 data leaders list.

 

Kate Rosenshine is the Head of Azure Cloud Solution Architecture for Media, Telco, and Professional Services at Microsoft UK, working with customers to architect end-to-end solutions using Microsoft cloud technologies, with an emphasis on creating solutions that leverage data through AI.

A behavioural neurobiologist by training, she is passionate about the intersection between technology and business, and how new technologies can shape organisations as they evolve.

In her earlier role at Microsoft, she led the Data and AI Cloud Solution Architecture team for Financial Services. Under her leadership, the team helped organisations shape their data strategies in a scalable and responsible way.

Prior to Microsoft, Kate worked at a start-up that used Big Data to predict commodity flows for Financial Services Institutions, focussing on data fusion, macroeconomics, and behavioural analysis. She also holds an MSc in Molecular Biology from Bar Ilan University and an MBA from Tel Aviv University.

Interconnected data for an interconnected planet
http://approjects.co.za/?big=en-gb/industry/blog/cross-industry/2020/03/12/interconnected-data-for-an-interconnected-planet/
Thu, 12 Mar 2020
Discover how a range of technologies such as AI can unlock data to address complex intersections in science and technology.

Senior male farmer driving tractor to plow through planted rows in farm field in South Africa.

There’s a limited amount of data and metrics surrounding the way we produce, supply, and consume food. Unfortunately, much of this information is fragmented. Until recently it’s been impossible to bring that data together in a meaningful way.

We set up Agrimetrics to help address challenges in the food system using new technology and data. In this post, we highlight how a range of technologies can tackle complexity in the food system and make it more resilient.

Untangling a complex system

While it’s clear that technology is powerful, the challenge is creating effective business models that support solving these problems.

I get excited about the idea that sharing and connecting data can yield insights that would not be possible without the latest technologies. However, it requires a connection to data that is hard to build in a sustainable way. And quite often, people don’t share data without a value exchange. Also, organisations have to weigh the risk vs. value equation in sharing that data – because we all know that data can be misused.

In my previous role as Director of the Centre for Food Security, one of the emerging themes was the complexity of the food system. The lack of sharing data has made the food system inherently unpredictable and vulnerable.

The food system is dependent on factors beyond human control, like the weather. Add to that the fact that food is at the heart of human existence, and issues in the system can have far-reaching consequences. The food system is global: it’s not inconceivable that a drought in one part of the world could cause food shortages elsewhere.

Portrait of farm worker holding sickle to harvest wheat in field outside of Delhi. The future of farming will rely upon new technology to improve agricultural output and meet the growing global demand for food.

Serendipitously, I came upon the opportunity to bid for funding to create a Centre for Agri-Informatics and Metrics of Sustainable Intensification. I jumped at the chance. A colleague and I started to discuss how we could use data to reduce complexity. In particular, we wanted to tackle the challenge of reconnecting the farmed, natural, and human ecosystems. These have tended to be managed independently of one another.

These are, of course, one ecosystem. By building close connections between the ecosystem and its digital representation through data, we can fix this disconnect.

Innovating the industry by combining science and technology

Agrimetrics is a company that sits truly at the intersection of science and technology. This relationship has always existed, but now it goes beyond the core technical aspects into creating something bigger. We will discuss what this means in practice, and some of the technologies that have changed the way we can build on data.

We are essentially the food and farming sector’s Data Marketplace: a place to find, manage and monetise agri-food data. Our mission is to accelerate the sector’s ability to maximise the value of its data. We want to see a sector where the sharing of data powers the next generation of innovation.

Connecting fragmented data

Making the most of the data in agriculture is harder than in many industries because of its fragmentation. At a much more practical level, we are using technology to provide an infrastructure that supports an agri-food data marketplace.

The key requirements for a data marketplace include:

  • Interoperability of data
  • Connected data
  • Control for data originators over their data
  • A mechanism for value to be exchanged
  • Symmetric information: users need to understand the data they are accessing, and providers need to know how their data is being used

All of these requirements bring technical challenges, including the need for detailed permissioning below the level of the data set. The most interesting to me, however, is interoperability. Many are tempted to think that standardisation is the route to interoperability. However, this imposes rigidity on the data model, when agricultural data is so varied.

During harvest, wheat bushels are aligned in neat rows by farm workers in a field outside of Delhi. Labor-intensive farming methods can be improved with the use of new technologies to make the agriculture industry more efficient, sustainable, and cost-effective.

Agricultural data includes numerical data like prices and yields. It also includes things like plant names, which might be in Latin or a local language. There is also the human challenge: some data standards differ from one area to another. Persuading one community of users to abandon their cherished standard in favour of another is likely to be problematic.

We are adopting an alternative. By using semantic data models, we provide comprehensive, machine-readable descriptions of the entities the data is intended to represent, as well as the relationships between them.

The description begins with the general case (a cow produces milk; humans drink milk) and ends at the specific: Ermintrude is a cow; Ermintrude produced 5,000 litres of milk last year; Susan is a human; Susan drank a pint of milk yesterday.

The value of this approach is that it begins with a fundamental description of the world, which can then be made real in many different ways. For example, the cow entity can be specified in any language, or it can be made to represent a specific cow by using data. This data can be supplied in any form, as the example illustrates with the differing units used to measure volumes of milk.
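To make the idea more concrete, below is a minimal sketch of how such a description could be expressed as linked-data triples using the open-source rdflib library in Python. The namespace, class, and property names are illustrative assumptions for this example, not Agrimetrics’ actual ontology.

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

# Illustrative namespace and property names; not Agrimetrics' real schema.
FARM = Namespace("http://example.org/farm#")

g = Graph()
g.bind("farm", FARM)

# General case: Ermintrude is a cow, Susan is a human.
g.add((FARM.ermintrude, RDF.type, FARM.Cow))
g.add((FARM.susan, RDF.type, FARM.Human))

# Specific facts, with the unit made explicit in the property name.
g.add((FARM.ermintrude, FARM.milkProducedLitresPerYear, Literal(5000, datatype=XSD.integer)))
g.add((FARM.susan, FARM.milkDrunkPintsPerDay, Literal(1, datatype=XSD.integer)))

# The relationship between the two entities.
g.add((FARM.susan, FARM.drinksMilkProducedBy, FARM.ermintrude))

print(g.serialize(format="turtle"))

Because the entities and relationships are described explicitly, the same model can absorb data supplied in different languages, formats, or units without forcing every provider onto a single rigid standard.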

Leveraging machine learning and the power of Azure

To build our data marketplace, we realised we needed a sophisticated and connected dataset. We created a knowledge graph with rich semantics to make data interoperable. A knowledge graph uses machine learning to provide structure and create smart relations throughout the dataset.

Graph databases are quite challenging to make performant and usable. They’re queried with SPARQL, which isn’t a friendly language, and a small mistake can easily bring down a database. This is why you need to combine a range of technologies to tackle these problems. In our case, we used Elasticsearch to allow for rapid querying, SQL to store numerical data, and GraphDB to take care of the semantic data.
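As a rough illustration of what querying the graph involves, the sketch below runs a small SPARQL query with rdflib over a file of triples like the ones in the earlier example; the file name, prefix, and property names are carried-over assumptions, not our production setup.

from rdflib import Graph

g = Graph()
g.parse("farm.ttl", format="turtle")  # hypothetical file holding triples like those in the earlier sketch

# SPARQL: list every cow in the graph and how much milk it produced.
query = """
PREFIX farm: <http://example.org/farm#>
SELECT ?cow ?litres
WHERE {
    ?cow a farm:Cow ;
         farm:milkProducedLitresPerYear ?litres .
}
"""

for row in g.query(query):
    print(f"{row.cow} produced {row.litres} litres last year")

Even a query this small is easy to get wrong: a mistyped prefix or property name simply returns no rows. That is part of why a graph store is usually paired with more forgiving layers, such as a search index for fast lookups and a relational store for numerical data.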

Azure allows us to scale as our data grows. Most importantly, Azure is built with security by design to help us keep the data safe. It even uses machine learning and AI to stay ahead of modern security threats and keep its cybersecurity intelligence up to date.

Helping create sustainable food production

A man holds green peas in his palm, which is stained red from handling betel nuts.

The volumes of data that are being created today are transforming the way we do science. With plentiful data, we can have much larger samples to produce more actionable insights for the food system.

As we have highlighted above, however, making the most of data in agriculture is harder than in many industries because of its fragmentation. Therefore, we have taken a more practical approach, leveraging the power of Azure and machine learning to provide a robust infrastructure that supports the agri-food data marketplace.

By simplifying complex food systems through the use of data, analytics, and AI, we’re improving resilience and helping solve the global challenge of economically, ethically, and environmentally sustainable food production.

Find out more

Four skills needed to use AI for social good

Accelerate competitive advantage with AI

About the author

Black and white photo of a man smiling at the camera with glasses, Dr Richard Tiffin.

Richard Tiffin is Agrimetrics’ Chief Scientific Officer and Professor of Applied Economics at the University of Reading.

Richard read Agriculture at the University of Newcastle and completed a PhD in Agricultural Economics at the University of London. He lectured in Agricultural Economics at both Newcastle and Durham before joining the University of Reading where he was appointed Professor in 2006.

Richard was previously Director of the Centre for Food Security, leading the University of Reading’s strategic research in the area of food security and fostering internal and external collaborations to meet the multidisciplinary food security agenda. His research, which is focused on diet and health policy, has examined the impacts of alternative food policies on land use in the UK and the impacts of both a soft drink tax and a ‘fat tax’ on health in the UK.

Richard’s research group is currently developing an empirical framework to better understand the cognitive underpinnings of dietary choice.

 

Kate Rosenshine is the Head of Azure Cloud Solution Architecture for Media, Telco, and Professional Services at Microsoft UK, working with customers to architect end-to-end solutions using Microsoft cloud technologies, with an emphasis on creating solutions that leverage data through AI.

A behavioural neurobiologist by training, she is passionate about the intersection between technology and business, and how new technologies can shape organisations as they evolve.

In her earlier role at Microsoft, she led the Data and AI Cloud Solution Architecture team for Financial Services. Under her leadership, the team helped organisations shape their data strategies in a scalable and responsible way.

Prior to Microsoft, Kate worked at a start-up that used Big Data to predict commodity flows for Financial Services Institutions, focussing on data fusion, macroeconomics, and behavioural analysis. She also holds an MSc in Molecular Biology from Bar Ilan University and an MBA from Tel Aviv University.

3 ways organisations can use AI in a responsible way
http://approjects.co.za/?big=en-gb/industry/blog/cross-industry/2020/01/08/3-ways-organisations-can-use-ai-in-a-responsible-way/
Wed, 08 Jan 2020
The rise in AI technologies creates more urgency for organisations to understand the implications of AI-empowered decision making and how to ensure AI is being used responsibly.

Since I spoke at techUK’s Digital Ethics 2018 conference, the conversation on AI has continued to grow. Research that we recently conducted showed that UK organisations have been increasing their adoption of AI technologies over the past year. The number of companies that now state they have an AI strategy in place has more than doubled, from 11% in 2018 to 24% today, and over half of organisations report using AI to some degree, indicating that AI is becoming increasingly accessible.

Is responsibility keeping pace with accessibility?

The rise in AI technologies creates more urgency for organisations to understand the implications of AI-empowered decision making and how to ensure AI is being used responsibly. However, many UK leaders lack an understanding of how AI can be used in a fair, responsible, and effective way, with nearly two-thirds (63%) not knowing how AI systems reach conclusions.

As AI expands and embeds itself further into daily life, nearly every organisation will need to address the question of how to create responsible AI systems that their staff and customers have confidence in. Framing this as “Responsible AI” rather than “Ethical AI” captures the wider concepts and approaches that can drive shared responsibility across people, society, industry, and government.

Societal values should be baked in

The public debate on the societal impact of AI cannot be ignored by those developing or implementing AI solutions. A new approach to this conversation is required to ensure that AI technologies are aligned with societal values, and that a regulatory regime that both protects citizens and encourages innovation is in place.

We should not approach the need for policy and regulation with fear, or with the view that it will hinder technological development. In 1982 the then UK government appointed the leading ethicist Baroness Warnock to chair the Committee of Inquiry into Human Fertilisation and Embryology. The Committee brought together ethicists, scientists, religious and lay leaders and, crucially, the public to consider what the rules should be around in vitro fertilisation. The work of the Warnock Commission culminated in the 1990 Human Fertilisation and Embryology Act, which governs human fertility treatment and experiments using human embryos in the UK. It has also shaped much of the global conversation and made the UK a world-leading centre for fertility research. This demonstrates the importance of creating the right rules for technology to maintain public confidence and support innovation.

Being responsible and getting things right as the AI appetite grows

When it comes to AI, we have seen an increase in appetite from business leaders to be at the forefront of pioneering AI technologies: from only 14% in 2018 to 28% in 2019. This underlines the increased urgency of getting ahead with AI to enable successful business outcomes, but it has also created an increased urgency to get things right with AI. With all the truly amazing progress that has been made in AI over the past year, it is still important to remember that we are at a very early stage of truly understanding the magnitude of the impact on our global society should this technology remain unchecked.

To ensure ongoing public trust in their brand, organisations must consider the long-term reputational and cultural benefits of moving beyond high-level principles on the ethical use of AI and focusing on what those principles mean in practice when they implement and deploy AI. Regulators have an important role to play here as well, by taking a risk-based approach that is focused on outcomes rather than technology in order to support innovation, for example anti-discrimination regulation that is technology agnostic.

3 steps to ensure AI serves society in a responsible way

So, where do we go from here to ensure that AI is serving our society in a healthy and responsible way? Organisations must think of AI technology in a holistic way – understanding where AI sits in the value chain and creating the right structures to ensure long-term governance by:

  1. Establishing internal governance, for example an objective review panel that is diverse and has the knowledge to understand the possible consequences of AI-infused systems. A key success factor is leadership support and the power to hold leadership accountable.
  2. Ensuring the right technical guardrails are in place, creating quality assurance and governance that provide traceability and auditability for AI systems (a minimal sketch of what such an audit trail could look like follows this list). This is an important part of every organisation’s toolkit to allow operational and responsible AI to scale.
  3. Investing more in their own AI education and training so that all stakeholders – both internal and external – are informed of AI capabilities as well as the pitfalls.
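To illustrate the second step, here is a minimal, purely illustrative Python sketch of an audit trail around a prediction call, recording the model version, a hash of the inputs, and the output. The function and field names are hypothetical assumptions, not a prescribed framework or a Microsoft product API.

import hashlib
import json
import logging
from datetime import datetime, timezone

audit_log = logging.getLogger("ai_audit")
logging.basicConfig(level=logging.INFO)

def predict_with_audit(model, model_version: str, features: dict) -> float:
    """Run a prediction and record enough context to trace and audit it later."""
    # Hash the inputs so the record is traceable without storing raw personal data.
    features_hash = hashlib.sha256(
        json.dumps(features, sort_keys=True).encode("utf-8")
    ).hexdigest()

    prediction = model.predict(features)  # hypothetical model interface

    # Append-only audit record: which model produced the decision, from which inputs.
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "features_sha256": features_hash,
        "prediction": prediction,
    }))
    return prediction

Records like these give a review panel something concrete to inspect when a decision is challenged, which is what turns a governance principle into an operational guardrail.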

Find out more

Download the report: Accelerating competitive advantage with AI

Ethical AI: 5 principles for every organisation to consider

About the author

Kate currently leads the Data and AI Cloud Solution Architecture team for Financial Services at Microsoft UK, helping organisations shape their data strategies in a scalable and responsible way. Her main focus lies at the intersection between technology and business, and how data can shape organisations and AI systems. Prior to joining Microsoft, she worked at start-ups where she focused on leveraging big data and behaviour analytics to augment decision making. Kate comes from a background in scientific research, specialising in neurobiological genetic engineering. During her research, she studied the influence of genetics on behaviour and survival mechanisms. She holds an MSc in Molecular Biology from Bar Ilan University and an MBA from Tel Aviv University.

4 things holding businesses back from AI implementation (and how to address them)
http://approjects.co.za/?big=en-gb/industry/blog/cross-industry/2019/11/18/4-holding-back-ai-implementation/
Mon, 18 Nov 2019

“We’re mostly seeing positive strides in AI,” Dr Chris Brauer noted during this year’s Future Decoded, “rather than giant leaps.”  

Our in-depth report into AI, ‘Accelerating competitive advantage’, showed that 38% of business leaders want to be at the forefront of AI innovation. This figure has doubled since last year.  Despite the overwhelming desire to be at the forefront of emerging technologies, many organisations are struggling to successfully implement AI.  

 

Never-ending experimental phase – or not doing anything at all

While 8% of UK organisations consider themselves in the advanced stage of AI use, plenty more find themselves trapped in the experimental phase. 48% of organisations say they’re ‘experimenting’ with the technology. They tinker, they toy, they try – but many struggle to move beyond exploration of the technology into enterprise-scale AI.

I’ve seen many organisations getting stuck in the experimental phase, and very few ever manage to move into a live AI product. Even those that do deploy AI only manage to do so in pockets within the organisation, and face challenges when trying to scale. It’s also apparent that a large section of the businesses we talked to don’t have any AI strategy in place at all; 34%, according to our research, are doing nothing in the burgeoning AI arena.  

This creates some major challenges in demonstrating the business ROI of AI. Without demonstrable wins, as small as they may be, organisations might fail to see the benefits and turn away from this technology.

So, what’s the best way to advance from being in the 48% – or even the 34% – to the 8%?  

To start, you should approach AI implementation as you would any other ongoing digital transformation project. That is to say, this isn’t something to be driven solely by your IT department; it demands buy-in at every level of your organisation, since the technology will empower every job role. This requires thinking beyond the technology itself, and starting to think about the processes, governance, and roles that need to be in place to foster AI innovation.

One of my favourite statistics from our report shows that 96% of employees claim managers never consulted them on the introduction of AI into the workplace – and 83% of leaders claim employees never asked about the business’s AI plans.  

Or, to put it another way: organisations need to leverage communication to drive innovation.

 

Lack of understanding

It’s very easy for organisations to get caught up in the hype surrounding AI. The technology promises so much, but with that comes a lot of confusion as to what AI actually is, what its benefits are, and what it can actually help businesses achieve.

As Dr Lee Howls, Head of AI at PA Consulting Group, says: ‘It is worth understanding whether you are just trying to do something for technology’s sake, or if there is a genuine problem that might be solved through AI.’  

While scaling the technology should be approached like any other digital transformation, this is more than just another IT project. AI has gone beyond the technical definition, impacting every employee in every department, from marketing to finance. Therefore, it is fundamentally important that organisations think about AI enablement and education across all roles and functions. Through this understanding of AI, organisations will be able to unlock capabilities and potential.

AI must be used fairly, responsibly, and effectively. The challenge is, many business leaders aren’t entirely sure how to implement the technology in this way. A lack of training lies at the heart of this issue. ‘Accelerating competitive advantage’ reveals that a little over a fifth of UK leaders have fully completed training; they understand how AI complements their job and empowers their organisation.

On the other hand, two-thirds don’t yet know how AI actually works, and therefore where it would be best placed. Without a fundamental understanding of how the system comes up with the conclusions it does (hint: lots of data + lots of compute + algorithms = AI), it’s impossible to fully recognise the value of AI.

 

Lack of process and tools

A strong data strategy is what separates advanced AI organisations from their rivals. Not every business is equipped to deliver that. 

Part of the issue here is the ‘novelty’ of AI. The systems have evolved at great pace, so organisations now find themselves playing catch-up. ‘How can we transform AI into something enterprise-grade?’ leaders wonder.

The answer, of course, is the introduction of the right processes and tools.  

Hugh Milward, Microsoft UK’s director of CELA, strikes a sympathetic note, saying: ‘It’s hard for a company to make a decision that looks like it is against its own short term commercial interests, but that is the point where ethics really hits the road. Having the right process by which making the “right” decision is eased for the Chief Executive Officer and management of the company is really important.’ 

Creating any sort of AI framework doesn’t end post-launch; AI systems are constantly evolving and iterating on themselves. The launch is only the beginning, and organisations need to have the right processes in place to review and refine these systems over time to maximise the value of AI.

 

Cultural change

Any sort of organisational change can be challenging for employees. With the large scale of change AI presents, your business may be facing a full-blown culture shock.  

Perhaps this is linked to a misconception of how AI should be used. It shouldn’t be used as a replacement for human workers, but to augment their roles and allow people to use their uniquely human skills to do what they do best.

The introduction of AI demands a change of skills and a change of mindset – neither of which happens overnight. In itself, this wouldn’t be an issue: every business leader understands how change must be managed without damaging morale. However, in our report, 71% of leaders say they’re not sure how to cope with staffing changes and workplace disruption as they drive AI adoption.

Thankfully, according to PwC’s 2018 Economic Outlook report, there is a very real chance that “AI will create as many jobs as it displaces”. This chimes well with the outlook of both employers and employees who are eager to become AI literate: in AI-advanced organisations, 66% of business leaders claim to be actively supporting their employees on the path to AI literacy. Meanwhile, 36% of employees state that they’d use the time saved by the technology to learn new skills; 29% believe AI would allow them to take on new responsibilities. On the flip side, however, a little over one in ten workers have completed any sort of educational training.

The only way to overcome this sort of challenge is for businesses to first gain buy-in, then offer dedicated training on AI systems, and ensure everyone is brought along on the AI journey. Microsoft’s popular AI Business School is an excellent place to start when creating new opportunities for your business, employees, and customers.

 

 

Find out more

Download the full AI report, ‘Accelerating competitive advantage’

Watch Kate’s Future Decoded session, ‘Our Intelligent Future: How AI Will Impact Business and Ethics?’ 

 

About the author

Kate Rosenshine currently leads the Data and AI Cloud Solution Architecture team for Financial Services at Microsoft UK, helping organisations shape their data strategies in a scalable and responsible way. Her main focus lies at the intersection between technology and business, and how data can shape organisations and AI systems. Prior to joining Microsoft, she worked at start-ups where she focused on leveraging big data and behaviour analytics to augment decision making. Kate comes from a background in scientific research, specialising in neurobiological genetic engineering. During her research, she studied the influence of genetics on behaviour and survival mechanisms. She holds an MSc in Molecular Biology from Bar Ilan University and an MBA from Tel Aviv University.

How to adopt AI at scale – the right way
http://approjects.co.za/?big=en-gb/industry/blog/cross-industry/2019/07/02/how-to-adopt-ai/
Tue, 02 Jul 2019
AI is changing the future of work. Find out how to adopt AI while ensuring it’s ethical, transparent, and successful in your organisation.


AI is increasingly becoming a core technology for companies. Today, we are in the early stages of understanding what AI systems will be capable of. Right now, AI is very good at recognising photos, such as identifying people to tag on social media, or recognising words in voice commands for a chatbot. We’re seeing businesses across all industries adopt AI to transform their customer experience. But we are a long way from having systems that have the general ability to understand the world, use judgement, or be creative, as our research into the UK’s current AI scene, ‘Accelerating competitive advantage with AI’, shows.

Growth of AI

AI already plays a significant role in many people’s lives, and this is expected to grow. In June 2018 we polled over 1300 people across the UK on their views on AI.

88 percent of those we polled were familiar with the term ‘artificial intelligence’. 79 percent agreed that computers and technology have become smarter in the last five years.

Of those polled, 29 percent described AI as already useful to them. 46 percent expected it to be useful in five years’ time.

At the same time, a significant proportion of those we talked to were already making extensive use of smart speakers and virtual assistants, integrating them into their daily lives. Despite this, they didn’t realise they were already using AI solutions to answer questions about the weather, transport, or who got through on Britain’s Got Talent the previous night.

AI is increasingly becoming a core part of the technology toolkit available to almost every organisation, large and small, and is fast becoming crucial to remaining competitive. We are seeing more interest in, and deployment of, AI solutions. However, it is equally important that organisations rolling out AI carefully consider the ethical and societal consequences of their decisions.

Not just a story anymore

Over half of the public claim that the most common place they hear about AI is through fiction. This technology has the power to disrupt, cause harm, or do good on an unprecedented scale.

Stories like 1984, Ex Machina, or I, Robot undoubtedly cast light on genuine concerns about the importance of protecting privacy and the challenges of designing safe, reliable AI. At the same time, fiction can easily mislead public understanding of how AI works and how sophisticated it actually is today.


AI for good

Reliability and accountability are both crucial to ensure that AI technology is successfully and sustainably deployed and works equally well for everyone.

We shouldn’t be afraid of this new world, or believe that we can’t solve issues of liability as a society. We did this when the motor car became widespread: we came up with rules, codes of conduct, and insurance for protection.


At Microsoft, we believe taking a human-centred approach is important when you’re looking to adopt AI. AI isn’t designed to replace us. It’s designed to extend our capabilities, allowing us to be more creative and innovative.

What matters is that we have agency over AI, that we know where to go when things go wrong, and that mistakes can be corrected.

Regulators also have an important role to play here. A risk-based approach that is focused on outcomes rather than technology can encourage and support innovation. For example, in the financial services space companies already take account of anti-discrimination regulation. The regulatory regime simply needs to ensure all regulated businesses understand that this approach is technology neutral and applies equally to existing solutions and those being developed with AI at their core.

AI principles

AI systems are getting more sophisticated and are starting to play a larger role in people’s lives. It’s imperative for companies to develop and adopt clear principles that guide the people building, using and applying AI systems.

Among other things, these principles should ensure that AI systems are fair, reliable and safe, private and secure, inclusive, transparent, and accountable. To help achieve this, the people designing AI systems should reflect the diversity of the world in which we live.

At Microsoft, we believe AI should embody these four principles:

A list of the AI principles

We have also created the AI and Ethics in Engineering and Research (AETHER) Committee. AETHER brings together senior leaders from across the company to form internal policies and respond to issues. Its aim is to ensure our AI platform and experience efforts remain deeply grounded in our core values and principles and, most importantly, benefit broader society.

One of the ways we are doing this is by investing in strategies and tools for detecting and addressing bias in AI systems. AI is a great opportunity, but we need to ensure we always act responsibly for our customers and partners.

AI skills

Contextual image of woman touching screen while working on Black Surface Laptop 2 inside at desk.

At Microsoft, we see skills and education as driving tech intensity and AI. And it’s not just about the technical side. It’s important to develop the soft skills that drive innovation and help organisations decide how to adopt AI in an ethical, responsible, and effective way.

We also see ourselves as the technology partner that helps organisations and partners build their own capabilities. This drives trust, as well as lifting the skills base everywhere. We have various free courses and education resources, such as:

AI Academy

Our AI Academy pulls together a collection of courses and learning resources to help you develop the skills you need to work with and adopt AI so you can fully embrace its potential, whether that’s to increase your productivity or create a stronger customer experience.

AI Academy: http://approjects.co.za/?big=en-gb/athome/digitalskills/exceed/

Digital Skills

Within the next two decades, 90 percent of jobs will require some level of digital proficiency, while the shortage of technical skills continues to grow. However, while there’s a growing need for digital skills, our own research and experience has highlighted an increasing cloud skills gap.

Improve your digital skills: http://approjects.co.za/?big=en-gb/athome/digitalskills/improve/

Correct adoption = better business

Adopting AI is a vital step for organisations that want to succeed in the future of work. From optimising operations and transforming products to engaging customers and empowering employees, there can be no doubt that AI is set to reinvent traditional ways of working.

However, it must be built on a strong ethical framework with human values at the centre. A framework that protects data privacy, guards against the malicious misuse of AI, and lays out clear guidelines around issues like inherent bias, automation, and where responsibility lies when things go wrong.

Having an ethical and responsible approach when you adopt AI is good for business. Organisations that are investing in establishing the right approach to AI technology now – specifically, by developing underlying values, ethics, and processes – outperform those that are not by 9 percent.

Download the full AI report: https://aka.ms/acceleratingai

 

Find out more

Maximise the AI opportunity

About the author

Kate Rosenshine currently leads the Data and AI Cloud Solution Architecture team for Financial Services at Microsoft UK, helping organisations shape their data strategies in a scalable and responsible way. Her main focus lies at the intersection between technology and business, and how data can shape organisations and AI systems. Prior to joining Microsoft, she worked at start-ups where she focused on leveraging big data and behaviour analytics to augment decision making.

Kate comes from a background in scientific research, specialising in neurobiological genetic engineering. During her research, she studied the influence of genetics on behaviour and survival mechanisms. She holds an MSc in Molecular Biology from Bar Ilan University and an MBA from Tel Aviv University.

 

This blog was written in collaboration with David Frank (Government Affairs Manager) and Tom Morrison Bell (Government Affairs Manager).

 
