
How AI can change the world

Juan M. Lavista Ferres, the Corporate Vice President and Chief Data Scientist of the AI for Good Lab at Microsoft, on the difference a little bit of tech can make in our lives

Published: Fri 8 Nov 2024, 9:13 AM

Updated: Fri 8 Nov 2024, 10:17 AM

By Yash Wadhwani


Artificial intelligence is changing how we think about, process, and analyse data. At the forefront of that shift is the AI for Good Lab at Microsoft, which uses AI to improve data collection and analysis around crises and climate change, offering the hope of quicker response times at crucial moments. Juan M. Lavista Ferres, Corporate Vice President and Chief Data Scientist of the AI for Good Lab at Microsoft, leads a team of scientists and researchers working in AI, machine learning, and statistical modelling. His editorial leadership shapes the Microsoft Journal of Applied Research (MSJAR), helping to define AI and data science within the company.

He spoke to us about the strides the company is making in finding new ways to use AI, where it’s lacking, and how it can help us recover faster should there be another pandemic.

Excerpts from the interview:

The application of AI towards solving our collective global problems is a noble pursuit. I'm sure your book AI for Good will inspire many others to do the same. Tell us, what inspired you to work in this field?

Early in my career, I worked at the Inter-American Development Bank, where I was involved in providing data to measure the impact of projects in developing countries across areas like health, education, and sanitation. Seeing firsthand how technology could drive meaningful change in underserved communities deeply inspired me. This experience showed me the power of data and innovation to solve real-world challenges, and it became the foundation of my passion for leveraging technology — especially AI — to make a positive impact on the world.

Microsoft’s AI for Good Lab helps organisations tackle challenges related to healthcare, sustainability, climate change, and other humanitarian issues. How can AI be implemented to ensure better pandemic preparedness?

AI plays a crucial role in enhancing pandemic preparedness by improving early detection, response times, and resource allocation. For instance, AI can analyse vast amounts of real-time data from sources like health records, social media, and environmental factors to identify potential outbreaks before they spread. By leveraging predictive analytics, AI helps governments and health organisations anticipate healthcare needs, enabling timely interventions such as vaccine distribution, hospital resource planning, and public health communication. During the Covid-19 pandemic, Microsoft’s AI for Good Lab collaborated with global partners to develop solutions that strengthened these efforts, creating a more resilient and responsive healthcare system for future pandemics.
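For readers curious what predictive analytics on outbreak data can look like at its simplest, here is a minimal, hypothetical sketch in Python: it flags days when reported case counts rise well above a recent rolling baseline. The synthetic data, the 14-day window, and the alert threshold are illustrative assumptions and are not drawn from Microsoft's systems.

```python
# Minimal sketch: flag days when daily case counts climb well above a
# rolling baseline. All numbers here are synthetic and illustrative.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
baseline = rng.poisson(lam=20, size=60)    # 60 days of "normal" case counts
surge = rng.poisson(lam=45, size=10)       # 10 days of an emerging outbreak
cases = pd.Series(np.concatenate([baseline, surge]), name="daily_cases")

rolling_mean = cases.rolling(window=14).mean().shift(1)
rolling_std = cases.rolling(window=14).std().shift(1)
z_score = (cases - rolling_mean) / rolling_std

alerts = cases[z_score > 3]                # days far above the recent baseline
print(alerts)
```

In practice, as Lavista Ferres notes, such signals would be combined with many other sources, from health records to environmental data, before triggering any intervention.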

AI has come a long way in the last 10 years. Are there any natural disasters from the past where you feel the capabilities of big data today could have helped communities rebuild faster?

AI has advanced significantly in the last decade, and its capabilities today could have drastically improved both disaster preparedness and response in past events. Let me highlight two key areas where AI and big data have become game changers.

Rapid Disaster Assessment: Today, AI models that leverage high-resolution satellite data can provide near-real-time disaster assessments. These detailed maps, which once took weeks to compile, can now be available within hours. This allows on-the-ground response teams to assess damage, allocate resources efficiently, and target areas most in need of immediate help. The ability to quickly generate accurate maps of affected regions means that recovery efforts can be mobilised faster, minimising loss of life and speeding up the rebuilding process.

Disaster Preparedness and Risk Mapping: AI also plays a crucial role in disaster preparedness by addressing foundational questions about population distribution and vulnerability. Modern AI models can analyse environmental factors like flood zones or heatwave-prone areas to identify who is most at risk. This allows governments and humanitarian organisations to prioritise vulnerable communities, preemptively allocate resources, and design more effective response strategies. Understanding where people live and who is at risk has been a significant challenge in the past, but today’s AI systems can provide these insights with unprecedented accuracy.

By harnessing these capabilities, AI is not only improving how we respond to disasters but also transforming how we plan and prepare for them, ultimately making communities more resilient.
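To give a concrete, if deliberately simplified, picture of the rapid-assessment idea: real damage mapping relies on trained computer-vision models over high-resolution imagery, but the toy Python sketch below shows the basic pattern of comparing pre- and post-event image tiles and flagging the ones that changed most. The arrays and the threshold are invented for illustration.

```python
# Minimal sketch: naive change detection between pre- and post-event
# satellite tiles. Real systems use trained vision models; the random
# arrays and the 0.1 threshold here are purely illustrative.
import numpy as np

rng = np.random.default_rng(1)
pre = rng.random((4, 4, 64, 64))            # a 4x4 grid of 64x64-pixel tiles
post = pre.copy()
post[2, 3] += rng.random((64, 64)) * 0.8    # simulate damage in one tile

change = np.abs(post - pre).mean(axis=(2, 3))   # mean pixel change per tile

for row, col in np.argwhere(change > 0.1):      # tiles exceeding the threshold
    print(f"Flag tile ({row}, {col}) for review, change={change[row, col]:.2f}")
```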

Are there any current or future areas where you feel we are collectively lacking in data collection or analysis that could help empower our present AI models?

One significant area where we are lacking in data is accessibility scenarios. Currently, 1.3 billion people worldwide live with disabilities, yet many AI systems struggle with accurately addressing their needs due to insufficient data. While advancements in large language models and multimodal AI are making strides in overcoming some of these gaps, many accessibility scenarios remain data-poor. It's critical for society to recognise the value of this data and invest in building more comprehensive datasets to ensure that AI can fully support accessibility needs. Empowering AI models with more diverse and representative data in this area is key to creating inclusive technologies that serve everyone.

Climate change will make record-breaking temperatures more common across the Middle East. Could you tell us about the technologies currently available at the AI for Good Lab that would help us deal with the challenges that come from this?

Heatwaves present a major challenge that often doesn’t receive the attention it deserves, despite their devastating effects. A study found that from 2000 to 2019, approximately half a million heat-related deaths occurred each year. In response to this growing threat, the AI for Good Lab has been leveraging AI and satellite imagery to address the issue. For example, in collaboration with SEEDS in India, we have been using high-resolution satellite data to analyse the materials used to construct homes, particularly focusing on the roofs of buildings. By identifying which structures are more vulnerable to overheating, we can better predict which families will be at higher risk during heatwaves. This information can then be used to guide targeted interventions, such as providing cooling resources or advising on heat-resistant building materials, ultimately helping to reduce the health risks posed by extreme temperatures.
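As an illustration of how roof classification might feed a heat-risk score, here is a small hypothetical Python sketch. The roof categories, weights, temperature reference, and sample households are assumptions made for the sake of example; they are not the SEEDS model or its data.

```python
# Minimal sketch: combine a (hypothetical) roof-material label with forecast
# temperatures into a simple heat-risk score. All values are illustrative.
import pandas as pd

roof_vulnerability = {"metal_sheet": 1.0, "asbestos": 0.8, "concrete": 0.4, "tile": 0.3}

homes = pd.DataFrame({
    "household_id": [101, 102, 103],
    "roof_type": ["metal_sheet", "concrete", "tile"],
    "forecast_max_temp_c": [46, 44, 47],
})

# Risk grows with roof vulnerability and with heat above a 40°C reference
homes["heat_risk"] = homes["roof_type"].map(roof_vulnerability) * (
    homes["forecast_max_temp_c"] - 40
)

print(homes.sort_values("heat_risk", ascending=False))
```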

The tools provided by the AI for Good Lab are essential in forecasting climate dangers. However, the infrastructure needed to run, maintain, and improve the computational resources behind AI models has its own carbon footprint. Can you tell us how the AI for Good Lab is working towards reducing the environmental impact associated with the energy needs of AI?

While AI models are incredibly valuable in addressing global challenges, they are not exempt from contributing to emissions. According to the International Energy Agency (IEA), AI models collectively account for approximately 0.01 percent of global emissions today. Although this figure is relatively small, it is growing, and the potential impact could become significant as AI adoption expands.

At the AI for Good Lab, we are actively addressing this issue. The majority of our models are trained and deployed in Microsoft’s data centres, most of which are powered by 100 percent renewable energy and designed to be highly energy efficient. Additionally, Microsoft has set a bold commitment to become carbon negative by 2030, meaning that we will remove more carbon from the environment than we emit.

Big Data empowers the algorithms that underpin AI models. However, the collection of big data has been associated with an encroachment upon user privacy. Can you tell us how the AI for Good Lab maintains respect for user privacy whilst improving its algorithms, particularly in domains such as diagnostic healthcare?

At Microsoft, we believe that privacy is a fundamental human right, and we design our AI systems with privacy protection at the core. The AI for Good Lab ensures privacy through several key practices:

1. Data Anonymisation: We anonymise and de-identify data to protect individual identities.

2. Differential Privacy: We use techniques that allow us to gain insights from data without revealing personal information.

3. Data Minimisation and Governance: We only collect necessary data, adhering to strict global regulations like GDPR and HIPAA.

4. User Control and Transparency: We ensure transparency in how data is used and give users control over their information.

5. Secure Infrastructure: All data is processed in Microsoft’s secure, privacy-compliant infrastructure with encryption.

By embedding these privacy protections, we can advance AI responsibly without compromising user privacy.
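Of the techniques listed above, differential privacy is perhaps the least familiar to general readers. The minimal Python sketch below shows the core idea, releasing a count with calibrated Laplace noise so that no single individual's presence can be inferred. The epsilon value and the data are illustrative assumptions; production systems are considerably more involved.

```python
# Minimal sketch of differential privacy: add Laplace noise, scaled to the
# query's sensitivity and the privacy budget epsilon, to a count query.
import numpy as np

def private_count(records, epsilon=1.0, rng=None):
    """Return a noisy count; the sensitivity of a counting query is 1."""
    rng = rng or np.random.default_rng()
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return len(records) + noise

patient_records = ["r1", "r2", "r3", "r4", "r5"]    # illustrative placeholder data
print(private_count(patient_records, epsilon=0.5))  # smaller epsilon = more noise
```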

Large language models have, at times, been reported to provide false responses. How does the AI for Good Lab safeguard against these instances, especially in crisis-related work with organisations such as the American Red Cross?

Large language models are incredibly powerful, but they are not always suitable for every scenario. This is particularly true when the risk of false information could have serious consequences. In our work with organisations like the American Red Cross, we prioritise accuracy and reliability, which is why we often rely on traditional AI models, such as computer vision, rather than large language models. For example, in crisis response efforts, we use AI to analyse satellite imagery and identify critical infrastructure or areas affected by disasters. This approach ensures that the insights we provide are based on proven, highly accurate models, minimising the risk of error.

Are there any initiatives currently being worked on between AI for Good and local institutions? Are there any local institutions that AI for Good would like to work with?

Collaboration with local institutions is at the heart of much of the work we do in the AI for Good Lab. Partnering with organisations on the ground is critical to ensuring that our solutions are co-created with the communities they’re meant to serve and have a meaningful and sustainable impact.

One example is our partnership with the Kenya Wildlife Trust and the Smithsonian Institution, where we’re collaborating to develop a data-driven solution to resource competition between pastoralists and wildlife in Kenya’s Maasai Mara.

Thank you for your time, Juan. One last question: how can social entrepreneurs and non-profits here in the UAE get in touch with the AI for Good Lab?

We recently announced with our partners at G42 that we would soon be expanding our AI for Good Research Lab to Abu Dhabi. We expect this will open a lot of opportunities for social entrepreneurs and non-profits in the UAE to engage with us. We will have more on that soon.
