Mosaic Faces

Project MOSAIC

A Generative AI experience designed to capture and dynamically display public discourse around AI as Art.

Vision

AI is evolving at a remarkable pace, and its impact on society promises to rival that of the Industrial Revolution. As AI seeps into daily life, so does our experience with it. People’s perceptions of AI, and how they see it changing their lives today, are a critical area of inquiry for researchers. The challenge is that traditional survey methods are often static, less inclusive, and ill-suited to capturing the breadth and depth of this shift.

To address this, we introduce Project Mosaic, a Generative AI experience designed to capture and dynamically display public discourse around AI. It acts as a living visual barometer of public sentiment; combined with other input metrics (economic, sociopolitical), it can help infer collective expectations and thereby measure, and ultimately shape, the societal impact of AI.

Experience

Mosaic leverages the speed and creativity of Generative AI to elevate and highlight narratives around public sentiment while promoting a more inclusive experience. It invites the public to answer survey questions and see their responses reflected as responsive art. The framework acts as a living survey model that poses new questions over time, creating new data verticals and generative experiences for ongoing societal engagement and research. This novel approach of visualizing an individual’s response using AI, and telling a visual story through the interactive Mosaic experience, will, we believe, create a unique public display of interactive art.

Technical approach

The technical approach involves a modular architecture that leverages Azure cloud services for scalability and reliability. The system consists of a responsive web-based front-end user experience, a serverless back-end API implementing an AI orchestrator, and a scalable data storage layer. At its core, the orchestrator is responsible for coordinating and executing multiple AI services performing sentiment analysis and art generation.
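To make the orchestrator's role concrete, here is a minimal sketch of the coordination pattern described above. All names (`SurveyResponse`, `analyze_sentiment`, `generate_art_prompt`, `orchestrate`) are hypothetical illustrations; a real deployment would call Azure AI services for sentiment analysis and an image-generation model for the artwork, rather than the keyword stand-ins used here.

```python
from dataclasses import dataclass

# Hypothetical record captured by the web front end.
@dataclass
class SurveyResponse:
    question: str
    answer: str

# Stand-in sentiment agent: a real deployment would call an
# Azure AI sentiment-analysis service here.
def analyze_sentiment(text: str) -> str:
    positive = {"hope", "exciting", "helpful"}
    return "positive" if set(text.lower().split()) & positive else "neutral"

# Stand-in art agent: a real deployment would call an
# image-generation model with this prompt.
def generate_art_prompt(response: SurveyResponse, sentiment: str) -> str:
    return f"Mosaic tile, {sentiment} mood, inspired by: {response.answer}"

# Orchestrator: coordinates the AI services in sequence and returns
# the artifact metadata the storage layer would persist and the
# front end would render.
def orchestrate(response: SurveyResponse) -> dict:
    sentiment = analyze_sentiment(response.answer)
    prompt = generate_art_prompt(response, sentiment)
    return {
        "question": response.question,
        "sentiment": sentiment,
        "art_prompt": prompt,
    }

result = orchestrate(
    SurveyResponse("How do you feel about AI?", "It is exciting and helpful")
)
print(result["sentiment"])  # -> positive
```

The key design point is that the orchestrator only sequences agents and shapes their outputs into one record; the agents themselves remain swappable services.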

To support research experimentation, the solution is extensible in three areas: visualization, AI models, and data storage. Visualizations are driven by a data API that delivers large volumes of artwork and metadata through a content delivery network (CDN) for interactive, near-real-time rendering and exploration. The orchestrator uses a plug-and-play mechanism (e.g., Semantic Kernel) to coordinate multiple multimodal sentiment-extraction and image-generation agents. The storage layer pairs document-oriented storage with Azure Storage, so new data points can be added without significant changes to the underlying information architecture.
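The plug-and-play extensibility point can be sketched with a simple agent registry. This is an illustrative pattern only, not the Semantic Kernel API: the registry name `AGENTS`, the `register` decorator, and the two sample agents are all assumptions made for the sketch. The idea it demonstrates is that new modalities or models are added by registering a function, while the pipeline and storage code stay unchanged.

```python
from typing import Callable, Dict

# Hypothetical plug-and-play registry: each agent maps a raw response
# to one derived field (sentiment label, image prompt, ...).
AGENTS: Dict[str, Callable[[str], str]] = {}

def register(name: str):
    """Decorator that plugs an agent into the registry under a name."""
    def wrap(fn: Callable[[str], str]) -> Callable[[str], str]:
        AGENTS[name] = fn
        return fn
    return wrap

@register("text_sentiment")
def text_sentiment(text: str) -> str:
    # Stand-in for a multimodal sentiment-extraction agent.
    return "positive" if "good" in text.lower() else "neutral"

@register("image_prompt")
def image_prompt(text: str) -> str:
    # Stand-in for an image-generation agent.
    return f"abstract mosaic expressing: {text}"

def run_pipeline(text: str) -> dict:
    # Apply every registered agent to the same input and collect the
    # results as one document-shaped record, ready for persistence in
    # document-oriented storage: new agents mean new fields, with no
    # change to this pipeline or to the storage schema.
    return {name: agent(text) for name, agent in AGENTS.items()}

print(run_pipeline("AI is a good thing"))
```

Because each agent's output becomes a field in a schemaless document, registering a new agent adds a new data vertical without touching the pipeline or the information architecture, which is the extensibility property the storage layer is designed around.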

Public beta: Coming soon (Dec 2023)