Background:
Emerging general-purpose AI models (e.g., LLMs) have shown the potential to enhance productivity, creative expression, and scientific research with capabilities that approach human levels.
Challenge:
As Brad Smith noted, “The more powerful the tool, the greater the benefit or damage it can cause.” Despite these benefits, such models raise significant technical and social challenges, including the need for new research paradigms, the emergence of unforeseeable risks, the fair and inclusive use of AI technologies, and the need for new regulatory frameworks, all of which must be carefully addressed.
Mission:
The mission of Societal AI is to bridge this gap by treating AI not just as a technical tool but as a technology that requires careful socio-technical integration. This research initiative aims to develop new paradigms for evaluating AI capabilities while addressing the regulatory, ethical, and accessibility challenges that come with AI’s growing influence in society. To achieve this goal, we emphasize interdisciplinary collaboration with social scientists to responsibly manage AI’s challenges and risks.
Specifically, we have devoted ourselves to cutting-edge research on:
- Innovating evaluation paradigms that assess AI’s capabilities and performance on new, unforeseen tasks and environments, enabling a more comprehensive understanding
- Aligning AI with diverse human values and ethical principles so that it respects and reflects a broad spectrum of perspectives, with ethical considerations integrated throughout the development process
- Developing robust frameworks to ensure the safety, reliability, and controllability of increasingly autonomous AI models
We are also actively exploring additional Societal AI research directions, including human-AI collaboration, AI interpretability, AI’s societal impact, and personalized AI.