The explosion of information from a variety of sources, together with cutting-edge generation techniques such as deepfakes, has made it increasingly important to check the credibility and reliability of data. Large volumes of data generated through diverse information channels such as social media, online news outlets, and crowd-sourcing contribute valuable knowledge; however, they also raise additional challenges in ascertaining the credibility of user-generated and machine-generated information.
- Given diverse information about an object (e.g., a natural-language claim, an entity, structured triples, and social-network context) from heterogeneous and multi-modal sources, how do we identify high-quality, trustworthy information and information sources?
- How can we generate human-interpretable explanations for a model's verdict?
- How can we design robust detection mechanisms for fake information (e.g., reviews and news) that withstand adversarial generation strategies, given that spammers and content generators co-evolve with increasingly advanced detectors?
To answer these questions, this project pursues several big ideas: resolving conflicts among sources, fact-checking and ascertaining the credibility of claims, explaining the predictions of deepfake detectors, developing robust adversarial mechanisms for detecting, safeguarding against, and understanding the manipulation of fake content, and making detection algorithms fair and unbiased toward the participants involved. These ideas span heterogeneous and multi-modal sources of information, including text, images, videos, relational data, social networks, and knowledge graphs.
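A core building block implied by the first question and by conflict resolution is truth discovery: jointly estimating the true value for each object and the reliability of each source from conflicting claims. The sketch below is a minimal, illustrative version of the standard iterative idea (weighted voting alternated with reliability re-estimation), assuming simple categorical claims and a hypothetical toy dataset; it is not the project's actual method, and the source names and update rule are chosen only for clarity.

```python
from collections import defaultdict

def truth_discovery(claims, iterations=10, eps=1e-8):
    """claims: list of (source, object, value) triples; different sources may
    assert conflicting values for the same object."""
    sources = {s for s, _, _ in claims}
    weights = {s: 1.0 for s in sources}  # start by trusting all sources equally

    truths = {}
    for _ in range(iterations):
        # 1) Estimate each object's value by a reliability-weighted vote.
        votes = defaultdict(lambda: defaultdict(float))
        for s, o, v in claims:
            votes[o][v] += weights[s]
        truths = {o: max(vals, key=vals.get) for o, vals in votes.items()}

        # 2) Re-weight each source by how often it agrees with the current estimates.
        agree, total = defaultdict(float), defaultdict(float)
        for s, o, v in claims:
            total[s] += 1.0
            if truths[o] == v:
                agree[s] += 1.0
        weights = {s: (agree[s] + eps) / (total[s] + eps) for s in sources}

    return truths, weights

# Hypothetical toy example: three sources make conflicting claims about two objects.
claims = [
    ("src_A", "capital_of_X", "CityFoo"),
    ("src_B", "capital_of_X", "CityFoo"),
    ("src_C", "capital_of_X", "CityBar"),
    ("src_A", "population_of_X", "1M"),
    ("src_C", "population_of_X", "9M"),
]
truths, weights = truth_discovery(claims)
print(truths)   # estimated value per object
print(weights)  # learned reliability per source
```

The key design choice this illustrates is that source reliability is not assumed in advance but inferred jointly with the truths, so a source that repeatedly contradicts the consensus loses influence over subsequent estimates.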