Social Data: Biases, Methodological Pitfalls, and Ethical Boundaries
- Alexandra Olteanu
- Carlos Castillo
- Fernando Diaz
- Emre Kiciman
Social data in digital form, including user-generated content, expressed or implicit relationships between people, and behavioral traces, are at the core of many popular applications and platforms and drive the research agendas of many researchers. The promises of social data are many: understanding "what the world thinks" about a social issue, brand, product, celebrity, or other entity, and enabling better decision making in fields such as public policy, healthcare, and economics. Yet many academics and practitioners have warned against naïve use of social data. There are biases and inaccuracies both at the source of the data and introduced during processing, as well as methodological limitations and pitfalls, ethical boundaries, and unexpected consequences that are often overlooked. Recognizing that the rigor with which different researchers address these issues varies widely, this survey presents a framework for identifying a broad range of menaces in the research and practices around social data.
Failures of imagination: Discovering and measuring harms in language technologies
Auditing natural language processing (NLP) systems for computational harms remains an elusive goal. Doing so is critical, however, given the proliferation of language technologies and applications enabled by increasingly powerful natural language generation and representation models. Computational harms arise not only from what content people produce, but also from how content is embedded, represented, and generated by large-scale, sophisticated language models. This webinar covers the challenges of locating and measuring the potential harms that language technologies, and the data they ingest or generate, might surface, exacerbate, or cause. Such harms range from more overt issues, such as surfacing offensive speech or reinforcing stereotypes, to subtler ones, such as nudging users toward undesirable patterns of behavior or triggering…