What Platforms Should Do about Misogyny and Harassment

In Mediating Misogyny: Gender, Technology, and Harassment (Jacqueline Vickery and Tracy Everbach, eds.)

Published by Palgrave Macmillan | 2018

It feels a bit futile to hypothesize about what social media platforms should do about misogyny and harassment, or what more they should do. Facebook, Twitter, and the like have each developed a massive socio-technical apparatus for content moderation: sprawling logistical arrangements in which employees set the rules at the platform, contract workers do piecemeal policing all over the globe, users are enlisted to flag violations, and software systems are trained to identify violations automatically. Lashed together, these Rube Goldberg moderation machines chug along, turning complaints into data into decisions. Chug, chug, chug. Some instances of harassment get identified and addressed; others are overlooked or misapprehended. Much of the conflict on these platforms passes right through this system, and plenty never finds its way into it at all. A one percent error rate is considered the gold standard, meaning that even a near-perfect system fails millions of users. A user being harassed can either submit to this impersonal megasystem and hope for some relief to pop out the other end, or skip it and fend for themselves.