
Research

Samantha Bradshaw, An investigation of social media labeling decisions preceding the 2020 U.S. election

Since it is difficult to determine whether social media content moderators have assessed particular content, it is hard to evaluate the consistency of their decisions within platforms. To investigate this question, SIS Professor Samantha Bradshaw and her Stanford co-authors Shelby Grossman and Miles McCain study a dataset of 1,035 posts on Facebook and Twitter that made 78 misleading claims related to the 2020 U.S. presidential election. These posts were identified by the Election Integrity Partnership, a coalition of civil society groups, and sent to the relevant platforms, where employees confirmed receipt.

The platforms labeled some (but not all) of these posts as misleading. For 69% of the misleading claims, Facebook consistently labeled each post that included one of those claims—either always or never adding a label. It inconsistently labeled the remaining 31% of misleading claims. The findings for Twitter are nearly identical: 70% of the claims were labeled consistently, and 30% inconsistently. Bradshaw and her co-authors investigated these inconsistencies and found that, based on publicly available information, most of the platforms' decisions appeared arbitrary. However, in about a third of the cases they found plausible reasons that could explain the inconsistent labeling, although these reasons may not align with the platforms' stated policies. The strongest finding is that Twitter was more likely to label posts from verified users, and less likely to label identical content from non-verified users.

This study demonstrates how academic–industry collaborations can provide insights into typically opaque content moderation practices.

For more, read the full article.