Hateful content online is a concern for social media platforms, policymakers, and the public. This has led high-profile content platforms, such as Facebook, to adopt algorithmic content-moderation systems; however, the impact of algorithmic moderation on user perceptions is unclear. We experimentally test the extent to which the type of content being removed (profanity vs hate speech) and the explanation given for its removal (no explanation vs link to community guidelines vs specific explanation) influence user perceptions of human and algorithmic moderators. Our preregistered study encompasses representative samples (N = 2870) from the United States, the Netherlands, and Portugal. Contrary to expectations, our findings suggest that algorithmic moderation is perceived as more transparent than human moderation, especially when no explanation is given for content removal. In addition, sending users to community guidelines for further information on content deletion has negative effects on outcome fairness and trust.

Considerable progress has been achieved in developing automated ways of detecting hateful content (e.g. Djuric et al., 2015; Zhang et al., 2018) and profanity (Gillespie, 2018). We study content moderation from the perspective of a bystander, because the technical progress made toward automated content moderation may be meaningless if the user community questions the justice and fairness of moderation. For instance, a study of Wikipedia (Halfaker et al., 2012) concludes that algorithmic moderation tools are one of the likely causes of a reduction in participation on the platform, and research on Reddit shows that moderators fall short when implementing transparency and accountability principles (Juneja et al., 2020). However, qualitative research (Myers West, 2018; Suzor et al., 2019) shows that even if moderation algorithms are developed, their implementation may be compromised by the way content removal decisions are communicated. Besides academic studies, scrutiny of algorithmic decision-making has also entered the public agenda through documentaries such as Coded Bias (Kantayya, 2020).