Content Moderation Catch-22: Walking the Regulatory Tightrope
Viewpoint

9 Dec 2022
by Manu Aggarwal, Abhijnan Dasgupta, Darshita Lohiya, Aseem Nousher

Social platforms are dealing with an explosion of digital content, generated in multiple formats and languages across numerous channels. This content needs to be moderated to ensure user safety, establish liability for harmful content, protect user data privacy, support workforce wellbeing, and safeguard children from online misconduct. In recent years, international and local regulators have enacted many laws to this end.

Enterprises and providers need to stay updated on these laws and proactively orchestrate policies and standards that conform to local requirements for content moderation, data privacy, and other areas. Moreover, as demand for content moderation grows with the localization, proliferation, and consumption of content, organizations need to source the right moderation talent and comply with local and international labor well-being laws.

In this research, we recommend that enterprises proactively track the evolving regulatory landscape, collectively brainstorm best practices, leverage technologies such as Artificial Intelligence (AI) and Natural Language Processing (NLP), and explore low-cost delivery models to overcome the challenges of the content moderation regulatory landscape.

Scope

Industry: trust and safety

Geography: global

Contents

In this report, we:

  • Examine the content moderation regulatory landscape
  • Describe the challenges for enterprises in a strict regulatory landscape and best practices for risk mitigation
  • Provide a brief overview of the metaverse and its implications for the content moderation regulatory landscape

Membership(s)

Trust and Safety

Sourcing and Vendor Management


Page Count: 19