Webinar on “Moderating the Global Social Media Information Ecosystem (Session 1)”

In this webinar, Mr Jean-Jacques Sahel from Google, Ms Monrawee Ampolpittayanant (Lynn) from Twitter and Ms Melissa Chin from Meta explore the issue of content moderation on social media platforms and how policies are formulated and implemented to manage sensitive and controversial topics while balancing different moral, ethical, and cultural standards.

MEDIA, TECHNOLOGY AND SOCIETY PROGRAMME WEBINAR SERIES
Balancing the Free Speech Tightrope: Moderating Social Media in Southeast Asia

Tuesday, 29 March 2022 – The ISEAS – Yusof Ishak Institute held a webinar titled "Moderating the Global Social Media Information Ecosystem", featuring presentations by industry experts from Google, Twitter, and Meta, and moderated by Dr Pauline Leong (Visiting Fellow, ISEAS – Yusof Ishak Institute).

Moderator Dr Pauline Leong interacted with speakers Mr Jean-Jacques Sahel (Google), Ms Monrawee Ampolpittayanant (Lynn – Twitter) and Ms Melissa Chin (Meta) during the webinar, which was attended by over 180 participants. (Credit: ISEAS – Yusof Ishak Institute)
Mr Sahel introduced his presentation titled “Approaching information quality and content moderation”. (Credit: ISEAS – Yusof Ishak Institute)

Mr Jean-Jacques Sahel, Google’s Head of Information Policy, Asia-Pacific, began with his presentation, titled “Approaching information quality and content moderation”. Mr Sahel described Google’s approach to maintaining information quality on its platforms, such as Google and YouTube. He then detailed some of the practical initiatives and multi-stakeholder solutions that have emerged as successful models for moderating content on these platforms, and touched upon the kind of public policy framework that could build on these initiatives to mitigate the problem effectively.

Mr Sahel shared the four levers Google uses to support information quality and moderate content: remove, raise, reduce, and reward. First, content that violates laws or policies is removed. Second, Google has in place systems trained to determine whether a source is authoritative and reliable, thereby raising high-quality, trustworthy information to greater prominence. Third, Google limits the spread of potentially harmful information by reducing recommendations of borderline content or videos that could misinform or harm users. Lastly, it monetarily rewards advertisers, publishers, and creators who deliver high-quality, reliable content.

Mr Sahel also discussed how Google uses a mix of user feedback, machine learning and human reviewers to enforce its content moderation policies, the details of which are available on Google’s transparency website (https://transparencyreport.google.com/?hl=en). He stated Google’s commitment to human rights standards in accordance with the United Nations Universal Declaration of Human Rights, as well as those established in the United Nations Guiding Principles on Business and Human Rights (UNGPs) and the Global Network Initiative Principles (GNI Principles).

Lastly, Mr Sahel shared how Google supports digital and media literacy programmes that empower users to make informed decisions about the kind of content they want to consume. He concluded by emphasising a holistic, comprehensive, and evidence-based public policy approach to content moderation. In his opinion, such efforts require the integration of efforts and cooperation among multiple stakeholders at the national and international levels, in addition to technical solutions.

Ms Melissa Chin’s presentation covered three different aspects – community standards, policies and enforcement. (Credit: ISEAS – Yusof Ishak Institute)

Ms Chin, a leader in Meta’s Asia-Pacific Content Policy team, first highlighted the company’s community standards, which are publicly accessible online. She outlined how these standards are developed and refined on the basis of the values of authenticity, safety, privacy, and dignity. While Meta recognises the role of its platforms in allowing users to express themselves freely, it does not permit harmful and objectionable content such as violence, hate speech, cyber-bullying, and harassment, among others. The community standards are constantly evolving, and Meta consults notable experts, including specialists on hate speech and human rights as well as legal advisors.

Meta’s enforcement of its community standards similarly relies on user reports, artificial intelligence, and human reviewers to filter potentially harmful content. All content on Facebook and Instagram can be reported, and a post will be taken down if it violates the community standards. Ms Chin also shared the Community Standards Enforcement Report, published quarterly on Meta’s website as part of its commitment to transparency; it details Meta’s performance in enforcing its policies on Facebook and Instagram. Lastly, Ms Chin highlighted the Oversight Board, an independent body of experts and civic leaders with diverse professional backgrounds, cultures, opinions, and beliefs. Users who disagree with decisions made by Facebook and Instagram can appeal to the Board, whose members are independent of Meta.

Ms Lynn Ampolpittayanant shared the agenda for her presentation. (Credit: ISEAS – Yusof Ishak Institute)

Rounding out the panel, Ms Lynn Ampolpittayanant, Twitter’s Head of Public Policy, Government and Philanthropy, Southeast Asia, discussed the company’s approach to moderating content across the region. She first gave an overview of social media regulatory issues across Southeast Asia, from election misinformation in the Philippines to lèse-majesté and other censorship mechanisms in Thailand. Ms Ampolpittayanant also explained the concept of the Open Internet, under which there should be equal access to all websites, content, and applications without deliberate interference. Citing Twitter’s paper “Protecting the Open Internet – Regulatory Principles for Policy Makers”, she highlighted several guiding principles, such as the protection of human rights, trust, choice and control over algorithms, and unhindered innovation, while noting that content moderation is complex. Ms Ampolpittayanant shared that Twitter prioritises its misinformation policies by potential for harm, focusing first on the COVID-19 pandemic, followed by civic integrity and, lastly, synthetic and manipulated media.

Ms Ampolpittayanant also described the different methods Twitter employs to combat misinformation, such as Birdwatch, a pilot community-driven approach that allows people to flag information in tweets they believe to be misleading and to write notes that provide context. Twitter similarly uses a combination of technology and specially trained human reviewers to respond to reports 24/7 in multiple languages. In the Philippines, it has worked with the Commission on Elections to launch customised emojis for election-related discussions, added search prompts to facilitate access to credible election information, and activated civic integrity labels and warnings on misleading tweets. Twitter is also redesigning its misinformation labels to give users more context and improve their understanding of why a tweet may be misleading.

The question-and-answer segment saw questions on how the companies deal with alternative and fringe views, as well as with bots and fake accounts.