Meta’s Oversight Board is tackling a case focused on Meta’s ability to permanently disable user accounts. A permanent ban is a drastic measure, locking people out of their profiles, memories, and friend connections, and, in the case of creators and businesses, cutting off their ability to market to and communicate with fans and customers.
The organization notes that this is the first time in its five-year history as Meta’s policy advisor that it has taken up the question of permanent account bans.
The case under review isn’t that of an everyday user. It involves a high-profile Instagram user who repeatedly violated Meta’s Community Standards by posting visual threats of violence against a female journalist, anti-gay slurs aimed at politicians, content depicting a sex act, allegations of misconduct against minorities, and more. The account hadn’t accumulated enough strikes to be automatically disabled, but Meta decided to permanently ban it anyway.
The Board’s materials didn’t name the account in question, but its recommendations could affect others who target public figures with abuse, harassment, and threats, as well as users whose accounts are permanently banned without a transparent explanation.
Meta itself referred the case, which covers five posts made in the year before the account was permanently disabled, to the Board. The tech giant says it’s looking for input on several key issues: how permanent bans can be handled fairly, how effective its current tools are at protecting public figures and journalists from repeated abuse and threats of violence, the challenges of identifying off-platform content, whether punitive measures actually shape online behavior, and best practices for transparent reporting on account enforcement decisions.
The decision to review the particulars of this case comes after a year in which users have complained of mass bans with little explanation of what they did wrong. The issue has affected Facebook Groups as well as individual account holders, many of whom believe automated moderation tools are to blame. Those who have been banned have also complained that Meta’s paid support offering, Meta Verified, has been useless in resolving these situations.
Whether the Oversight Board has any real sway over how Meta runs its platforms continues to be debated, of course.
The Board has limited power to enact change at the social networking giant: it can’t force Meta to make broader policy changes or address systemic issues. Notably, the Board isn’t consulted when CEO Mark Zuckerberg decides to make sweeping changes to the company’s policies, like its decision last year to relax hate speech restrictions. The Board can make recommendations and can overturn specific content moderation decisions, but it is often slow to render a decision, and it takes on relatively few cases compared with the millions of moderation decisions Meta makes across its user base.
According to a report released in December, Meta has implemented 75% of the more than 300 recommendations the Board has issued, and it has consistently complied with the Board’s rulings on individual content moderation decisions. Meta also recently asked the Board to weigh in on its implementation of its crowdsourced fact-checking feature, Community Notes.
After the Oversight Board issues its policy recommendations to Meta, the company has 60 days to respond. The Board is also soliciting public comments on this topic, but these cannot be anonymous.