Facebook does want new Internet rules - opinion

Contrary to claims that have been made about Facebook recently, we’ve always had the commercial incentive to remove harmful content from our sites.

FACEBOOK HAS almost halved the amount of hate speech people see on the social networking site over the last three quarters, down to 0.05% of content views, according to the author. (photo credit: DADO RUVIC/REUTERS)

The Internet has transformed the world over the last two decades, but it has also introduced new challenges, and legislation has not kept up. In the coming weeks, the Justice Ministry will discuss new rules for harmful content online with both tech companies and some of their staunchest critics. While there will no doubt be differing views, we should all agree on one thing: the tech industry needs regulation.

At Facebook, we’ve advocated for democratic governments to set new rules for the Internet in areas like harmful content, privacy, data and elections, because we believe that businesses like ours should not be making these decisions on their own.

We welcome the committee formed by the Justice Ministry, which aims to look into these same issues – adapting the legal framework to fast-developing technologies. While we might not agree with all the details, we are pleased that this process has begun here in Israel. As Israel works to maintain its reputation as the Start-Up Nation, we hope the regulations formed here will set clear rules while leaving room for innovation, so that Israel stays at the forefront of developing and creating the technology of the future.

Over the past five years I have been part of many policy developments at Facebook, including our policies banning Holocaust denial and the hateful stereotype of Jews running the world. These policies and others seek to protect people from harm while also protecting freedom of expression.

Our team includes former prosecutors, law enforcement officers, counter-terrorism specialists, teachers and child safety advocates, and we work with hundreds of independent experts around the world to help us get the balance right. While people often disagree about exactly where to draw the line, government regulation can establish standards all companies should meet.

A 3D printed Facebook logo is pictured on a keyboard in front of binary code in this illustration taken September 24, 2021. (credit: REUTERS/DADO RUVIC/ILLUSTRATION)

Companies should also be judged on how their rules are enforced. Three years ago, we began publishing figures on how we deal with harmful content, including how much of it people actually see and how much we take down. Like the financial results we are held accountable for, these reports are now published every quarter, and we are subjecting them to independent audit.

Contrary to claims that have been made about Facebook recently, we’ve always had a commercial incentive to remove harmful content from our sites. People don’t want to see it when they use our apps, and advertisers don’t want their ads next to it. That’s why we’ve invested $13 billion in safety and security since 2016 and have more than 40,000 people working in this area.

As a result, we’ve almost halved the amount of hate speech people see on Facebook over the last three quarters – down to 0.05% of content views, or around five views in every 10,000.

We’ve also gotten better at detecting it. Of the hate speech we removed, we found 97% before anyone reported it to us – up from just 23% a few years ago. While we have further to go, these enforcement reports show that we are making progress.

We also make progress by conducting research, including taking part in more than 400 peer-reviewed studies in the past year. This helps us make our apps better for the people who use them. Contrary to recent claims, our research does not conclude that Instagram is inherently bad for teenagers. While some teens told us Instagram made them feel worse when they were struggling with issues like loneliness, anxiety and sadness, more teens told us that Instagram made them feel better when experiencing these same issues. But if even one young person feels worse, that’s one too many, so we use our research to understand bad experiences and prevent them.

For example, we now make new accounts private or “friends only” by default for users under 18, and we direct anyone searching for topics like eating disorders to local organizations that can offer support. We’ve also introduced new tools to prevent abusive messages from reaching people, and we are building new features like “take a break,” which will encourage people to limit the time they spend on our apps.

We know there is more to do and we’ll keep making improvements. With new rules for the whole industry we can make the Internet safer while keeping the vast social and economic benefits it brings.

The writer is public policy director for Israel and the Jewish Diaspora at Facebook.