Diana Award Youth Board meeting with Facebook

On 30th July, The Diana Award took five of its Youth Board members, Elsa, Jude, Leo, Louisa and Rohin, to Facebook’s London offices to chat to the social media company’s UK Policy Manager, Richard Earley. Over two hours, these five teens set out to understand more about online hate speech: how Facebook defines it, where it draws the line between controversial opinion and hate speech, and how it moderates hate speech across the hugely varied geographies where it operates.

The first part of the roundtable was a Q&A, where Youth Board Members asked Richard a series of questions and he responded. The second part took the form of an open conversation, where Elsa, Jude, Leo, Louisa and Rohin shared their own views about online hate speech and how it might be addressed on a platform like Facebook.

Part 1: Q&A between Youth Board Members and Richard

How does Facebook define “hate speech”?

Richard began by explaining Facebook’s definition of hate speech, which can be found in its Terms & Conditions:

“Any attack on a person based on their belonging to one of the protected characteristics… things like race, gender, sexual orientation, gender identity, national origin…”

By “attack”, Facebook means:

“Any statement which is violent or dehumanizing, which calls for exclusion, or which is a statement of inferiority.”

How does Facebook moderate hate speech?

Richard explained that any user who sees content on Facebook that they suspect is hate speech can report it by clicking the arrow or the three dots next to it. Once content is reported, it is reviewed by members of Facebook’s Community Operations team, who are based in offices across the world and work 24 hours a day, 7 days a week. Moderators look at the content and judge whether it breaks Facebook’s rules around hate speech; if they think it does, the content is removed. Due to the huge number of reports received (Facebook has around 2 billion active users each month), reports are triaged so that those which appear to carry the greatest risk of real-world harm (such as risk of suicide or self-harm) are dealt with more quickly.

But Facebook is also increasing its use of Artificial Intelligence to identify suspected cases of hate speech before they’re reported by users, sending those cases to its Community Operations team for review. The percentage of hate speech that Facebook was able to remove from its platform using Artificial Intelligence, before anyone reported it, rose from about 25% in the first 3 months of 2018 to about 65% in the first 3 months of 2019.

Identifying hate speech through Artificial Intelligence is challenging. Hate speech can take diverse forms (text, slang, images), language and its meaning change rapidly, and context matters: words that seem harmless in one country or community can be hateful in another. This is why, Richard explained, Facebook drafts its guidelines in partnership with subject experts who also have in-depth country knowledge, and why it has to constantly adapt its systems and be sensitive and responsive to the feedback it receives.

And whether hate speech is moderated by technology or by human beings, there will always be the challenge that Facebook operates with global standards, whereas views about what is acceptable, as well as laws on illegal hate speech, vary considerably across different parts of the world. Richard acknowledged this challenge, but explained that, because of the consultation-based way they are designed, he believes Facebook’s standards on illegal hate speech are higher than those found in many countries’ laws.

He also described Facebook’s plans to set up an independent ‘Oversight Board’ that will review Facebook’s decisions on which content it removes and which it leaves on the platform, and issue “binding rulings” on Facebook – the idea being that this will hold Facebook to account for making decisions in line with its Terms & Conditions.

What happens to users who keep breaking the rules?

If something you’ve posted on Facebook is removed for breaking Facebook’s rules, you’ll be notified through a pop-up. If someone continues to break the rules, they can be given temporary bans or stopped from using specific features on Facebook – and in the most serious cases, they can be permanently banned. Richard described this as a way of giving users (especially those who may have broken the rules by accident) a chance to change their behaviour. However, he explained that users who break Facebook’s rules around “really serious, real-world harms such as child exploitation or terrorism” are permanently banned right away.

What’s the scale of the online hate speech problem on Facebook?

In the first 3 months of 2019, Facebook removed 4 million pieces of content from the platform for breaking its rules around hate speech. However, Richard said that number doesn’t give enough of an understanding of the scale of the problem. Facebook would like to find out what percentage of the content that people see on the platform is hate speech: that’s something it already reports for other forms of offensive content and is working towards providing for hate speech.

Part 2: Open discussion between Youth Board Members and Richard

How does Facebook draw the line between controversial opinion and hate speech?

Richard described this as an ongoing challenge for Facebook, and talked about the ways in which the company has developed global partnerships with experts – charities, non-governmental organisations and academics – who advise it on what should and shouldn’t be allowed to appear on the platform. He noted the distinction Facebook makes between criticism of issues and institutions on the one hand, and attacks on individuals who may identify with those issues or institutions on the other:

“One example of where we’ve drawn a line is that we allow people on Facebook to engage in debate and discussion around issues like politics and religion – so we allow people to criticise political parties or religious institutions - but we draw the line at people attacking individuals that are members of those religions or those political parties.”

Richard, Facebook Policy team

Jude questioned whether it should, in fact, be left to a private company like Facebook to draw the line between opinion and hate speech:

“Should Facebook draw the line?... You’re just a company at the end of the day… do you get to decide what’s right and what’s wrong?”

Jude, Youth Board member

At this point, Leo jumped in to say that yes, it should be up to companies like Facebook to make sure that this kind of harmful content isn’t appearing, “… because it is their platform”.

Rohin challenged this argument, drawing attention to the significant role that Facebook plays in people’s lives, and asking whether this calls for government intervention to complement Facebook’s internal policy initiatives:

“How much can government intervention play a part in this business and should it, I think that raises a really great question because obviously you own the platform… it’s a business, it makes money… but at the same time it’s such a huge part of our youth’s lives all over the world – should we give government a mandate to put in things that can protect us?”

Rohin, Youth Board member

What, if anything, is unique about online hate speech, as compared to hate speech that occurs in offline environments?

Jude talked about the ways that the online world may make hate speech worse:

“I think hate speech has probably been around in some form or another forever – but I think the internet magnifies it enormously… because you’re shouting into an open space where there’s no one else really there… it’s just, you know, profile pictures who aren’t real people to you… you feel like you could get away with saying more… so… I think it makes it more extreme”

Jude, Youth Board member

Rohin chimed in to describe the “chain effect” created by the larger audiences found online.

At this point, Leo, whilst expressing general agreement, cautioned that there’s a need to remember the ways that online and offline hate are interlinked:

“I agree with you… but… it’s important to acknowledge the fact that this isn’t just a virtual issue… it’s something that needs to be taken into consideration in all aspects of life, because it happens everywhere”

Leo, Youth Board member

Richard drew attention to Facebook’s responsibility to also address online hate in cases where these interlinkages occur:

“Hate organisations… groups that in the real world go around trying to spread hateful messages… so we work really hard to try to identify those groups and where we do, we don’t allow them on our platform”

Richard, Facebook Policy team

What can users do when they see hate speech on Facebook? Is it only the victim of hate speech that should report?

Richard emphasised that everyone should report content they think might be hate speech – even if they aren’t sure it is. Reported content won’t be automatically taken down; it will be reviewed. All reports are anonymous, so those reporting don’t need to worry about facing any backlash themselves – a fear which Rohin said he thought prevented people from reporting content. Anyone who has reported a piece of content will be updated about the case through their private ‘support inbox’, which is separate from their regular messaging inbox.

Richard explained that in most cases it doesn’t matter who does the reporting – whether it’s the victim of hate speech or someone who just witnessed it – the case will be treated the same. In a few cases, such as bullying, the identity of the person making the report is factored into decisions, because a report from the victim makes it clearer that the content was inappropriate.

The discussion then turned to the level of youth awareness around Facebook’s guidelines and procedures.

“Making people really aware of what your standards and guidelines are is really important because, I know people can vaguely apply their rules from the real world onto Facebook but… it is different… and I think Facebook could be doing more to tell people what they consider to be acceptable and unacceptable”

Jude, Youth Board Member

Richard pointed out that Facebook made their guidelines public years ago to promote transparency. Whenever you report something on Facebook, or if something you have posted is removed, you’ll be updated about your case through your ‘support inbox’, and the update will also include a link to Facebook’s guidelines. That being said, Richard acknowledged that there’s more Facebook could be doing to make their guidelines more widely accessible.

Leo suggested that Facebook should consider using advertising space to communicate bits of their guidelines as snapshots or reminders. And Louisa supported this idea, adding that it could take the form of a “monthly check-up”, where users are updated on what Facebook’s standards are and how users can report content they find unacceptable.

“The ads you want to see are the ads that will be helpful to you”

Louisa, Youth Board Member

This roundtable was part of our efforts to increase youth awareness about online hate speech and some of the key policy and industry discussions and decision-making processes taking place to address it.

We’re grateful to Youth Board Members Elsa, Jude, Leo, Louisa and Rohin for their enthusiastic and thoughtful questions and insights – and to Richard and Facebook for the time they took to be with us.

In October, two of the Youth Board Members who sat on the roundtable will be chairing a panel at the final SELMA conference. Join us!

Watch recordings of the roundtable on YouTube.
