Facebook says it is “horrified” at the continued online abuse of footballers and has announced what it says are tougher measures to tackle the issue.
The social media company is changing the rules governing direct messaging on Instagram, a platform it also owns.
It will disable the accounts of those found to have repeatedly sent abusive private messages on Instagram.
UK head of content policy Fadzai Madzingira said it was “saddening” to see continued abuse on the platforms.
Madzingira told BBC Sport: “I’m horrified at the type of abuse that people, especially these footballers, have to deal with on the basis of who they are, whether it’s their race or their religion or their gender, and as a company, we’re disappointed to see that sort of behaviour that plays out offline also playing out on our platform.
“It’s why we’re making the announcements today about taking tougher measures to go after accounts that are violating our community standards and our goals within Instagram direct messages.”
A number of Premier League footballers – including Manchester United’s Marcus Rashford and Axel Tuanzebe, West Brom’s Romaine Sawyers, Chelsea full-back Reece James and his sister, Manchester United forward Lauren James – have been subjected to abusive online messages in recent weeks.
The Football Association has called for action from the government, which has since stated social media companies could face “large fines” potentially amounting to “billions of pounds” if they fail to tackle abuse on their platforms.
Asked by BBC sports editor Dan Roan whether Facebook was enabling such abuse, Madzingira said: “No, to imply that it’s about enabling I think would be inaccurate.
“I think that platforms like ours allow communities of people to connect on the things that they love. If we need to have a conversation about hate, it really needs to be not what is just happening on the platform.”
What will change on Instagram?
Footballers past and present have called for users of social media platforms to be required to provide verification documents when creating accounts so that they can be traced more effectively if they breach rules.
Facebook says this measure would prove challenging in communities where such documents are not readily available.
“If we were to insist on using government ID or passport details we would be barring access to the very people who use our platforms to build communities, so we are very conscious we allow for that access,” added Madzingira.
The company claims it “took action” on 6.5 million pieces of hate speech on Instagram between July and September last year, including within direct messages which are harder to police because of privacy rules.
“To date, if someone violated the rules in Instagram direct messages, we would set a specific ban or a block for a certain amount of time and extend that period, should they continue to violate,” Madzingira explained.
“Today we’re announcing that we will now be removing those accounts, should they continue to violate within Instagram direct messaging.”
Facebook said it would not spell out how many offences would trigger removal as offenders could use the information to “game the system”.
‘We are a small part in fixing this’
Facebook said it was “doing everything we can to fight hate and racism on our platform” but added the “problems are bigger than us”.
Some users have sought a ban on specific emojis commonly used in racist messages, but Madzingira argued against banning symbols that could be used innocently in other contexts.
She also said filters could be used to prevent others from leaving offensive comments on posts, and that work was also being done to prevent banned users from opening new accounts.
Asked whether Facebook has prioritised profits over clamping down on abuse, Madzingira said: “I think that would be an inaccurate assessment.
“Because if people didn’t feel safe on the platform they wouldn’t be able to be there.
“We accept that being able to deal with this issue is everyone’s responsibility and we want to play our part. The frustration that these players have is right – it is horrifying the abuse they are receiving.”
‘The police need more help from social media platforms’
Bristol Rovers full-back Mark Little – the target of a racist message on social media that is being investigated by police – said he was surprised that those in charge of the platforms were only reacting now.
He said: “I welcome that they’re making a change but it’s quite confusing for me, as what they have announced is what I assumed was happening before.
“They’ve jumped to a standard rather than getting anywhere near what I would think would be acceptable for what is going on.”
Little, 32, added that “a big corporation like Facebook should be able to set the precedent for what is going on in wider society” and that “the police need more help from the social media platforms”.
He added: “I don’t think it would be that difficult to identify the people who are doing this.
“Everyone should have some form of identification to use these platforms and I think that would eradicate a very large portion of the abuse.”
‘More has to be done to stamp out abuse’
Culture Secretary Oliver Dowden welcomed the tougher measures but insisted “racist abuse is still a fact of life for too many people and more has to be done across the board to stamp it out”.
He added: “For too long, the world’s most popular and powerful social media companies have failed to tackle the stream of horrific racist attacks on their platforms.
“We’re introducing a new age of accountability for these companies through our upcoming Online Safety Bill and this could see huge fines for firms which fail to clearly and transparently protect their users.”