Social Media Giants Are Rolling Out Anti-Abuse Features Across Platforms
Amid rising racial attacks, social media giants are introducing anti-abuse features to safeguard users from inflammatory comments and messages
Following England’s loss in the Euro 2020 Final at Wembley Stadium, three of the team’s Black players found themselves at the receiving end of a barrage of racial attacks on social media.
Within hours of the loss, hundreds of real and sock-puppet accounts flooded the players’ social accounts with racial slurs. To circumvent the filters already in place to prevent racial abuse, the attackers left slurs in indirect, innuendo-based forms—including monkey and banana emojis.
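To illustrate why keyword filters alone fall short, here is a minimal, hypothetical sketch (the term lists and function name are illustrative, not any platform’s actual system) of a blocklist filter extended to flag abusive emoji as well as words. Real moderation systems must weigh context—these emojis are harmless in most settings—which is part of what makes the problem hard.

```python
# Hypothetical sketch: a naive keyword blocklist misses emoji-based abuse,
# so the blocklist must cover emoji as well as words. Placeholder terms only.
BLOCKED_TERMS = {"exampleslur1", "exampleslur2"}      # stand-ins for real slurs
BLOCKED_EMOJI = {"\U0001F412", "\U0001F34C"}           # monkey, banana

def is_abusive(comment: str) -> bool:
    """Return True if the comment contains a blocked word or emoji."""
    lowered = comment.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return True
    # Second pass: catch innuendo-based abuse the word filter misses.
    # A production system would also consider context, not bare presence.
    return any(char in BLOCKED_EMOJI for char in comment)
```

Even this two-pass check is easy to evade (misspellings, novel emoji combinations), which is why platforms keep layering on new tools rather than relying on static lists.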
The atmosphere grew so vitriolic that the English football governing body had to make a statement condemning the abuse of its players.
“We’re disgusted that some of our squad—who have given everything for the shirt this summer—have been subjected to discriminatory abuse online after tonight’s game,” tweeted the England Football Association.
While the social media giants stated that they acted quickly to take down the accounts contributing to the racial barrage, the damage control was too little, too late. Time magazine reported that, according to many users, racist comments stayed up for hours after they were posted and, in some cases, were never taken down at all.
In response to the rising vitriol, Instagram is introducing features to curb abuse via unsolicited comments and messages. The Facebook-owned photo- and video-sharing platform is testing a new anti-harassment tool called ‘Limits’, which lets users temporarily lock down their accounts when they’re being targeted.
Microblogging network Twitter is also eyeing new features to safeguard its users from abuse. Twitter privacy designer Dominic Camozzi said in a series of tweets last month that the platform is experimenting with an ‘un-mention’ feature, giving users greater control over who gets to tag them in tweets.
The platform is also considering filters that would block certain accounts from mentioning a user indefinitely. If implemented, this feature would not only bolster user privacy but also free people from the endless mention-and-reply threads that a controversial tweet often spawns.
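A mention filter like the one described could work roughly as follows. This is a speculative sketch, not Twitter’s actual design—the class and method names are invented—showing the core idea: notifications fire only when the mentioning account is not on the user’s blocklist.

```python
# Hypothetical sketch of a per-user mention filter: accounts on the
# blocklist can no longer notify the user by tagging their handle.
class MentionFilter:
    def __init__(self, handle: str):
        self.handle = handle.lower()
        self.blocked: set[str] = set()

    def block(self, account: str) -> None:
        """Permanently stop this account's mentions from notifying the user."""
        self.blocked.add(account.lower())

    def should_notify(self, author: str, tweet: str) -> bool:
        """Notify only if the user is mentioned and the author isn't blocked."""
        mentioned = f"@{self.handle}" in tweet.lower()
        return mentioned and author.lower() not in self.blocked
```

The key design choice is that the tweet itself is untouched—the mention simply stops reaching the targeted user, quietly breaking the abuse loop.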
As internet penetration grows around the world, more people are online than ever before. With user numbers rising daily, social media platforms frequently become hosts to malicious cyber activity, including cyberbullying.
And it’s not just the famous who are targeted; the general public is not exempt from online abuse either. A 2019 survey of middle- and high-school students by Cyberbullying.org found that almost 40% of young people between the ages of 12 and 17 have been bullied online, and 30% of those have had it happen more than once.
With traffic on social networking platforms growing each day, some form of bullying and harassment becomes all but inevitable. So, what can be done?
A lot, with a little. Social media giants must continually evolve and must be held accountable for what happens in their cyberspace. Every report must be reviewed and acted upon, and rules must be stringently enforced. The social networks must do better, and so must we—watching ourselves, and those around us, in both the real and the virtual space.
Or perhaps we could get AI to do it.