Instagram's efforts to combat cyberbullying took a more advanced turn in 2019, when it introduced an artificial intelligence-powered tool designed to detect offensive comments and nudge people into making their statements less hurtful. Last year, the service also made it possible for users to bulk-delete comments they find abusive.
However, Instagram doesn't currently use technology to monitor and prevent hate speech and bullying in Direct Messages (DMs), noting that these conversations are private. Today, the Facebook-owned service announced changes to how it handles abusive messages and penalizes the people who send them. The measures come in the wake of racist attacks on footballers in the UK, including Marcus Rashford, Anthony Martial, Axel Tuanzebe, and Lauren James from Manchester United.
Previously, users who sent DMs that violated Instagram's rules were only barred from using the feature for a set period. Moving forward, the same violation will result in account removal, and anyone who sets up a new account to circumvent that ban faces the same penalty.
Instagram also vows to cooperate with UK law enforcement when it receives valid legal requests from authorities for information related to hate speech cases, though it says it won't comply with requests that aren't legally valid.
The service is also seeking input from its community as it develops a new capability to address abusive messages users see in their DMs. It's not entirely clear how this upcoming feature will work, but Instagram plans to release it in the next few months.
In addition, Instagram will soon let everyone disable DMs from people they don't follow, a capability already available to business and creator accounts, as well as personal accounts in a few territories. Of course, users can already switch off tags and mentions and block anyone who sends them unwanted DMs. That said, the latest capabilities expand how people control their experience on the platform.