
Twitter tests warning users when they write potentially offensive replies

Online abuse and cyberbullying have been the targets of many measures taken by social networks in recent years, as companies try to promote a healthier environment on their platforms. Twitter has taken steps of its own in the past, giving users the ability to hide replies to their tweets and automatically demoting replies that are less likely to contribute to a positive conversation.

Today, the Twitter support team revealed that it's testing a new feature, currently limited to a small number of users on iOS, that aims to curb harmful language on the platform. When a user writes a reply containing potentially harmful language, Twitter will show a prompt asking them to reconsider before posting it.

Twitter isn't the first social network to take this kind of approach; Facebook's Instagram began doing something very similar last year. According to Instagram, its approach has shown positive results, so it makes sense for other social networks to follow suit. On Instagram, the warning is now shown for original posts as well as replies, but Twitter's implementation doesn't seem to go that far, at least for now.

Naturally, it remains to be seen whether the experiment is well received on Twitter and, if it is, how long it will take to roll out more widely. Previous features in this vein have been fairly successful on the platform, though, so the feature will likely expand over time.
