Twitter introduces new behavioral analysis tools to fight unwanted content

Hateful comments and meaningless, repetitive posts are a major problem on many websites, especially social networks, where people are most likely to express themselves and interact with one another. While anti-hate and anti-violence guidelines have become standard across platforms, users still encounter posts that add nothing to a conversation, or even detract from it, without necessarily violating community rules.

Now, Twitter is taking new steps to fight this disruptive behavior by implementing new tools for behavioral analysis. In a blog post published today, Del Harvey, VP of Trust and Safety, and David Gasca, Director of Product Management for Health, explained how Twitter will analyze the behavior of accounts to determine whether they are likely to detract from a healthy conversation:

Today, we use policies, human review processes, and machine learning to help us determine how Tweets are organized and presented in communal places like conversations and search. Now, we’re tackling issues of behaviors that distort and detract from the public conversation in those areas by integrating new behavioral signals into how Tweets are presented. By using new tools to address this conduct from a behavioral perspective, we’re able to improve the health of the conversation, and everyone’s experience on Twitter, without waiting for people who use Twitter to report potential issues to us.

There are many new signals we’re taking in, most of which are not visible externally. Just a few examples include if an account has not confirmed their email address, if the same person signs up for multiple accounts simultaneously, accounts that repeatedly Tweet and mention accounts that don’t follow them, or behavior that might indicate a coordinated attack. We’re also looking at how accounts are connected to those that violate our rules and how they interact with each other.

These signals will now be considered in how we organize and present content in communal areas like conversation and search. Because this content doesn’t violate our policies, it will remain on Twitter, and will be available if you click on “Show more replies” or choose to see everything in your search setting. The result is that people contributing to the healthy conversation will be more visible in conversations and search.

As explained in the blog post, the company will not delete disruptive content that doesn't violate its policies; instead, those tweets and replies will be deprioritized and will only show up if the user chooses to see them.
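To illustrate the kind of signal-based deprioritization the blog post describes, here is a minimal sketch in Python. The signal names, weights, and threshold are all hypothetical and chosen only for illustration; Twitter has not published its actual scoring model.

```python
# Hypothetical sketch of ranking replies by behavioral signals.
# The signals below mirror the examples Twitter gives (unconfirmed email,
# simultaneous sign-ups, unsolicited mentions, links to rule-breaking
# accounts), but the weights and threshold are invented for illustration.

from dataclasses import dataclass


@dataclass
class AccountSignals:
    email_confirmed: bool          # unconfirmed email is a negative signal
    simultaneous_signups: int      # accounts created by the same person at once
    unsolicited_mentions: int      # repeated mentions of non-followers
    linked_to_rule_breakers: bool  # connected to accounts that violate rules


def behavior_score(s: AccountSignals) -> float:
    """Higher scores mean healthier behavior (illustrative weights only)."""
    score = 1.0
    if not s.email_confirmed:
        score -= 0.2
    score -= 0.1 * min(s.simultaneous_signups, 5)
    score -= 0.05 * min(s.unsolicited_mentions, 10)
    if s.linked_to_rule_breakers:
        score -= 0.3
    return score


def rank_replies(replies, threshold=0.5):
    """Split replies into visible ones and those behind 'Show more replies'.

    Nothing is deleted: low-scoring replies remain available, just hidden
    until the reader opts in, matching the behavior the blog post describes.
    """
    visible = [r for r in replies if behavior_score(r["signals"]) >= threshold]
    hidden = [r for r in replies if behavior_score(r["signals"]) < threshold]
    # Healthier accounts surface first within the visible set.
    visible.sort(key=lambda r: behavior_score(r["signals"]), reverse=True)
    return visible, hidden
```

In this toy model, a reply from an account with a confirmed email and no suspicious activity scores 1.0 and stays visible, while one from an unconfirmed account tied to rule-breaking behavior drops below the threshold and lands behind "Show more replies".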

The company has been testing the feature in a few markets around the world, and it claims to have seen a 4% reduction in abuse reports from the Twitter search experience, and an 8% drop in reports from conversations, which indicates the initiative is seeing some success.

Despite the progress made, the company acknowledges that there's much more to be done when it comes to promoting healthy conversations on Twitter, but it pledged to keep learning and improving its tools. We'll have to wait and see how that promise plays out.

Source: Twitter via TechSpot
