Ofcom: A third of users found hate speech on video sites in the last 90 days
by Paul Hill
The UK’s digital regulator, Ofcom, has published new findings suggesting that a third of people who accessed video-sharing websites such as YouTube came across hateful content in the last three months, an indication that content policing may not be working. The findings were released to coincide with new rules, published by Ofcom, with which video-sharing platforms (VSPs) must comply.
Ofcom’s study found that a third of users had encountered hateful content online; the regulator said the content was typically directed at certain racial groups, religious groups, transgender people, and people of particular sexual orientations.
Beyond that content, a quarter of those asked said they had been exposed to bullying, abusive behaviour, and other threats. A fifth of respondents said that they had witnessed or experienced racist content online, and those from a minority ethnic background were more likely to have encountered it.
As younger people tend to be more adept with technology, it’s unsurprising to hear from Ofcom that 13- to 17-year-olds were more likely to have been exposed to harmful content online in the last three months. Seven in ten VSP users who responded said they had come across harmful content, rising to eight in ten among 13- to 17-year-olds.
The regulator also found that 60% of VSP users who responded were unaware of the safety and protection features on the websites they use, and only 25% had ever flagged or reported content they thought was harmful. To help raise awareness, Ofcom has told VSPs that they need to introduce clear upload rules and make it easy to flag or report content, and it said that adult sites should introduce age-verification systems.
If sites fail to comply with Ofcom’s decisions, it will investigate and take action. Some of the measures it could enforce include fines, requiring a provider to take specific actions, and in serious cases, it could restrict access to the service.
Instagram will now block accounts that send abusive Direct Messages
by Jay Bonggolto
Instagram's efforts to combat cyberbullying took a more advanced approach in 2019, when it introduced an artificial intelligence-powered tool designed to detect offensive comments and persuade people to make their statements less hurtful. Last year, the service made it possible for users to bulk-delete comments they find abusive.
However, Instagram doesn't currently use technology to monitor and prevent hate speech and bullying in Direct Messages (DMs), noting that these conversations are private. Today, the Facebook-owned service announced new changes to how it addresses abusive messages and penalizes people who send them. The measures come in the wake of racist attacks on footballers in the UK, including Marcus Rashford, Anthony Martial, Axel Tuanzebe, and Lauren James of Manchester United.
Before, users who sent DMs that violated Instagram's rules would only be prevented from using that feature for a set period. Moving forward, the same violation will result in account removal. And if someone tries to set up a new account in order to circumvent Instagram's rules, the same penalty applies.
Instagram also vows to work with UK law enforcement when it receives legal requests from authorities for information related to hate speech cases, though it will not act on requests that are not legally valid.
The service is also seeking input from its community as part of an effort to develop a new capability that will address abusive messages users see in their DMs. It's not entirely clear how this upcoming feature will work, but Instagram plans to release it in the next few months.
In addition, Instagram will soon allow everyone to disable DMs from people they don’t follow, a capability that's already available to business and creator accounts as well as personal accounts in a few territories. Of course, users can already switch off tags or mentions and block users who send them unwanted DMs. That said, the latest capabilities expand how people control their experience on the platform.
Twitter updates its policies to prohibit racism
by Usama Jawad
Back in July 2019, Twitter updated its platform guidelines to prohibit hate speech against religious groups. Then in March 2020, it once again expanded them to include hateful conduct that dehumanizes people based on age, disability, or disease. Today, the company is banning hate speech that dehumanizes based on race, ethnicity, and national origin.
In a blog post, Twitter has indicated that hate speech based on race, ethnicity, and national origin will be promptly removed from the platform as soon as it is reported. The company will also be using its automated processes to detect and remove content it deems hateful. Repeat offenders who violate this guideline may get their accounts temporarily suspended as well. The firm has posted the following examples of Tweets that it characterizes as hateful conduct based on the expanded policies:
- All [national origin] are cockroaches who live off of welfare benefits and need to be taken away.
- People who are [race] are leeches and only good for one thing.
- [Ethnicity] are mail-order bride scum.
- There are too many [national origin/race/ethnicity] maggots in our country and they need to leave.

Twitter is also working with third parties to better understand how it can combat this problem. It says that dehumanizing behavior can lead to real-world violence as well, and as such, the ultimate goal is to keep people safe from hateful behavior globally, both online and offline.
Facebook will examine potential racial bias in its algorithms
by Jay Bonggolto
Facebook recently confirmed its plan to perform an internal audit on the way it manages hate speech on the platform in light of boycotts by major advertisers over the issue. Walmart, Verizon, Ford, and Microsoft are some of the big ad spenders on the social network to have joined the "Stop Hate for Profit" campaign calling on advertisers to withdraw their spending on the platform until Facebook addresses hate speech and racial issues.
Now, the social media giant is forming new teams to study whether its algorithms contain bias toward Black, Hispanic, and other minority groups in the U.S., The Wall Street Journal reports, citing sources with knowledge of the matter. The internal teams will examine both the main Facebook platform and Instagram, comparing the effects of these algorithms on the service's minority and white users. This was confirmed by an Instagram representative.
Facebook is also conducting discussions with third-party experts and civil rights groups on how to examine race in a reliable and consistent manner as part of this new effort. Regarding the new plan, Instagram's Stephanie Otway said, "It’s early; we plan to share more details on this work in the coming months".
The team being created for Instagram is called the “equity and inclusion team”, although it doesn't have a leader for now. Meanwhile, Facebook's dedicated team is called the "Inclusivity Product Team", tasked with making consultations with a council of Black users and racial issue experts. The goal is to work with other product teams and develop new features for the platform designed to support minority users.
Source: The Wall Street Journal
Cisco increasing its contributions to half a billion dollars to combat racism and COVID-19
by Ather Fawaz
Cisco, one of the biggest manufacturers of networking equipment, announced that it will be putting in more money to mitigate the effects of the economic downturn caused by the COVID-19 pandemic and to tackle systemic racism. The funds will be added to Cisco's existing contribution of $275 million to ongoing efforts to help homeless people in Silicon Valley and fight the pandemic, bringing the total investment to half a billion dollars and extending it to include the initiative to curb racial inequality.
With regard to the initiative, Cisco's Chief Executive Officer, Chuck Robbins, commented that the current corporate social responsibility programs worldwide fall short in the face of centuries of inequality and injustice:
The California giant's announcement comes at a time when nationwide protests have sprung up after George Floyd, an African-American man, was killed in police custody. Other prominent tech firms have also taken steps to reduce social injustice and racism: in the past week, Microsoft, IBM, and Amazon have all taken measures to curb the use of facial recognition systems that could potentially demonstrate racial bias.