Big tech firms have a complicated relationship with the U.S. government: the two parties meet regularly to discuss current issues involving social media, even as the government works to ensure that these companies don't become too powerful. Today, the White House laid out six principles it will focus on to limit the power and reach of big tech firms.
The Biden administration recently met with several subject matter experts in the big tech and social media space to identify focus areas that require further legislation. The meeting was attended by members of U.S. President Joe Biden's cabinet as well as executives from tech firms such as Sonos and Mozilla.
There are six principles targeting big tech reform in total, summarized below:
- Promote competition in the technology sector: There is bipartisan support for clearer rules ensuring that the U.S. IT sector does not shut out new entrants such as small and medium-sized businesses and entrepreneurs. The main way to tackle this issue would be new antitrust laws.
- Provide robust federal protections for Americans' privacy: The focus here is to minimize data collection and place limits on targeted advertising. The burden of compliance would fall on big tech firms rather than on consumers, who today must wade through lengthy terms and conditions to understand how their data is collected, especially health and geolocation data. This principle enjoys bipartisan support as well.
- Protect our kids by putting in place even stronger privacy and online protections for them, including prioritizing safety by design standards and practices for online platforms, products, and services: Big tech firms will be encouraged to prioritize the wellbeing of young users over revenue and profit, considering that this demographic is particularly susceptible to online harm.
- Remove special legal protections for large tech platforms: Section 230 of the Communications Decency Act shields tech platforms from liability for content they host, even illegal content; the administration believes this needs to be revisited.
- Increase transparency about platforms' algorithms and content moderation decisions: Both users and researchers currently have little visibility into the algorithms that decide what content is displayed, and this opaqueness needs to be reduced.
- Stop discriminatory algorithmic decision-making: There need to be "strong protections" to ensure that algorithms do not discriminate against protected groups and vulnerable communities.
Although the reforms sound good on paper (they're certainly not as drastic as simply asking big tech firms to abandon algorithms altogether), the real challenge is figuring out how to actually implement and enforce them. We'll likely hear more about efforts on this front in the weeks and months to come.