
Artificial Intelligence is now a trending topic, thanks in no small part to the launch and virality of LLM chatbots like OpenAI's ChatGPT in the U.S. and DeepSeek from China, whose debut rattled the stock market.
The popularity of these tools has raised concerns about how they will affect people across various industries, including those who are unfamiliar with the term "Artificial Intelligence."
AI systems carry real risks, especially in their potential to amplify misinformation, enable privacy violations, encourage over-dependence, and, of course, play a role in warfare.
As such, leading AI companies are working hard to keep AI safe for everyone. Google has published a "Responsible AI Progress Report" every year for the past six years, and today it released the report for 2024.
In the blog post announcing the publication of the report, Google mentioned that it has updated its Frontier Safety Framework. If you don't know what that is, it's a set of protocols developed by Google DeepMind to proactively identify and mitigate potential risks associated with advanced AI models. The updated framework includes:
- Recommendations for Heightened Security: helping to identify where the strongest efforts to curb exfiltration risk are needed.
- Deployment Mitigations Procedure: focusing on preventing the misuse of critical capabilities in the systems we deploy.
- Deceptive Alignment Risk: addressing the risk of an autonomous system deliberately undermining human control.
Alongside the update to the safety framework, Google reiterated its belief that democracies should lead in the development of AI, guided by values like "freedom, equality, and respect for human rights." In addition, it said it was updating its AI Principles, which were publicly released back in 2018.
If you take a look at the updated AI Principles page, you'll notice there's no longer any mention of AI use in weapons development.
An archived copy of the page, captured on 22 Feb 2024 at 04:19:46 UTC, includes a section that explicitly mentions weapons and Google's promise not to "design or deploy AI" in that area:
Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
That line is gone from the updated principles, which instead focus on the following three core tenets:
- Bold Innovation: We develop AI to assist, empower, and inspire people in almost every field of human endeavor, drive economic progress and improve lives, enable scientific breakthroughs, and help address humanity's biggest challenges.
- Responsible Development and Deployment: Because we understand that AI, as a still-emerging transformative technology, poses new complexities and risks, we consider it an imperative to pursue AI responsibly throughout the development and deployment lifecycle — from design to testing to deployment to iteration — learning as AI advances and uses evolve.
- Collaborative Progress, Together: We learn from others, and build technology that empowers others to harness AI positively.
Google has not acknowledged the removal of its pledge not to use AI in weapons development, an area its employees have strongly opposed in the past.
For example, in 2018, almost 4,000 Google employees signed a letter urging the company to terminate its contract with the U.S. Department of Defense, known as Project Maven, which involved using AI to analyze drone footage.
Likewise, last year, employees from Google DeepMind signed a letter urging the tech giant to end its ties with military organizations. The letter cited the company's AI principles, arguing that its involvement with military organizations and weapons manufacturers violated its now-removed promise not to use AI in weaponry.