
Microsoft emphasizes Responsible AI principles... after laying off Ethical AI team

Microsoft is clearly all-in on artificial intelligence (AI) initiatives, with the company investing tons of money and effort into OpenAI's GPT, Bing Chat, GitHub Copilot, and all of its upcoming AI integrations with Microsoft 365 products. Despite this explosive growth in consumer-focused AI products, the company recently laid off an ethics team tasked with ensuring that responsible AI products made their way to customers. Now, in a twist of irony, Microsoft has emphasized its principles for Responsible AI in a blog post.


Microsoft maintains a set of Responsible AI principles that govern how AI products should be built. The foundations include ensuring fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. The company notes that:

Implementing a Responsible AI strategy is a challenge many organizations struggle with. As a result, Microsoft has standardized its Responsible AI practices and made them available for other companies and machine learning professionals to adopt when designing, building, testing, or deploying their AI systems. For instance, customers or developers can leverage the responsible AI impact assessment template to help identify an AI system's intended use; data integrity; any adverse impact on people or organizations; and how it addresses the goals of each of the six core responsible AI principles: fairness, inclusiveness, safety and reliability, privacy and security, accountability, and transparency. In addition, this fosters a practice for AI developers to take accountability and provide transparency to end users on what the AI system does, how it should be used, and its limitations, restrictions, and known issues. This helps machine learning teams evaluate their development lifecycle approach to validate that they are not overlooking factors that could cause their AI solution not to behave as intended.

Although Microsoft says that it has governance teams to ensure that its own AI products follow these principles too, recent news out of Redmond indicated that the firm has laid off its entire Ethics and Society team. This team was reportedly built to ensure that Microsoft's Responsible AI principles are closely tied to product design. At the time, however, Microsoft maintained that it still has an active Office of Responsible AI that creates rules and principles to govern its AI initiatives, and that it continues to make investments in the area.

The company didn't clearly explain why the team was laid off, but a former employee was quoted as saying:

People would look at the principles coming out of the office of responsible AI and say, "I don’t know how this applies". Our job was to show them and to create rules in areas where there were none.

Regardless, Microsoft has emphasized that it is investing in tools and products to ensure that its Responsible AI principles are followed. A notable example is the Responsible AI dashboard it launched back in 2021. Of course, only time will tell whether these initiatives are enough to curb the potentially harmful effects of AI products. Recently, notable tech personalities like Steve Wozniak, Evan Sharp, and Elon Musk petitioned AI labs to pause development on products for at least six months rather than rush ahead unprepared.
