
Microsoft unveils in-house processors for powering its AI services

Microsoft Azure Cobalt and Maia data center chips

The software giant announced two custom-made chips for AI workloads during its annual Microsoft Ignite conference: the Microsoft Azure Maia AI Accelerator, designed for artificial intelligence tasks and generative AI, and the Microsoft Azure Cobalt, an Arm-based processor for general-purpose compute workloads on the Microsoft Cloud.

The idea behind the project is to tailor "everything from silicon to service" to meet modern demand for artificial intelligence. Beyond the in-house chips themselves, Microsoft optimized the rest of its stack for AI workloads, including software, server racks, and cooling systems.

"Microsoft is building the infrastructure to support AI innovation, and we are reimagining every aspect of our datacenters to meet the needs of our customers. At the scale we operate, it's important for us to optimize and integrate every layer of the infrastructure stack to maximize performance, diversify our supply chain and give customers infrastructure choice," the company said in its announcement.

Microsoft says the Azure Maia AI Accelerator is designed specifically for the Azure hardware stack, achieving the "absolute maximum utilization of the hardware." The Azure Cobalt, meanwhile, is an energy-efficient Arm chip for cloud-native offerings, tuned for an optimized performance-per-watt ratio in data centers.

To accommodate the new chips in the existing data center infrastructure, Microsoft redesigned server racks (new processors require wider boards) and implemented liquid cooling solutions. The company will roll out its new AI-focused processors to data centers early next year, initially powering Microsoft Copilot and Azure OpenAI Service.


Besides launching dedicated silicon for artificial intelligence, Microsoft is expanding partnerships with other manufacturers to give customers more options. The company launched a preview of new virtual machines powered by NVIDIA's H100 Tensor Core GPUs, and it plans to adopt the NVIDIA H200 Tensor Core GPU and AMD's Instinct MI300X as well. Those additions should bring better performance, reliability, and efficiency to mid-range and high-end AI training and generative AI workloads.
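For customers curious which of these GPU-backed virtual machine sizes are actually exposed in their Azure region, here is a minimal sketch using the Azure Python SDK. It assumes the azure-identity and azure-mgmt-compute packages, an already-authenticated environment, and a placeholder subscription ID; the "H100" name filter is purely illustrative, not an official SKU list.

```python
# Minimal sketch: list the VM sizes offered in one Azure region and keep
# those whose names mention H100-class GPUs.
# Assumes `pip install azure-identity azure-mgmt-compute` and that you are
# already signed in (for example via `az login`). SUBSCRIPTION_ID is a
# placeholder value, not a real subscription.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "<your-subscription-id>"  # placeholder
REGION = "eastus"                           # any region you deploy to

credential = DefaultAzureCredential()
client = ComputeManagementClient(credential, SUBSCRIPTION_ID)

# virtual_machine_sizes.list returns every VM size available in the region;
# the substring check below is only an illustrative filter.
for size in client.virtual_machine_sizes.list(location=REGION):
    if "H100" in size.name:
        print(size.name, size.number_of_cores, "cores",
              size.memory_in_mb // 1024, "GiB RAM")
```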
