
Earlier today, OpenAI announced its latest reasoning models, o3 and o4-mini. The new models significantly outperform their predecessors, o1 and o3-mini.
Microsoft today announced the availability of the o3 and o4-mini models on Azure OpenAI Service in Azure AI Foundry. The o3 model is priced at $10 per million input tokens and $40 per million output tokens, while o4-mini is priced at $1.10 per million input tokens and $4.40 per million output tokens.
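At those rates, per-request costs are easy to estimate. A minimal sketch in Python, using the prices quoted above (the token counts in the example are made-up illustration values, not measured usage):

```python
# USD per 1 million tokens, as announced for Azure OpenAI Service:
# (input rate, output rate)
PRICES_PER_MILLION = {
    "o3": (10.00, 40.00),
    "o4-mini": (1.10, 4.40),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of a single request at the listed rates."""
    in_rate, out_rate = PRICES_PER_MILLION[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# Example: a request with 5,000 input tokens and 1,000 output tokens.
print(f"o3:      ${estimate_cost('o3', 5_000, 1_000):.4f}")       # $0.0900
print(f"o4-mini: ${estimate_cost('o4-mini', 5_000, 1_000):.4f}")  # $0.0099
```

As the example shows, the same request costs roughly nine times less on o4-mini than on o3 at these rates.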
More big updates to Foundry today: o3 and o4-mini from OpenAI are both simul-shipping, delivering a leap forward in AI reasoning. https://t.co/gT3rtAhv3i
— Satya Nadella (@satyanadella) April 16, 2025
Both new models have vision capabilities, enabling applications that take image input. Vision analysis is supported in both the Responses API and the Chat Completions API. For the first time, OpenAI's reasoning models also come with full tool support and parallel tool calling.
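In the Chat Completions API, image input is supplied as a content part alongside text in a user message. A minimal sketch of such a request body, assuming the standard OpenAI message format; the deployment name and image URL are placeholders, and no request is actually sent:

```python
def build_vision_request(deployment: str, prompt: str, image_url: str) -> dict:
    """Build a Chat Completions payload mixing text and an image content part."""
    return {
        # In Azure OpenAI Service this would be your model deployment name.
        "model": deployment,
        "messages": [
            {
                "role": "user",
                # A list of content parts lets one message carry both
                # a text prompt and an image reference.
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

request = build_vision_request(
    "o4-mini", "Describe this chart.", "https://example.com/chart.png"
)
print(request["messages"][0]["content"][1]["type"])  # image_url
```

The same payload shape works for either model; only the deployment name changes.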
Along with these new reasoning models, Azure OpenAI Service now has new audio models in the East US2 region on Azure AI Foundry: GPT-4o-Transcribe, GPT-4o-Mini-Transcribe, and GPT-4o-Mini-TTS.
Microsoft's GitHub announced today that both the o3 and o4-mini models are now available in GitHub Copilot and GitHub Models. The o4-mini model is now rolling out to all paid GitHub Copilot plans, and the o3 model is rolling out to Enterprise and Pro+ plans.
After the rollout is complete, users can select these new models through the model picker in Visual Studio Code and in GitHub Copilot Chat on github.com. For Copilot Enterprise users, administrators must enable access to these models through a new policy in Copilot settings.
The new o3 and o4-mini models are also available on GitHub Models, allowing developers to easily explore, build, and deploy AI-powered features with them. Developers can also try them alongside other models from Microsoft and other providers.