Microsoft announces availability of new vision, search Cognitive Services

Microsoft's Cognitive Services are a set of APIs, SDKs, and services that enable developers to add AI capabilities to their applications, making them more intelligent, intuitive, and engaging. These capabilities include emotion detection, face, speech, and vision recognition, as well as speech and language understanding.

Microsoft has been working to bring more of its 25 Cognitive Services to public preview and general availability. As part of that effort, the company today announced new milestones for its vision and search services in Azure: the Custom Vision service has entered public preview on the Azure Portal, the already generally available Face API has gained several improvements, and Bing Entity Search is now generally available.

The Microsoft Custom Vision service allows developers to train classifiers with their own data, export those classifiers for use directly in their applications, and run them in real time on iOS, Android, or other edge devices (such as routers, integrated access devices (IADs), and multiplexers). In simple terms, developers upload and label their own images to teach the classifier the concepts it should recognize. As an example scenario, Microsoft cites retailers creating models to automatically classify images from their catalogues.
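To make the training workflow concrete, here is a minimal sketch of how labelled images might be submitted to the Custom Vision training REST API. The endpoint path, region, header name, and field names follow the service's general REST pattern but should be treated as assumptions, and the project ID, tag ID, and key are placeholders; the code only builds the request rather than sending it.

```python
# Sketch (assumptions): building an image-upload request for the
# Custom Vision training API. Nothing here is sent over the network.

def build_upload_request(project_id, training_key, image_urls, tag_id):
    """Return (url, headers, body) for a hypothetical image-upload call."""
    url = (
        "https://southcentralus.api.cognitive.microsoft.com"
        f"/customvision/v1.0/Training/projects/{project_id}/images/urls"
    )
    headers = {
        "Training-Key": training_key,        # per-account secret (placeholder)
        "Content-Type": "application/json",
    }
    body = {
        "Images": [{"Url": u} for u in image_urls],  # remote training images
        "TagIds": [tag_id],                  # label applied to every image
    }
    return url, headers, body

url, headers, body = build_upload_request(
    "my-project",
    "<training-key>",
    ["https://example.com/shoe1.jpg", "https://example.com/shoe2.jpg"],
    "tag-shoes",
)
```

In the retail scenario above, each catalogue category (shoes, jackets, and so on) would map to one tag, and the trained classifier could then be exported for on-device inference.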

The generally available Face API is a cloud-based service that detects the location and attributes of human faces, including emotions, in an image. It lets developers answer questions such as whether two images show the same face, or find the same person across a group of photos. Microsoft says that starting today, with several integrated improvements, Face API supports million-scale recognition. The company states that this new capability “represents a new type of person group now with up to a million people”, letting developers teach Face API to recognize up to a million individuals.
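As a rough illustration of what a face-detection call looks like, the sketch below assembles the URL, headers, and body for the Face API detect operation, requesting face locations plus age and emotion attributes. The region, query parameter names, and header follow the documented v1.0 pattern but are stated here as assumptions, and the key is a placeholder; the request is built, not sent.

```python
# Sketch (assumptions): a Face API v1.0 detect request asking for
# face IDs and age/emotion attributes for a remote image.

def build_detect_request(image_url, subscription_key,
                         region="westus",
                         attributes=("age", "emotion")):
    """Return (url, headers, body) for a face-detection call."""
    url = (
        f"https://{region}.api.cognitive.microsoft.com/face/v1.0/detect"
        f"?returnFaceId=true&returnFaceAttributes={','.join(attributes)}"
    )
    headers = {
        "Ocp-Apim-Subscription-Key": subscription_key,  # placeholder key
        "Content-Type": "application/json",
    }
    body = {"url": image_url}  # point the service at a remote image
    return url, headers, body

url, headers, body = build_detect_request(
    "https://example.com/group-photo.jpg", "<face-api-key>"
)
```

The face IDs returned by a detect call are what the recognition operations, including the new million-person groups, match against.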

Lastly, Bing Entity Search, part of the search capabilities of Cognitive Services and generally available as of today, lets developers boost user engagement by bringing rich, contextual information about people, places, things, and local businesses into any app, blog, or website. Developers can identify the most common entities behind a search term and surface primary information about them right inside the application. That information can cover “famous people, places, movies, TV shows, video games and books”, the company says. An example scenario is a messaging app showing key details about a restaurant being discussed, or a snapshot of its menu, for easier decision making.
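The restaurant scenario above might look like the following sketch: a query is built against the Bing Entity Search endpoint, and the primary entity's name and description are pulled from the response. The v7.0 endpoint shape and field names are assumptions, the key is a placeholder, and the response shown is a mocked fragment rather than live output.

```python
# Sketch (assumptions): querying Bing Entity Search and extracting
# the first entity's name and description from a (mocked) response.
from urllib.parse import urlencode

def build_entity_query(term, subscription_key, market="en-US"):
    """Return (url, headers) for an entity-search GET request."""
    base = "https://api.cognitive.microsoft.com/bing/v7.0/entities"
    params = urlencode({"q": term, "mkt": market})
    headers = {"Ocp-Apim-Subscription-Key": subscription_key}
    return f"{base}?{params}", headers

def primary_entity(response):
    """Extract (name, description) of the first entity, if any."""
    entities = response.get("entities", {}).get("value", [])
    if not entities:
        return None
    first = entities[0]
    return first.get("name"), first.get("description")

# Mocked response fragment standing in for a real API reply.
mock_response = {
    "entities": {"value": [
        {"name": "Contoso Grill",
         "description": "A neighbourhood restaurant."}
    ]}
}

url, headers = build_entity_query("Contoso Grill", "<subscription-key>")
name_and_description = primary_entity(mock_response)
```

A messaging app could run such a query whenever a place name is mentioned and render the returned snippet inline in the conversation.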

Additionally, Microsoft states that “more than a million developers have already discovered and tried our Cognitive Services”. With some of the company’s own apps already using the services for features such as real-time translation and computer vision, it will be interesting to see if and how Microsoft showcases more applications leveraging today’s announcements. The company’s annual developer conference, Build 2018, will be held in Seattle from May 7-9.

Source: Azure Blog | Image: Microsoft
