
Data scientist enlists Google's help to unite machine learning and ramen

If your idea of a hyper-futuristic world involves robots that can identify which store a just-delivered bowl of ramen came from, we've taken one step closer to that imagined utopia.

We've seen Harrison Ford grumpily slurp noodles while pondering artificial people in Ridley Scott's 1982 cyberpunk cult classic, Blade Runner. Now you can do much the same over your own bowl, once you hear what Japanese data scientist Kenji Doi has done with his share of the stuff.

For a quick, oversimplified recap of how machine learning works: you show a computer a large number of photos of an object or type of object, each labelled with its name, with the count often running into the thousands or even hundreds of thousands. The computer effectively "learns" what that object looks like, and the next time you show it a brand-new photo of the same object, it can tell you what it is.
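To make that idea concrete, here's a minimal sketch of labelled-photo classification in Python using TensorFlow/Keras. This is not Doi's actual pipeline: the 41-class output (one per shop, matching the story below) and the 128x128 input size are our own illustrative assumptions.

```python
# A minimal sketch of the labelled-photo classification idea described
# above, using TensorFlow/Keras (not Doi's actual pipeline). The 41-class
# output and the 128x128 input size are illustrative assumptions.
import tensorflow as tf

NUM_SHOPS = 41          # one class per shop (assumption for this sketch)
IMG_SIZE = (128, 128)   # arbitrary resolution chosen for this sketch

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=IMG_SIZE + (3,)),
    tf.keras.layers.Rescaling(1.0 / 255),             # normalise pixel values
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(NUM_SHOPS, activation="softmax"),  # one score per shop
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(photos, shop_ids, epochs=10)  # training step; data not shown
```

Given enough labelled photos, training nudges the network's weights until the highest-scoring output class tends to match the correct label.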

Perhaps this sounds a bit, well, silly on the surface. You might think we don't really need a computer to tell us whether what we're looking at is a hot dog or not a hot dog.

Doi's experiment may change that perception: he used machine learning to teach a computer to tell apart photos of bowls of ramen from different branches of a single ramen chain, showing how computers can pick up on minuscule details that humans generally can't.

Ramen Jiro, a popular ramen chain in Japan, has 41 outlets across Tokyo. Doi amassed some 48,000 photographs of bowls from these outlets, used his own machine learning expertise in conjunction with Google's AutoML Vision technology to train a model on them, and ended up able to identify, with roughly a 5.5% margin of error, which of the shops a brand-new bowl of ramen came from just by showing his computer a photo of it.

Above is Doi's confusion matrix: a table showing how many times his model identified each shop correctly, and how many times it confused one shop for another.
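For readers unfamiliar with the term, here's a toy confusion matrix built with scikit-learn. The shop names and predictions below are made up for illustration and are not Doi's data; rows are the true shops, columns the predicted shops, so the diagonal counts correct identifications.

```python
# A toy confusion matrix (hypothetical labels, not Doi's results).
from sklearn.metrics import confusion_matrix

true_shops = ["Mita", "Meguro", "Mita", "Kanda", "Meguro"]   # hypothetical
predicted  = ["Mita", "Meguro", "Kanda", "Kanda", "Meguro"]  # hypothetical

print(confusion_matrix(true_shops, predicted,
                       labels=["Mita", "Meguro", "Kanda"]))
# [[1 0 1]    <- one "Mita" bowl was misidentified as "Kanda"
#  [0 2 0]
#  [0 0 1]]
```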

As Google puts it, AutoML Vision is the company's cloud-based solution for programmers with limited knowledge of machine learning models and for data scientists with limited quantities of, well, data. There's a lot more to it, of course, and if you'd like to dig deeper, you can check out Google's extended post on the subject via the source link below.
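As a rough idea of what querying a trained AutoML Vision model looks like, here's a hedged sketch based on the google-cloud-automl Python client. The project, region, model ID, and image file below are placeholders rather than Doi's setup, and the exact client surface may differ between library versions.

```python
# A hedged sketch of querying a trained AutoML Vision model with the
# google-cloud-automl Python client. "my-project", "us-central1",
# "my-ramen-model" and "bowl.jpg" are placeholders.
from google.cloud import automl

client = automl.PredictionServiceClient()
model_name = client.model_path("my-project", "us-central1", "my-ramen-model")

# Read the photo and wrap it in the payload type the API expects.
with open("bowl.jpg", "rb") as f:
    payload = automl.ExamplePayload(image=automl.Image(image_bytes=f.read()))

# Ask the deployed model which shop the bowl most likely came from.
response = client.predict(name=model_name, payload=payload)
for result in response.payload:
    print(result.display_name, result.classification.score)
```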

As mundane as classifying a bowl of ramen by its origin seems, as always, the fun truly lies in the precedent an experiment like this sets. Supermarkets could use the platform to automatically sort their goods, and Facebook or other programs of your choice could use your food photographs to automatically work out where you are; the list is potentially endless.

What we're seeing is machine learning still in its early years, and it already has a good deal of nifty applications; in the very near future, it will more than likely become just another commonplace, behind-the-scenes facet of life that we take for granted.

Source: Google Blog
