Google’s DeepMind artificial intelligence knows a thing or two about playing Go, speaking like one of us, and keeping puny, mortal humans from going blind. But now its creators are trying to teach the AI about the real world, and how it can interact with it.
In a new paper currently under review, researchers explain how they trained the AI to learn about physical objects in a virtual environment. By essentially mimicking the type of interactions babies have with the world around them, albeit at a radically simplified level, the computer engineers were able to get the AI to learn about numbers and mass.
DeepMind was put to the test in two different virtual scenarios. The first involved five blocks whose masses changed every time the experiment ran. The AI’s goal was to identify which block was heaviest, but it was given no instructions on how to do so. Every correct guess earned positive feedback, while every mistake brought negative feedback. Through trial and error, the AI eventually learned that the only way to achieve its goal was to manipulate each individual block and “feel” its mass.
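The paper itself trains a deep reinforcement-learning agent, but the reward loop it describes can be illustrated with a much simpler toy. The sketch below (all class and function names are hypothetical, not from the paper) simulates the heaviest-block task: an agent “pokes” each block to get a noisy mass reading, guesses which is heaviest, and receives +1 or −1 as feedback.

```python
import random


class HeaviestBlockEnv:
    """Toy version of the heaviest-block task: five blocks with
    random masses that are reshuffled every episode."""

    def __init__(self, n_blocks=5, noise=0.1, seed=None):
        self.n = n_blocks
        self.noise = noise
        self.rng = random.Random(seed)
        self.reset()

    def reset(self):
        # New random masses each episode, as in the experiment.
        self.masses = [self.rng.uniform(1.0, 10.0) for _ in range(self.n)]

    def poke(self, i):
        # Interacting with a block yields a noisy mass reading --
        # the agent must "feel" mass rather than read it directly.
        return self.masses[i] + self.rng.gauss(0, self.noise)

    def guess(self, i):
        # +1 for the correct block, -1 otherwise (the reward signal).
        heaviest = max(range(self.n), key=lambda j: self.masses[j])
        return 1 if i == heaviest else -1


def probe_and_guess(env, pokes_per_block=3):
    """A strategy the agent could converge to: poke every block a few
    times, average the readings, and guess the largest estimate."""
    estimates = []
    for i in range(env.n):
        readings = [env.poke(i) for _ in range(pokes_per_block)]
        estimates.append(sum(readings) / len(readings))
    return env.guess(max(range(env.n), key=lambda i: estimates[i]))
```

A random guesser averages a reward of −0.6 on this task (one chance in five of +1), so any policy that probes before guessing easily beats it; the actual DeepMind agent had to discover such a probing strategy from the reward signal alone.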
The second experiment similarly involved five blocks, though some of them were stuck together, essentially acting like a single block. This time the blocks were stacked as a tower and the AI had to guess how many individual pieces there were. As in the previous scenario, DeepMind discovered through trial and error how to manipulate the objects and count them.
This type of research, though seemingly rudimentary, lays the groundwork for how artificial intelligence will interact with the real world, and with us. It also has practical implications for robotics: future robots could learn to understand the physical world and how to move through it. You know, so Terminators don’t accidentally step in molten steel.