The famed computing pioneer Alan Turing, born 100 years ago this year, proposed what has come to be known as the Turing Test for judging whether a machine can match human intelligence. In essence, the test states that if an observer can't tell whether a process is the work of a human or an AI, the AI passes.
This month, computer scientists at The University of Texas at Austin and a computer programmer based in Romania each created AI-driven opponents in Unreal Tournament 2004 that fooled judges in the annual BotPrize competition into thinking they were human 52 percent of the time.
The school's press release explains that during the BotPrize event, human players not only face off against these custom AI bots in UT 2004 matches, but are also equipped with a "judging gun" that tags an opponent as either human or not. Notably, the judges identified real human UT 2004 players as human just 40 percent of the time.
So how does one program a bot to appear human? Doctoral student Jacob Schrum explains:
If we just set the goal as eliminating one’s enemies, a bot will evolve toward having perfect aim, which is not very human-like. So we impose constraints on the bot’s aim, such that rapid movements and long distances decrease accuracy. By evolving for good performance under such behavioral constraints, the bot’s skill is optimized within human limitations, resulting in behavior that is good but still human-like.
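The constraint Schrum describes can be illustrated with a small sketch. The code below is not the team's actual implementation; it is a hypothetical model in which the bot's aim spread grows with target distance and with how quickly the bot is turning, so that perfect accuracy is impossible under fast movement, much as it is for a human player. All function names and coefficients here are illustrative assumptions.

```python
import random


def aim_spread(distance, turn_speed, base=0.5, per_unit=0.02, per_degps=0.1):
    """Hypothetical spread (in degrees) of the bot's aim.

    Spread grows linearly with target distance and with the bot's
    angular turn rate, mirroring the human limitations Schrum
    describes: rapid movements and long distances reduce accuracy.
    The coefficients are made up for illustration.
    """
    return base + per_unit * distance + per_degps * turn_speed


def sample_aim_offset(distance, turn_speed, rng=random):
    """Draw a random angular aiming error from the modeled spread."""
    return rng.gauss(0.0, aim_spread(distance, turn_speed))
```

Under a model like this, an evolutionary search can still optimize the bot's combat skill, but only within the imposed error envelope, so the resulting behavior stays effective without looking superhumanly precise.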
Source: The University of Texas at Austin | Image via The University of Texas at Austin