While Artificial Intelligence (AI) is an interesting field that can automate some industries and accelerate development in others, many people, including Elon Musk, have reservations about its capabilities. In the past, Musk has warned that AI needs to be regulated "before it's too late", and that AI-proponent Mark Zuckerberg's understanding of the AI threat is extremely limited.
It now appears that the Tesla executive has good reason to be worried, because MIT researchers have created the "world's first psychopath AI", which is seemingly obsessed with murder.
The researchers who built this AI have aptly named it "Norman", after the character from Alfred Hitchcock's Psycho; some of you might know him from A&E Network's Bates Motel. Norman was trained using data from the "darkest corners of Reddit", and its responses were then evaluated using Rorschach inkblot tests. The researchers write:
We present you Norman, world's first psychopath AI. [...] Norman suffered from extended exposure to the darkest corners of Reddit, and represents a case study on the dangers of Artificial Intelligence gone wrong when biased data is used in machine learning algorithms.
Norman is an AI that is trained to perform image captioning; a popular deep learning method of generating a textual description of an image. We trained Norman on image captions from an infamous subreddit (the name is redacted due to its graphic content) that is dedicated to document and observe the disturbing reality of death. Then, we compared Norman's responses with a standard image captioning neural network (trained on MSCOCO dataset) on Rorschach inkblots; a test that is used to detect underlying thought disorders.
The Rorschach Test revolves around an individual's perception of inkblots, with responses being analyzed using psychological interpretation. It has faced some controversy over the years, being called a form of pseudoscience because its results depend on the examiner's subjective interpretation rather than on empirical data. That said, Norman's responses to the test are still pretty disturbing, to say the least. You can check out some of them below:
At the end of the day, Norman's purpose is to demonstrate the effect that biased training data can have on a model's output. The experiment also raises several questions about the use of AI in the future, particularly given how heavily AI depends on unbiased data. That said, this is still a field that is being meticulously explored, and one can hope that researchers will be able to refine AI to the extent that it can tackle day-to-day human problems efficiently and correctly in the future.
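The effect described above can be sketched with a toy experiment: two copies of the same trivial "captioning" algorithm, trained on differently biased corpora, give very different answers to the same ambiguous input. This is a minimal illustration only, not the MIT team's actual method; all captions below are invented for the example.

```python
from collections import Counter

def train_captioner(captions):
    """Train a toy 'captioner': a unigram frequency model over the training
    captions, ignoring short stopword-like tokens. Faced with any ambiguous
    input, it simply projects the most common word it saw during training."""
    counts = Counter(w for c in captions for w in c.split() if len(w) > 3)

    def caption(_image):
        # The model has no understanding of the input; its answer is
        # determined entirely by the data it was trained on.
        return counts.most_common(1)[0][0]

    return caption

# Same algorithm, two different (invented) training corpora.
neutral_data = ["a bird sitting on a branch",
                "a vase of flowers on a table",
                "a bird in flight"]
grim_data = ["a man is shot dead",
             "a man is shot in the street",
             "man shot near a moving car"]

neutral_model = train_captioner(neutral_data)
grim_model = train_captioner(grim_data)

print(neutral_model("inkblot"))  # → "bird"
print(grim_model("inkblot"))     # → "shot"
```

The point mirrors the Norman experiment: neither "model" is malicious, and the algorithm is identical in both cases; the difference in output comes entirely from the data each one was fed.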