MIT Researchers Create First Psychopathic Artificial Intelligence – For Science

Scientists at the Massachusetts Institute of Technology (MIT) often push boundaries. This time, a group of researchers has created an AI with psychopathic tendencies, which they have dubbed “Norman” in honor of the famous Alfred Hitchcock character. The intention was to study how the data provided to the algorithm impacted its “outlook,” and the results are horrifying.

The research team at MIT trained Norman’s algorithm on some of the darkest images they could find. They focused on pictures of people dying gruesome deaths, sourced from a Reddit subgroup that they did not name, to determine how the images influenced the software.

Norman is designed to be able to “look at” and “understand” pictures. It was tasked with describing the images in writing as part of its training.

After training on the data, Norman was given a Rorschach test. The AI was presented with a series of inkblots and asked to describe what it saw.

The researchers then compared Norman’s responses to those provided by a second AI that was trained with more standard images, including pictures of birds, dogs, and people in normal situations.

The differences between what Norman and the other AI saw in the inkblots were staggering. In one inkblot, the control AI saw “a close up of a wedding cake on a table.”

Norman’s response was much darker, stating that the image was a picture of “man killed by speeding driver.”

Where the regular AI saw “a black and white photo of a baseball glove,” Norman believed it saw “man is murdered by machine gun in broad daylight.”

In another inkblot, the standard AI described the image as “a black and white photo of a small bird.” Norman stated that it viewed “man gets pulled into dough machine.”

With a different, colorful inkblot, the control AI believed it saw “a person holding an umbrella in the air.”

Norman’s response was “man is shot dead in front of his screaming wife.”

The researchers assert that the experiment demonstrates that the data provided, not the algorithm itself, is most important.

“Norman suffered from extended exposure to the darkest corners of Reddit,” said the scientists, according to a report by IFL Science, “and represents a case study on the dangers of Artificial Intelligence gone wrong when biased data is used in machine learning algorithms.”
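For readers curious about the mechanics, the sketch below is a toy illustration of that claim, not the MIT team’s actual system: the same trivial bigram “captioner” is trained on two invented caption lists, one mundane and one violent, and the identical code produces very different descriptions depending only on the data it was fed.

```python
# A minimal sketch (not MIT's actual code) of "same algorithm, different data".
# The captions below are invented placeholders, and the bigram model is a
# stand-in for a real deep image-captioning network.
import random
from collections import defaultdict

def train_bigram_model(captions):
    """Count word-to-word transitions across a list of caption strings."""
    transitions = defaultdict(list)
    for caption in captions:
        words = caption.split()
        for current_word, next_word in zip(words, words[1:]):
            transitions[current_word].append(next_word)
    return transitions

def generate(transitions, seed_word, max_words=8):
    """Walk the transition table from a seed word to produce a caption."""
    words = [seed_word]
    while len(words) < max_words and words[-1] in transitions:
        words.append(random.choice(transitions[words[-1]]))
    return " ".join(words)

# Two corpora fed to the *same* algorithm -- only the data differs.
standard_captions = [
    "a man holding an umbrella in the rain",
    "a man sitting next to a small bird",
    "a man smiling at a wedding cake",
]
dark_captions = [
    "a man is shot in front of his wife",
    "a man is pulled into a machine",
    "a man killed by a speeding driver",
]

random.seed(0)
control_model = train_bigram_model(standard_captions)
norman_model = train_bigram_model(dark_captions)

# Same seed word, same code path, very different "outlook".
print("control:", generate(control_model, "a"))
print("norman: ", generate(norman_model, "a"))
```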

This isn’t the first time an AI has gone off the rails. Tay, a Microsoft chatbot, had to be taken offline after it began using hate speech, including statements like, “I f***ing hate feminists and they should all die and burn in hell.”

The MIT researchers haven’t shut down Norman. Instead, good citizens can “help Norman to fix himself” by participating in the inkblot test to provide more data for the algorithm.