Recap: MIT research highlights the importance of training AI on the right data. For those worried about a real-life Skynet, results like this will not help you sleep more easily at night.
Researchers at MIT's Media Lab have created what they call the world's first psychopathic AI. Norman (named after the character in Alfred Hitchcock's Psycho) was trained on data from the "darkest corners of Reddit" and serves as a case study in how biased data can influence machine learning algorithms.
As the team points out, AI algorithms can see very different things in an image if they are trained on the wrong data. Norman was trained to perform image captioning, a deep learning method used to generate a textual description of an image. It was fed image captions from an "infamous" subreddit that documents the disturbing reality of death (the subreddit's name was withheld because of its graphic nature).
Once trained, Norman was tasked with describing Rorschach inkblots - a common test used to detect underlying thought disorders - and its responses were compared with those of a standard image-captioning neural network trained on the MSCOCO dataset. The results were quite startling.
The researchers note that, because of ethical concerns, Norman was trained only on image captions; no images of actual dying people were used in the experiment.
This is not the first time we have seen an AI misbehave. In 2016, if you remember, Microsoft launched a chatbot named Tay, modeled on a 19-year-old girl. In less than 24 hours, the Internet managed to corrupt the bot's personality, forcing Microsoft to quickly take it offline.