Norman, the Psychopath AI, and the Ongoing Issue of Biased Algorithms

In 2018, researchers at the Massachusetts Institute of Technology (MIT) created an AI named Norman, after the main character in Alfred Hitchcock’s “Psycho”.

Years later, researchers warn that biased algorithms remain a major issue in AI.

AI bias can stem from various sources, including unconscious stereotypes, misrepresentative training data, outdated data, lack of generalization, and inadequate handling of edge cases and outliers.

The Creation of Norman: An Unconventional AI Experiment

Norman was trained to perform image captioning, a deep-learning task that generates text descriptions of images, using captions from a graphic subreddit dedicated to images of gore and death.

Unveiling Norman’s Disturbing Worldview

Norman was then tested on Rorschach inkblots, a psychological assessment in which the test taker describes what they see in a series of inkblots. Norman’s captions were compared with those of a standard image-captioning AI trained on more typical data. The results showed that Norman had a dark, macabre view of the world, seeing death and violence in every inkblot, while the standard AI saw benign things like birds, flowers, or baseball gloves.

The Purpose and Findings of the Experiment

The researchers clarified that Norman was not truly a psychopath but a case study in how biased training data can shape AI behavior. For ethical reasons, they trained Norman only on text captions from the subreddit rather than on actual images of people dying; their goal was to raise awareness about AI bias and discrimination.

Impact of AI Bias and Discrimination

AI bias and discrimination can lead to harmful outcomes for individuals and groups, such as denying them opportunities, resources, services, or rights based on their characteristics or identities. For example, AI bias can result in:

  • Gender discrimination in hiring
  • Racial discrimination in facial recognition
  • Age discrimination in health care

Mitigating AI Bias and Discrimination

There are a number of strategies that can be adopted at different stages of the AI life cycle to prevent or mitigate AI bias and discrimination:

  • Data collection: Ensure data quality and diversity by collecting data from a variety of sources so that different groups of people are represented (see the representation-check sketch after this list).
  • Algorithm design: Apply fairness metrics and criteria, such as demographic parity or equalized odds, to the design of AI algorithms to help identify and remove bias.
  • Model testing: Audit and evaluate AI models to identify and mitigate bias before they are deployed to production (see the fairness-audit sketch after this list).
  • System deployment: Implement accountability and transparency mechanisms to ensure that AI systems are used fairly and ethically.
  • User feedback: Involve stakeholders and experts in the development and deployment of AI systems, and educate and empower users and consumers to identify and report bias.
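
As a concrete illustration of the data-collection point, here is a minimal Python sketch of a representation check that could be run before training. Everything in it is hypothetical: the records, the "group" field, and the 25% threshold are invented for illustration, and a real project would choose attributes and thresholds appropriate to its domain and legal context.

    from collections import Counter

    def representation_report(records, group_key, min_share=0.25):
        """Report each group's share of the dataset and flag groups
        that fall below a minimum representation threshold."""
        counts = Counter(record[group_key] for record in records)
        total = sum(counts.values())
        return {group: (count / total, count / total < min_share)
                for group, count in counts.items()}

    # Hypothetical training records with a demographic attribute.
    records = [
        {"text": "...", "group": "A"},
        {"text": "...", "group": "A"},
        {"text": "...", "group": "A"},
        {"text": "...", "group": "A"},
        {"text": "...", "group": "A"},
        {"text": "...", "group": "B"},
    ]

    for group, (share, flagged) in representation_report(records, "group").items():
        status = "UNDERREPRESENTED" if flagged else "ok"
        print(f"group {group}: {share:.0%} of records ({status})")
        # group A: 83% of records (ok)
        # group B: 17% of records (UNDERREPRESENTED)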
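
For the fairness-metric and model-testing points, one widely used criterion is demographic parity: whether the model produces favorable outcomes at similar rates across groups. The sketch below computes it from scratch on made-up predictions and group labels, purely for illustration; in practice, libraries such as Fairlearn or AIF360 provide this and many related metrics out of the box.

    def selection_rates(y_pred, groups):
        """Fraction of favorable predictions (e.g. "hire") per group."""
        totals, positives = {}, {}
        for pred, group in zip(y_pred, groups):
            totals[group] = totals.get(group, 0) + 1
            positives[group] = positives.get(group, 0) + pred
        return {group: positives[group] / totals[group] for group in totals}

    def demographic_parity_gap(y_pred, groups):
        """Largest difference in selection rate between any two groups;
        0.0 means all groups receive favorable outcomes at the same rate."""
        rates = selection_rates(y_pred, groups)
        return max(rates.values()) - min(rates.values())

    # Hypothetical audit: binary predictions from a hiring model,
    # paired with each applicant's demographic group.
    y_pred = [1, 0, 1, 1, 0, 1, 0, 0, 0, 0]
    groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

    print(selection_rates(y_pred, groups))         # {'A': 0.6, 'B': 0.2}
    print(demographic_parity_gap(y_pred, groups))  # about 0.4 -> worth investigating

Demographic parity is only one of several competing fairness criteria (equalized odds and calibration are common alternatives), and which one is appropriate depends on the application and its context.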

Conclusion

Norman, an AI deliberately trained on disturbing data and named after a fictional psychopath, serves as a vivid reminder of the persistent issue of biased algorithms. Recognizing the potential harm caused by AI bias and discrimination, researchers and practitioners are actively developing strategies to create fairer, more equitable AI systems that benefit society as a whole.
