Psychology professor pushes back on Hinton, explains why AI can’t have maternal instincts

Geoffrey Hinton, the Nobel Prize-winning “godfather of AI,” has proposed giving artificial intelligence systems “maternal instincts” to prevent them from harming humans. Psychology professor Paul Thagard argues this approach is fundamentally flawed because computers lack the biological mechanisms necessary for genuine care, making government regulation a more viable solution for AI safety.

Why this matters: As AI systems become increasingly powerful, the debate over how to control them has intensified, with leading researchers proposing strategies that range from biologically inspired safeguards to direct regulatory oversight.

The core argument: Thagard contends that maternal caring requires specific biological foundations that computers simply cannot possess.

  • Maternal care depends on chemical mechanisms including oxytocin (the “bonding hormone”), prolactin (which triggers milk production), estrogen, progesterone, and dopamine
  • These chemicals activate neural circuits in brain areas such as the medial preoptic area (MPOA), nucleus accumbens, amygdala, and insula during pregnancy, lactation, and infant interaction
  • Current AI models run on neural networks implemented in data centers with computer chips that completely lack these biological mechanisms

What AI models themselves say: Thagard tested his hypothesis by asking ChatGPT, Grok, Claude, and Gemini about maternal care mechanisms.

  • All four models provided detailed explanations of the chemical and neural processes involved in parental care
  • Each model acknowledged that current AI systems completely lack these biological mechanisms
  • The models recognized the difference between simulating parental behavior and actually experiencing parental feelings

Alternative regulatory approach: Rather than relying on engineered emotional constraints, Thagard advocates direct government regulation built around a set of explicit prohibitions.

  • Do not allow AI systems to be fully autonomous or beyond human supervision
  • Do not allow AI systems to control humans or eradicate most human jobs
  • Do not give AI systems control over weapons, especially nuclear and bioweapons
  • Do not allow AI systems to achieve superintelligence or contribute to misinformation

The bigger picture: This debate reflects broader tensions in AI safety between those seeking technical solutions and those favoring regulatory approaches.

  • Companies developing AI are “so engaged in competing with each other to produce smarter and faster models that they cannot be trusted to avoid producing dangerous systems”
  • Most major AI companies have convinced US leadership to avoid needed legislation due to concerns about foreign competition
  • Thagard’s new book “Dreams, Jokes, and Songs” provides additional arguments for why AI models lack conscious feelings and are unlikely to acquire them
Source: “Could AI Have Maternal Instincts?”
