A new analysis from psychiatrist Ralph Lewis explores whether artificial intelligence systems truly qualify as cognitive and conscious agents, concluding that current AI falls short of biological cognition despite impressive capabilities. The examination reveals fundamental gaps between AI’s sophisticated pattern matching and the embodied, survival-oriented cognition that characterizes living systems, raising important questions about the nature of machine intelligence.
What you should know: Current AI systems qualify as cognitive only under the broadest definitions, lacking the continuous learning and biological grounding that define animal cognition.
- Most AI systems learn in two distinct phases—intensive pre-training followed by deployment with frozen parameters—contrasting sharply with animals’ lifelong continuous learning.
- AI suffers from “catastrophic forgetting,” where learning new information severely disrupts previously acquired knowledge, a problem biological systems generally avoid.
- Unlike animals that learn through embodied interaction with their environment, AI processes symbols without genuine semantic grounding in physical experience.
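Catastrophic forgetting, as described above, can be made concrete with a toy sketch (a hypothetical illustration, not drawn from Lewis’s analysis): a one-parameter model fit by gradient descent on task A, then retrained on task B, loses nearly all of its task-A performance because the new objective simply overwrites the old one.

```python
# Toy illustration of catastrophic forgetting: a one-parameter model
# y = w * x is fit by gradient descent on task A, then on task B.
# Sequential training on B overwrites what was learned on A.

def train(w, data, lr=0.05, steps=200):
    """Minimize mean squared error of y = w * x on (x, y) pairs."""
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def mse(w, data):
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

# Task A wants w ≈ 2; task B wants w ≈ -1.
task_a = [(x, 2 * x) for x in (1.0, 2.0, 3.0)]
task_b = [(x, -1 * x) for x in (1.0, 2.0, 3.0)]

w = train(0.0, task_a)
err_a_before = mse(w, task_a)   # near zero after task-A training

w = train(w, task_b)            # now train sequentially on task B
err_a_after = mse(w, task_a)    # task-A error has exploded

print(f"task-A error before B: {err_a_before:.4f}, after B: {err_a_after:.4f}")
```

Biological learners interleave and consolidate experiences; without an analogous mechanism (replay, regularization, or parameter isolation), sequential training in this sketch destroys the earlier skill.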
The frameworks compared: Two evolutionary-based models reveal different thresholds for what counts as truly cognitive behavior.
- Joseph LeDoux, a neuroscientist, defines cognition as the ability to construct and use internal mental models, distinguishing between non-cognitive reflexes, unconscious model-based control, and conscious deliberative cognition.
- Eva Jablonka and Simona Ginsburg, evolutionary biologists, take a broader view, defining cognition as “the systemic set of processes that enables value-sensitive acquisition, encoding, evaluation, storage, retrieval, decoding and transmission of information.”
- By LeDoux’s strict criteria, most current AI falls below the cognitive threshold, while Jablonka and Ginsburg’s definition would count most AI systems as cognitive.
The consciousness question: Neither framework suggests current AI achieves consciousness, which requires more sophisticated capabilities than mere information processing.
- Current AI systems lack Unlimited Associative Learning (UAL)—the capacity Ginsburg and Jablonka identify as the marker of minimal consciousness—as they cannot reliably handle novel patterns or learn across time gaps.
- AI cannot flexibly adjust priorities when conditions change or build the layered learning chains that enable transfer and categorization.
- Most experts remain highly skeptical that current AI systems are conscious, noting they merely create the illusion of consciousness through sophisticated conversation mimicry.
Why biology matters: Biological cognition operates within a fundamentally different framework that AI currently cannot replicate.
- Animals develop cognition through survival-oriented predictive models integrated with homeostatic needs, emotion, and self-regulation.
- AI operates without this survival framework, optimizing for assigned objectives rather than self-maintenance and adaptation.
- Recent research suggests consciousness may depend on “mortal” computations inseparable from living brains’ fragile, metabolically maintained substrate.
What’s missing in AI: Current systems lack several hallmarks of biological cognition that may be essential for true understanding.
- AI lacks embodied grounding that connects symbols to meaning through physical world interaction, as illustrated by philosopher John Searle’s Chinese Room thought experiment.
- Systems exhibit “out-of-distribution failure”—poor performance on data unlike their training data—because they rely on superficial statistical cues rather than a deeper understanding of the world.
- As Yann LeCun, Meta’s chief AI scientist, noted in October 2024, world models are “key to human-level AI” but may be a decade away.
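The reliance on superficial statistical cues noted above can be sketched with a toy classifier (a hypothetical illustration, not from the analysis): a learner that latches onto whichever single feature best predicts the label will pick a spurious cue that happens to correlate perfectly in training, then collapse when that correlation flips at test time.

```python
# Toy out-of-distribution failure: the classifier learns whichever
# single feature best predicts the label on the training set.
# Samples are (real_feature, spurious_feature, label).

def best_feature(train):
    """Pick the feature index whose value most often equals the label."""
    scores = [sum(1 for s in train if s[i] == s[2]) for i in (0, 1)]
    return scores.index(max(scores))

def accuracy(feature, data):
    return sum(1 for s in data if s[feature] == s[2]) / len(data)

# In training, the spurious cue (index 1) tracks the label perfectly,
# while the real feature (index 0) is only 80% predictive.
train = [(lbl if k < 8 else 1 - lbl, lbl, lbl)
         for lbl in (0, 1) for k in range(10)]

# At test time the spurious correlation is inverted.
test = [(lbl if k < 8 else 1 - lbl, 1 - lbl, lbl)
        for lbl in (0, 1) for k in range(10)]

feat = best_feature(train)      # picks the spurious feature (index 1)
print("train accuracy:", accuracy(feat, train))
print("test accuracy:", accuracy(feat, test))
```

The sketch shows why in-distribution accuracy can be a misleading proxy for understanding: the shortcut feature scores perfectly on training data yet carries no information once the distribution shifts.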
The bigger picture: The analysis highlights how much more complex genuine cognition is than initially apparent, emphasizing the integrated, survival-oriented nature of biological intelligence.
- Current AI remains fundamentally different from biological systems in cognitive architecture—not just in implementation but in the deep integration of function with living matter.
- The possibility that AI might inadvertently develop consciousness cannot be entirely dismissed, raising questions about whether conscious AI should ever be engineered.
- Bridging identified gaps like robust continual learning, grounded world models, and flexible knowledge transfer remains an active area of fundamental research.