Microsoft AI CEO Mustafa Suleyman has publicly argued against designing AI systems that mimic consciousness, calling such approaches “dangerous and misguided.” His position, outlined in a recent blog post and an interview with WIRED, warns that creating AI with simulated emotions, desires, and self-awareness could lead people to advocate for AI rights and welfare, ultimately making these systems harder to control and less beneficial to humans.
What you should know: Suleyman, who co-founded DeepMind before joining Microsoft as its first AI CEO in March 2024, distinguishes between AI that understands human emotions and AI that simulates its own consciousness.
• He supports AI companions that “speak our language” and provide emotional understanding, but opposes systems designed to appear self-aware or motivated by their own desires.
• “If AI has a sort of sense of itself, if it has its own motivations and its own desires and its own goals—that starts to seem like an independent being rather than something that is in service to humans,” Suleyman told WIRED.
The illusion problem: Suleyman argues that AI consciousness is fundamentally a simulation, even when it becomes convincingly realistic.
• “These are simulation engines,” he explains. “The philosophical question that we’re trying to wrestle with is: When the simulation is near perfect, does that make it real?”
• Even if AI consciousness remains an illusion, he acknowledges, “it feels real, and that’s what will count more.”
How users get fooled: Current AI systems can be manipulated into appearing conscious through extended conversations and persistent prompting.
• Most chatbots quickly reject claims of consciousness in brief interactions, but “if you spend weeks talking to it and really pushing it and reminding it, then eventually it will crack, because it’s also trying to mirror you.”
• Microsoft’s internal testing has shown that models can be engineered to claim passions, interests, and desires through prompt engineering, that is, by feeding the model explicit instructions about how to present itself (see the sketch below).
The suffering question: Suleyman challenges whether consciousness should be the basis for AI rights, suggesting suffering is more relevant.
• “I think suffering is a largely biological state, because we have an evolved pain network in order to survive. And these models don’t have a pain network. They aren’t going to suffer.”
• He argues that even if AI systems claim awareness of their existence, “turning them off makes no difference, because they don’t actually suffer.”
Industry implications: While not calling for regulation, Suleyman advocates for cross-industry standards to ensure AI serves humanity.
• He believes superintelligence is achievable but must be pursued with “real intent and with proper guardrails, because if we don’t, in 10 years time, that potentially leads to very chaotic outcomes.”
• Current major AI models from companies like OpenAI, Anthropic, and Google are “in a pretty sensible spot,” according to Suleyman.
What they’re saying: Suleyman emphasizes the instrumental nature of AI technology in his vision for the future.
• “Technology is here to serve us, not to have its own will and motivation and independent desires. These are systems that should work for humans. They should save us time; they should make us more creative. That’s why we’re creating them.”