Safecracking: Cambridge researchers undermine artist anti-AI defenses with new tool

University of Cambridge researchers have developed LightShed, a proof-of-concept tool that can strip away anti-AI protections from digital artwork, neutralizing defenses like Glaze and Nightshade that artists use to prevent their work from being scraped for AI training. The technology marks a significant escalation in the ongoing battle between artists trying to protect their intellectual property and AI companies seeking training data, and it could undermine the digital defenses that some 7.5 million people have downloaded to safeguard their work.

The big picture: LightShed demonstrates that current artist protection tools may provide only temporary security, as AI researchers can develop countermeasures that learn to identify and remove the digital “poison” these tools apply to artwork.

How it works: The tool operates by learning to identify the subtle pixel changes that protection tools make to artwork, then effectively “washing” them away (a rough sketch of the idea follows the list below).

  • Researchers trained LightShed using art samples with and without Nightshade, Glaze, and similar protections applied.
  • The system learns to reconstruct “just the poison on poisoned images,” identifying where digital defenses have been applied.
  • LightShed can even apply knowledge from one protection tool to defeat others it has never encountered before.
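To make the idea concrete, here is a minimal, hypothetical sketch of perturbation removal, not the authors’ actual LightShed code: a small network is trained on paired protected/unprotected images to predict the added “poison,” then subtracts its estimate from protected images. The architecture, training loop, and synthetic data are all illustrative assumptions.

```python
# Hypothetical sketch of perturbation removal (NOT the LightShed implementation).
# Assumes access to paired (clean, protected) images for training.
import torch
import torch.nn as nn

class PerturbationEstimator(nn.Module):
    """Tiny convolutional net that predicts the added 'poison' residual."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),  # output: estimated perturbation
        )

    def forward(self, x):
        return self.net(x)

def train_step(model, optimizer, protected, clean):
    """One optimization step: learn to reconstruct perturbation = protected - clean."""
    optimizer.zero_grad()
    target_poison = protected - clean
    loss = nn.functional.mse_loss(model(protected), target_poison)
    loss.backward()
    optimizer.step()
    return loss.item()

def wash(model, protected):
    """'Wash' a protected image by subtracting the estimated perturbation."""
    with torch.no_grad():
        return (protected - model(protected)).clamp(0, 1)

if __name__ == "__main__":
    model = PerturbationEstimator()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    # Synthetic stand-ins for a real dataset of protected/unprotected pairs.
    clean = torch.rand(8, 3, 64, 64)
    protected = (clean + 0.05 * torch.randn_like(clean)).clamp(0, 1)
    for step in range(100):
        train_step(model, optimizer, protected, clean)
    restored = wash(model, protected)
```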

What these tools protect against: Current artist defenses work by making imperceptible changes to artwork that confuse AI models during training (see the toy example after this list).

  • Glaze makes AI models misunderstand artistic style, for example causing them to interpret a photorealistic painting as a cartoon.
  • Nightshade makes models see subjects incorrectly, such as interpreting a cat in a drawing as a dog.
  • These “perturbations” push artwork over the boundaries that AI models use to categorize different types of images.
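For intuition, the toy snippet below shows the generic adversarial-perturbation idea: nudging pixels in the direction that raises a classifier’s score for the wrong label. It is not how Glaze or Nightshade actually work; the classifier, labels, and step size are stand-ins invented for the example.

```python
# Toy, hypothetical illustration of a perturbation pushing an image toward a
# wrong category; Glaze and Nightshade use their own, more targeted methods.
import torch
import torch.nn as nn

# Untrained stand-in classifier; pretend class 0 = "photorealistic", 1 = "cartoon".
classifier = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 2))

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # stand-in artwork
wrong_label = torch.tensor([1])  # the style we want the model to "see" instead

# Compute how the pixels should change to make the classifier favor the wrong label.
loss = nn.functional.cross_entropy(classifier(image), wrong_label)
loss.backward()

# Take one small step in that direction; epsilon keeps the change imperceptible.
epsilon = 0.01
perturbed = (image - epsilon * image.grad.sign()).clamp(0, 1).detach()

# Against a real, trained model, repeating such steps pushes the image across
# the category boundary while the pixel changes stay visually negligible.
print("original prediction :", classifier(image).argmax(dim=1).item())
print("perturbed prediction:", classifier(perturbed).argmax(dim=1).item())
```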

In plain English: Think of AI models as having invisible filing cabinets where they sort images into categories like “realistic painting” or “cartoon.” Artist protection tools slightly alter pixels to trick the AI into filing artwork in the wrong drawer, making it learn incorrect information about the art’s style or content.

Why this matters: The research exposes fundamental vulnerabilities in tools that millions of artists rely on for protection against unauthorized AI training.

  • Around 7.5 million people, many of them artists with small and medium-size followings, have downloaded Glaze to protect their work.
  • Artists face concerns that AI models will learn their style, mimic their work, and potentially eliminate their livelihoods.
  • The legal and regulatory landscape around AI training and copyright remains uncertain, making technical defenses particularly important.

What the researchers are saying: The LightShed team emphasizes they’re not trying to steal artists’ work but want to prevent false security assumptions.

  • “You will not be sure if companies have methods to delete these poisons but will never tell you,” says Hanna Foerster, the study’s lead author and a PhD student at the University of Cambridge.
  • “It might need a few more rounds of trying to come up with better ideas for protection,” Foerster added about the need for improved defenses.

The creators’ perspective: Original protection tool developers acknowledge the temporary nature of their solutions while defending their value.

  • Shawn Shan, who created both Glaze and Nightshade and was named MIT Technology Review’s Innovator of the Year, views these tools as deterrents rather than permanent solutions.
  • “It’s a deterrent,” says Shan, explaining the goal is creating enough roadblocks that AI companies find it easier to work directly with artists.
  • The Nightshade website warned the tool wasn’t future-proof before LightShed development even began.

What’s next: Researchers plan to use insights from LightShed to develop new artist defenses, including watermarks that could persist even after artwork passes through AI models (a generic watermarking illustration follows the list below).

  • The findings will be presented at the USENIX Security Symposium, a leading global cybersecurity conference, in August.
  • Foerster hopes to build defenses that could “tip the scales back in the artist’s favor once again.”
  • The research continues the cat-and-mouse game between artists and AI proponents across technology, law, and culture.
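The researchers haven’t described those future watermarks in detail. The snippet below only sketches one long-standing, generic approach, a keyed pseudorandom pattern embedded at low amplitude and detected by correlation; every function name, strength, and threshold is chosen arbitrarily for illustration and is not tied to the Cambridge team’s plans.

```python
# Generic, hypothetical watermark sketch (not the researchers' method):
# embed a keyed pseudorandom pattern at low amplitude, detect by correlation.
import numpy as np

def embed_watermark(image: np.ndarray, key: int, strength: float = 5.0) -> np.ndarray:
    """Add a low-amplitude pseudorandom pattern derived from a secret key."""
    rng = np.random.default_rng(key)
    pattern = rng.standard_normal(image.shape)
    return np.clip(image + strength * pattern, 0, 255)

def detect_watermark(image: np.ndarray, key: int, strength: float = 5.0) -> bool:
    """Estimate how much of the keyed pattern is present; flag if above half strength."""
    rng = np.random.default_rng(key)
    pattern = rng.standard_normal(image.shape)
    centered = image - image.mean()
    estimated = np.sum(centered * pattern) / np.sum(pattern * pattern)
    return estimated > strength / 2

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    original = rng.uniform(0, 255, size=(128, 128))   # stand-in grayscale image
    marked = embed_watermark(original, key=42)
    print("clean image flagged?      ", detect_watermark(original, key=42))  # expect False
    print("watermarked image flagged?", detect_watermark(marked, key=42))    # expect True
```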