Iffy ethics as eufy pays users $40 to film fake package thefts for AI training

Anker’s camera brand eufy paid users up to $40 per camera to submit footage of package theft and car break-ins to help train its AI detection systems in late 2024. When users lacked real criminal activity to film, eufy explicitly encouraged them to stage fake thefts, suggesting they position themselves to be captured by multiple cameras simultaneously for maximum efficiency.

Why this matters: The approach highlights the creative—and potentially problematic—methods companies use to gather AI training data, and it raises the question of whether staged footage can genuinely stand in for authentic criminal behavior.

How the program worked: Users could earn $2 for each approved video clip showing package theft or attempted car break-ins, with a maximum of 10 videos per criminal activity type per camera.

  • The company solicited these “donations” through its community forums as part of efforts to improve AI recognition of suspicious behavior.
  • When real crime footage wasn’t available, eufy actively encouraged users to fake criminal acts, stating: “Don’t worry, you can even create events by pretending to be a thief and donate those events.”
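The "up to $40 per camera" figure follows directly from the program's terms. A quick back-of-envelope check, assuming both listed activity types (package theft and car break-ins) counted toward the same camera's cap:

```python
# Payout figures as reported: $2 per approved clip, at most 10 clips
# per criminal activity type per camera, two activity types.
PER_CLIP = 2
MAX_CLIPS_PER_TYPE = 10
ACTIVITY_TYPES = 2  # package theft, attempted car break-ins

max_per_camera = PER_CLIP * MAX_CLIPS_PER_TYPE * ACTIVITY_TYPES
print(max_per_camera)  # → 40, matching the "$40 per camera" cap
```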

The technical rationale: Machine learning systems focus on visual patterns rather than intent, making staged criminal behavior theoretically equivalent to authentic footage for training purposes.

  • Eufy suggested users could “complete this quickly” by having “one act captured by your two outdoor cameras simultaneously, making it efficient and easy.”
  • The approach reflects AI’s pattern-matching nature—these systems excel at recognizing visual similarities but don’t truly understand the difference between real and staged criminal behavior.
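The point about pattern matching can be made concrete. A minimal sketch (hypothetical, not eufy's actual pipeline): a classifier's training update depends only on the input features and the label, so a staged theft that produces the same frames as a real one yields an identical training signal.

```python
import math

def logistic_update(weights, features, label, lr=0.1):
    """One gradient step of logistic regression on a single example.

    Note: the update uses only the features and the label --
    nothing about whether the clip was real or staged.
    """
    z = sum(w * x for w, x in zip(weights, features))
    pred = 1 / (1 + math.exp(-z))
    return [w - lr * (pred - label) * x for w, x in zip(weights, features)]

weights = [0.0, 0.0, 0.0]

# Hypothetical feature vectors: a real theft clip and a staged clip that
# happens to look identical to the model.
real_theft   = ([0.9, 0.2, 0.7], 1)
staged_theft = ([0.9, 0.2, 0.7], 1)

# Identical pixels + identical label => identical model update.
assert logistic_update(weights, *real_theft) == logistic_update(weights, *staged_theft)
```

This is why staged footage is theoretically usable for training: the loss function has no input for intent. Whether it generalizes as well as authentic footage is a separate, open question.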

Potential concerns: While the crowdsourcing method appears cost-effective, questions remain about whether systems trained on authentic footage might perform better or produce fewer false positives.

  • The reliance on staged scenarios could potentially impact the AI’s ability to accurately detect real criminal behavior in varied real-world conditions.
  • The effectiveness of this training approach remains unclear: eufy says its AI has improved over the past six months, but users have yet to fully evaluate those gains.
