Emergency medical services face a pivotal moment as artificial intelligence transforms everything from patient care to administrative workflows. At the California Ambulance Association Annual Conference in Monterey, six industry experts gathered for an unconventional panel discussion that revealed both the promise and perils of AI adoption in emergency healthcare.
The session, dubbed “Six Experts – One Weird AI Showdown,” featured a unique format: no sales pitches or product demonstrations, just rapid-fire insights delivered in two-minute bursts after panelists buzzed in to speak. The diverse group included Brendan Cameron from ABC, Christian Carrasquillo from Fast Medical AI, Dave O’Rielly from Traumasoft, Nidhish Dhru from Huly, Jonathan Feit from Beyond Lucid Technologies, and Mike Taigman from FirstWatch, a healthcare analytics company.
Despite the playful format, serious themes emerged about how emergency medical services—the ambulance crews, paramedics, and emergency medical technicians who provide pre-hospital care—should navigate AI adoption responsibly.
The panel’s most urgent message centered on organizational preparedness. Emergency medical service agencies must develop AI capabilities through internal expertise, skilled hiring, or trusted consultants who can evaluate solutions for their specific operational needs.
Without this expertise, organizations risk being left behind as competitors leverage AI for advantage. Jonathan Feit underscored the point by noting the recent creation of a federal Chief AI Officer role, a sign that even government agencies recognize the need for specialized AI oversight.
The challenge extends beyond basic technical understanding. Risk and compliance managers in many EMS organizations lack the specialized knowledge to properly evaluate AI adoption strategies. As one panelist noted, this responsibility shouldn’t fall to “the best video game player who happens to be a medic.”
The most heated exchanges focused on whether AI would eliminate jobs in emergency medical services. Panelists offered contrasting perspectives on this fundamental concern.
Nidhish Dhru delivered a stark warning about manual tasks: “Anything you do manually today—scanning, attaching, pushing paper—that job is gone. Not today, maybe tomorrow, but gone.” This applies particularly to administrative roles involving billing, data entry, and document processing that characterize much of EMS operations.
However, other experts argued that chronic understaffing in emergency medical services means AI will likely reallocate human resources rather than eliminate positions entirely. The technology could free up personnel to focus on clinical care and patient interaction—areas where human judgment and empathy remain irreplaceable.
Brendan Cameron reframed the job threat: “You won’t lose your job to AI. You’ll lose it to someone who knows how to use AI.” This perspective suggests that AI literacy, rather than AI itself, represents the real competitive differentiator for individual careers and organizations.
When pressed for specific timelines, the experts agreed on a measured rollout across different timeframes. Minimal changes are expected within the next year as organizations focus on foundational planning and pilot programs.
The three-year horizon shows more promise for meaningful transformation. Administrative processes and continuous quality improvement—the systematic approach healthcare organizations use to enhance patient care and operational efficiency—may see significant AI augmentation during this period.
By the five-year mark, panelists predicted broader systemic changes throughout emergency medical services, though none forecast the emergence of “superintelligence” or fully autonomous systems in emergency care.
Jonathan Feit offered a contrarian perspective on AI’s impact, suggesting that misuse of AI tools might actually increase EMS workload: “People are already ingesting things because ‘ChatGPT said so.’ That’s job security.” This refers to situations where patients follow AI-generated medical advice that leads to emergency situations requiring professional intervention.
The discussion identified specific areas where AI can provide immediate value to emergency medical services:
Clinical decision support for differential diagnosis involves AI systems that help emergency responders identify potential medical conditions by analyzing patient symptoms, vital signs, and medical history. This technology can suggest possible diagnoses for paramedics to consider, particularly valuable in complex cases or when treating unfamiliar conditions.
Revenue capture through intelligent billing uses AI to automatically identify billable services and ensure proper coding for insurance reimbursement. Emergency medical services often lose significant revenue due to incomplete or incorrect billing documentation, making this a high-impact application.
Data quality monitoring and gap identification employs AI to review patient care reports and operational data, flagging inconsistencies, missing information, or documentation errors that could impact patient care or regulatory compliance.
Process automation for repetitive administrative tasks eliminates manual work like scheduling, inventory management, and routine reporting, allowing staff to focus on patient care rather than paperwork.
Organizational AI governance councils provide structured oversight for AI adoption, ensuring implementations align with clinical standards, regulatory requirements, and organizational goals while managing associated risks.
Real-time patient-specific insights surface critical information like advance directives—legal documents specifying a patient’s healthcare preferences—or special needs alerts such as autism spectrum disorder considerations that help responders provide more appropriate care.
The panel identified cybersecurity as a more immediate threat than AI autonomy concerns. Mike Taigman dismissed fears about AI systems escaping human control, emphasizing that malicious actors using AI to breach healthcare data systems pose the real danger.
Christian Carrasquillo delivered a sobering warning about current practices: “HIPAA violations from AI aren’t an ‘if.’ They’re a ‘when’—and it’s probably already happened.” HIPAA, the Health Insurance Portability and Accountability Act, establishes strict privacy protections for patient health information.
Many healthcare providers are unknowingly copying patient data into free AI tools without realizing this information remains stored on external servers, potentially violating federal privacy laws. This practice creates significant legal and financial risks for emergency medical service organizations.
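One partial mitigation is to scrub obvious identifiers before any text leaves the organization. The sketch below is a minimal, hypothetical illustration using a few regular expressions; real HIPAA de-identification covers eighteen identifier categories and requires far more than pattern matching.

```python
import re

# Hypothetical sketch: redact a few obvious identifiers before text is sent
# to an external tool. NOT a substitute for proper HIPAA de-identification.
PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),          # social security numbers
    (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[PHONE]"),  # US phone numbers
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"), "[DATE]"),     # dates like 3/14/1957
]

def scrub(text: str) -> str:
    """Replace matched identifier patterns with placeholder tokens."""
    for pattern, token in PATTERNS:
        text = pattern.sub(token, text)
    return text

note = "Pt DOB 3/14/1957, callback 831-555-0123, SSN 123-45-6789."
print(scrub(note))
# → Pt DOB [DATE], callback [PHONE], SSN [SSN].
```

The point is not that a script like this makes free AI tools safe, but that organizations currently lack even this basic layer between patient records and external servers.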
The legal framework surrounding AI liability remains underdeveloped, leaving it unclear whether responsibility for errors falls on technology vendors, individual medical professionals, or healthcare organizations. This uncertainty complicates risk management and insurance considerations for EMS agencies considering AI adoption.
The discussion revealed nuanced thinking about AI accuracy standards in emergency medical care. While billing mistakes can be corrected retroactively, clinical decisions involving life-and-death situations demand much higher reliability thresholds.
Jonathan Feit emphasized this critical distinction: “There are parts of our profession that have a zero margin for error. There is no option. You can’t kill them again.” This reality requires context-specific accuracy standards and mandatory human oversight for high-stakes clinical applications.
Interestingly, several panelists suggested that AI-generated patient care documentation might actually improve upon current standards. Mike Taigman noted that AI narratives are “unlikely to be worse” than existing EMS documentation, which often suffers from inconsistency and incomplete information.
The panel concluded by sharing their deepest concerns about AI adoption in emergency medical services. These worries extend beyond technical implementation to fundamental industry dynamics.
Several experts expressed concern about power concentration among major technology companies that control AI development and data access. This consolidation could limit healthcare organizations’ autonomy and increase dependence on external providers for critical operational functions.
Accountability dilution represents another significant risk, as individuals and organizations might defer responsibility to AI systems with phrases like “it was the AI.” This attitude could undermine the professional judgment and personal responsibility that define quality emergency medical care.
Premature adoption without proper expertise poses immediate dangers. Some EMS agencies are already using AI models to guide staffing decisions without understanding the technology’s limitations or potential biases, potentially compromising patient care and operational effectiveness.
The industry’s tendency to chase emerging technologies before mastering existing tools also drew criticism. As Brendan Cameron observed: “We all want the slam dunk with AI, but in EMS, we haven’t even learned to dribble, pass or make the layup with the tech we already have.”
The expert panel delivered clear guidance for emergency medical service organizations considering AI adoption. Rather than waiting for technology vendors to define possibilities, agencies should proactively develop internal capabilities and governance structures.
Organizations should invest in training their quality assurance and continuous improvement staff to understand AI capabilities and limitations. Designating dedicated AI leadership ensures adoption proceeds safely, compliantly, and beneficially rather than haphazardly.
Policy development, staff training, validation procedures, and compliance frameworks cannot wait for AI technology to mature. These foundational elements require immediate attention as AI tools are already available and evolving rapidly, often outpacing regulatory frameworks and organizational comfort levels.
The panel’s consensus was clear: artificial intelligence will transform emergency medical services, but success depends on whether individual organizations and the broader industry steer that transformation proactively or simply react to changes imposed by external forces. The time for preparation is now, before AI adoption becomes a competitive necessity rather than a strategic choice.