Artificial intelligence is enabling a new form of medical impersonation: fake videos of real doctors endorsing dubious supplements, unproven devices, and nonexistent treatments. The problem has grown so brazen that it prompted the American Medical Association to declare it a public health and safety crisis and demand federal action.
The flood of synthetic medical authority on social media poses multiple threats. Patients harmed by counterfeit products recommended by AI-generated physician likenesses could sue the actual doctors whose faces were stolen. Hospitals face the prospect of fabricated diagnostic images planted in their systems. And the erosion of physician credibility undermines a profession where trust itself is therapeutic currency.
CNN medical correspondent Sanjay Gupta became an unlikely poster child for the problem when deepfakes using his image to promote an alleged Alzheimer's breakthrough grew sophisticated enough to fool his own acquaintances. "What was different this time around was just the quality of these ads," Gupta said. "This was really quite stunning."
AMA CEO John Whyte framed the issue as both widespread and underreported. "It's becoming more mainstream. Everyone knows someone who this has impacted," Whyte said. "It's probably occurring more than we hear because people are embarrassed by it."
Lawmakers are beginning to respond. California has already mandated disclosure labels on AI-generated ads and is advancing legislation to explicitly ban doctor deepfakes. Pennsylvania's medical board took action this week by ordering a technology company to stop its chatbot from impersonating a licensed physician.
Beyond individual impersonation lies a more insidious threat. Researchers at the Icahn School of Medicine at Mount Sinai tested clinicians' ability to spot synthetic X-rays and other fabricated diagnostic images. The results were troubling: most radiologists failed to identify the fakes, and even after being warned to watch for telltale signs such as unnatural soft tissue and artificially smooth bone surfaces, one-quarter still missed them.
Such manipulated images could be weaponized to defraud insurers, fuel litigation, or worse. "There is also a significant cybersecurity risk if hackers were to gain access to a hospital's network and inject synthetic images to manipulate patient diagnoses or cause widespread clinical chaos," said Mickael Tordjman, the study's lead author.
The AMA is now calling for guidance to help targeted physicians respond to deepfake attacks and to clarify how malpractice and cyber liability insurance can protect them. The organization is also pressing tech platforms to remove impersonations faster and urging modernized identity protection laws at the federal and state levels.
Whyte summed up the fundamental problem: "We shouldn't have to make the public detectives to determine whether something's not a deepfake."