In a troubling development, health misinformation is spreading on social media through deepfake videos that manipulate the likenesses of real professionals. Among those affected is Professor David Taylor-Robinson, a health inequalities expert at Liverpool University. His likeness has appeared in wellness-guru-style videos promoting unproven health products. The videos were produced by the US-based supplement company Wellness Nest and its sister company in the UK.
The fact-checking non-profit Full Fact uncovered hundreds of these misleading deepfake videos. They featured impersonated versions of several prominent doctors and influencers, warning audiences about supposed health problems and steering them towards products such as probiotics and Himalayan shilajit. Taylor-Robinson only learned of the misuse of his image when a former student and now colleague, Graham Ambrose, brought it to his attention.
In one video, a deepfaked Taylor-Robinson discussed a supposed menopause-related side effect, nicknamed “thermometer leg.” That deepfake drew on footage from a 2017 appearance at which he spoke, repurposing it to discuss menopause and push the same products. Many of the videos trafficked in medical misinformation through impersonated doctors; some mocked Black women and used ageist and misogynistic language while purporting to discuss menopause.
The harm caused by these deepfakes goes beyond impersonation: they pose a genuine threat to public health and safety. The Liberal Democrats have voiced their concerns and are calling for urgent action to eliminate AI-generated deepfakes that pose as medical professionals. The party deserves credit for recognizing the importance of promoting clinically approved sources of information to counter the misinformation spread by these fake videos.
Liberal Democrat health spokesperson Helen Morgan said it was “shocking” that the power of AI technology was being exploited in this way. She stated, “From fake doctors to bots that encourage suicide, AI is being used to prey on innocent people and exploit the widening cracks in our health system.” She highlighted the dangers of industry self-regulation, asking why impersonating a medical doctor in person is treated as a crime while digital impersonators are allowed to operate freely.
Full Fact’s Leo Benedictus, who was part of the team that exposed these deepfakes, describes the tactic as “sinister and alarming.” He noted that when someone well respected, or with a large audience, appears to endorse questionable supplements, it creates an environment where misinformation can thrive.
Prof Taylor-Robinson reported the videos himself, and he is understandably frustrated with the current state of affairs. He remarked, “Initially, they said some of the videos violated their guidelines but some were fine. That was absurd and weird because I was in all of them and they were all deepfakes. It was a faff to get them taken down.” He voiced his frustration that others have been profiting off his work, and at the health misinformation that has followed.
TikTok acted after months of growing concern about the spread of deepfake videos. It confirmed that it removed the content impersonating Taylor-Robinson for violating its community guidelines on impersonation and harmful misinformation. A spokesperson for TikTok said, “Harmfully misleading AI-generated content is an industry-wide challenge, and we continue to invest in new ways to detect and remove content that violates our community guidelines.”
Unfortunately, even with relatively new rules and legislation in place, stopping the spread of dangerous deepfake content remains an uphill battle. Prof Taylor-Robinson expressed his concern over the consequences of this kind of abuse: “I didn’t feel desperately violated, but I did become more and more irritated at the idea of people selling products off the back of my work and the health misinformation involved.”
The need for action has never been greater. Millions of people in vulnerable populations are at heightened risk from health misinformation amplified by emerging technologies such as deepfakes. As regulators and platforms grapple with the implications of artificial intelligence in this context, there is a pressing need for accountability and robust measures to protect public health.
