Experts are raising concerns about the growing trend of scammers using “deepfakes” of famous doctors on social media to promote fraudulent health products. These AI-generated videos mimic well-known health professionals like Dr. Hilary Jones, Dr. Rangan Chatterjee, and the late Dr. Michael Mosley, exploiting their trusted reputations to sell untested remedies for ailments such as high blood pressure and diabetes. Some videos have even falsely claimed that legitimate medications like metformin could be dangerous, urging users to buy alternative treatments instead.
The sophisticated technology behind deepfakes allows scammers to create highly convincing digital replicas of these doctors, making it difficult for viewers to distinguish between genuine health advice and fake claims. These videos are most commonly shared on platforms like Facebook and Instagram, targeting vulnerable populations, particularly older individuals, who may not be familiar with the latest advances in AI manipulation. The deception not only risks financial loss but also poses significant health risks to those who might opt for these unverified treatments instead of proper medical care.
AI experts, including Henry Ajder, note that deepfakes have become a powerful tool in the hands of fraudsters, especially with the rise of easily accessible AI tools for cloning voices and faces. Despite efforts by platforms like Meta (the parent company of Facebook and Instagram) to identify and remove such content, the problem persists. Experts warn that even when a fraudulent video is taken down, it often resurfaces quickly under a different name, creating a frustrating game of “cat and mouse.”
Medical professionals like Dr. Hilary Jones have voiced their frustration and concern over this misuse of their identities. In response, some have hired social media specialists to track and remove these deepfakes. However, as the technology becomes more advanced, the battle against these scams continues to intensify.
Authorities and AI experts are urging people to be vigilant and cautious, offering tips to spot deepfakes, such as looking for unnatural eye movements, discrepancies in facial expressions, or inconsistent audio-visual sync. In the face of this growing threat, it's essential for consumers to verify the authenticity of health information and rely on trusted sources for medical advice.