Man Hospitalized After Following AI Advice on Salt Substitution

A 60-year-old man found himself in the emergency room after seeking advice from ChatGPT on how to replace table salt. The substitute he adopted led to hallucinations and paranoia that required medical intervention, an incident that highlights the risks of relying on artificial intelligence for health-related guidance.

In a case report published in 2025 in Annals of Internal Medicine: Clinical Cases, doctors from the University of Washington detailed the man’s ordeal. They emphasized that while AI tools can provide information, they are not always reliable in medical contexts. The authors, Audrey Eichenberger, Stephen Thielke, and Adam Van Buskirk, wrote, “It is important to consider that ChatGPT and other AI systems can generate scientific inaccuracies, lack the ability to critically discuss results, and ultimately fuel the spread of misinformation.”

The patient’s journey began when he visited an emergency room, convinced that his neighbor was poisoning him. Over the first 24 hours of his hospital stay, he exhibited increasing paranoia along with visual and auditory hallucinations, which ultimately led to his involuntary admission to a psychiatric unit. Once stabilized, he disclosed to medical staff that he had been experimenting with his diet after researching the health risks associated with sodium chloride.

Instead of following standard advice to simply reduce sodium intake, the man set out to eliminate chloride from his diet entirely and turned to ChatGPT for suitable substitutes. The chatbot suggested sodium bromide, a chemical typically used in water treatment and veterinary medicine. Although it resembles table salt, it is not safe for human consumption.

“Unfortunately, we do not have access to his ChatGPT conversation log, and we will never be able to know with certainty what exactly the output he received was,” the authors noted. They further explained that when they consulted the same AI model, it also mentioned bromide as a potential replacement for chloride, but failed to provide any health warnings or context.

As a result of replacing sodium chloride with sodium bromide over three months, the patient developed symptoms consistent with bromism, a condition arising from excessive bromide consumption. His bromide levels were measured at an alarming 1,700 mg/L, far exceeding the normal range of 0.9 to 7.3 mg/L.

During his three-week hospital stay, doctors monitored his electrolytes and corrected deficiencies in nutrients such as vitamin C and B12. The case has renewed attention to bromism, a condition that had largely disappeared after the U.S. Food and Drug Administration phased bromide out of over-the-counter medications decades ago. The researchers noted that bromide still appears in some unregulated dietary supplements, which could contribute to the condition’s reappearance.

The authors concluded that while AI tools have the potential to bridge gaps in health communication, they also pose significant risks, particularly when users lack the necessary context to interpret the information accurately. They stated, “As the use of AI tools increases, providers will need to consider this when screening for where their patients are consuming health information.”

In response to this incident, OpenAI, the company behind ChatGPT, reiterated that their platform is not intended for medical advice. They affirmed that their terms explicitly state ChatGPT should not be used for treating health conditions, and they are implementing measures to encourage users to consult healthcare professionals for guidance.
