How can we use AI to improve our own health behavior? - opinion

Artificial intelligence can increase the possibility of false medical information and conspiracy theories being developed and distributed.

THE WRITER speaks at a Democracy Day event at Reichman University, last year. Artificial intelligence increases the possibility of false medical information and conspiracy theories being developed and distributed, she cautions.
(photo credit: Courtesy, Erga Atad)

Senior AI figures have recently warned that artificial intelligence (AI) poses an existential threat to humanity. The warning highlights the relationship between artificial and human intelligence, a relationship that is particularly relevant to persuasion, message design, and their influence on health behavior. Already today, it is possible to generate health recommendations and messages tailored to an individual's health profile.

AI, the star of the COVID-19 crisis, is facilitating the shift from reactive to proactive medicine and improving decision-making processes. However, artificial intelligence is not immune to errors and biases, including ethnic and gender biases. It serves as an auxiliary tool for medical teams, but it cannot replace human intelligence.

The use of artificial intelligence as a source of medical information, however, may create an uncontrolled flood of information containing both intentional and unintentional errors. In other words, AI increases the possibility of false medical information and conspiracy theories being developed and distributed. Our study with Dr. Itamar Netzer, Dr. Karen Landsman, and other colleagues from Midaat and the Technion found that belief in conspiracy theories affected parents' hesitancy toward COVID-19 vaccination.

This danger is significant because of users' limited attention and the difficulty of tracing the sources of the information that artificial intelligence draws on, compounded by the shift from active to passive searching for and processing of information.

Limitations of AI in medicine

Considering this, we need to ask ourselves: How can we develop and encourage critical thinking in the era of artificial intelligence? Can artificial intelligence identify and deal with false medical information by itself? How can we balance the quantity of information with its quality?

Artificial intelligence vs. humanity. (credit: WALLPAPER FLARE)

A possible solution might be to improve interpersonal communication among healthcare teams. Exposure to both reliable and false medical information may encourage more consultation with medical teams.

In Israel, for instance, we recently saw an outbreak of whooping cough among babies and adults, particularly in the ultra-Orthodox sector. To handle the outbreak, Sari Natan, who holds a PhD in biology and is a cofounder of Midaat, believes nurses need tools that improve interpersonal communication so they can deliver complex medical information in simple language.

Nurses in community health clinics play an important role in providing information and guidance to new parents, and in raising awareness of adults' social responsibility to vaccinate in order to protect babies. According to a study I conducted with Dr. Jonathan Cohen, certain changes in body language, such as direct eye contact, may evoke a sense of credibility in what we convey.

Moreover, as we have learned from TikTok challenges like the “roll-up,” emotions and sensory experience are important components of persuasion and message design. Accordingly, a recent study compared responses from artificial intelligence with those of family doctors to medical questions. The AI responses evoked a greater sense of empathy and higher satisfaction with the quality of the medical information than the doctors' responses did.


Empathy created by AI could enhance the digital transmission of reliable messages and foster resilience among patients. Yet is AI empathy an advantage or a hindrance in distinguishing between trustworthy and false information, especially when the latter tends to evoke negative emotions?

In conclusion, the relationship between artificial intelligence and human intelligence raises many questions. AI helps shape persuasive messages and health recommendations, resulting in better decision-making. However, human judgment and interpersonal communication remain crucial in dealing with complex medical information and exceptional cases. There is also the question of how positive emotions, such as empathy, should be used to deliver messages and to distinguish between trustworthy and false information.

The writer is a lecturer, researcher, and communication consultant specializing in persuasion, Reichman University, and a member of Midaat, an NGO for a healthier Israeli society.