WHO: Caution in the use of apps like ChatGPT in healthcare

The World Health Organization (WHO) has called for caution in the use of applications such as ChatGPT in the health field

In a written statement, WHO noted that the use of artificial intelligence in healthcare, including for tasks such as disease diagnosis, has become widespread.

The statement emphasized that large language models (LLMs) such as ChatGPT and Google Bard should be used with caution: while they can help diagnose disease in settings with inadequate resources, they also carry risks.

The statement emphasized that WHO is “enthusiastic” about using LLMs to support health efforts, but noted that the rapid adoption of untested systems can lead to errors and harm patients.

The statement said there were concerns that LLMs were not being applied consistently, and that this could also erode trust in artificial intelligence.

The statement pointed to the need for diligence in using these technologies in safe, effective, and ethical ways, and warned that the data used to train artificial intelligence could cause it to produce misleading or inaccurate information.

The statement emphasized that LLMs can be misused to produce and disseminate highly persuasive disinformation in the form of text, audio, or video content that is difficult for the public to distinguish from reliable health information.
