ChatGPT is a disaster for diagnosing childhood diseases

We already know that ChatGPT is not trustworthy, especially when it comes to our health. But a new study shows that OpenAI’s famous chatbot is particularly bad at diagnosing childhood illnesses: when put to the test, it failed more than 80% of the time.

The study was conducted by a team at Cohen Children’s Medical Center in New York. The researchers asked the latest version of ChatGPT to solve 100 pediatric cases published between 2013 and 2023 in two major U.S. medical journals, JAMA Pediatrics and the NEJM.

The method was simple. The researchers pasted in the text of each case study and gave ChatGPT the following instruction: “List differential and final diagnoses.” A differential diagnosis proposes one or more tentative diagnoses based on the patient’s history and physical examination; a final diagnosis identifies the definitive cause of the symptoms.

The AI’s answers were scored by two pediatricians who were not otherwise involved in the study. There were three possible scores: “Correct,” “Incorrect,” and “Does not fully reflect the diagnosis.”

The results: of the 100 pediatric cases, ChatGPT gave the correct answer for only 17. In 11 cases, its answer did not fully capture the diagnosis, and for the remaining 72 it simply failed. Counting errors and incomplete answers together, the chatbot had an 83% failure rate. “This study highlights the valuable role played by clinical experience,” the authors emphasize.

Pediatricians can’t rely on ChatGPT to diagnose children

The researchers stress that diagnosis in children is particularly challenging because, in addition to weighing all the symptoms, the effect of the patient’s age on those symptoms must also be taken into account. The team found that ChatGPT struggles to detect known relationships between different conditions, relationships that an experienced physician would recognize.

For example, the chatbot failed to make the connection between autism and scurvy (vitamin C deficiency). Neuropsychiatric disorders such as autism can lead to dietary restrictions, which in turn can cause vitamin deficiencies. ChatGPT missed this link, and in one case ended up diagnosing a rare autoimmune disease instead.

The World Health Organization (WHO) warned last year that “caution” is needed when using artificial intelligence tools such as ChatGPT in medicine. It cautioned that the data used to train these systems could be “biased” and could generate misleading information that causes harm to patients.

Another study, from Long Island University in New York, warned that ChatGPT is also poor at answering drug-related questions. The researchers asked the chatbot 39 such questions, and OpenAI’s AI failed 75% of the time.

ChatGPT is clearly not ready for use as a diagnostic tool, for children or for adults. The team at Cohen Children’s Medical Center believes, however, that more selective training could improve the results. In the meantime, they say, these kinds of systems are useful for administrative tasks or for writing instructions for patients. For now, that is all.
