February 27, 2024

Taylor Daily Press


ChatGPT-4 regularly outperforms doctors in the diagnostic process – ICT and Health


The study was based on a national survey in which more than 550 doctors used probabilistic reasoning in five medical cases. Their results were then compared to those of ChatGPT-4. The research specifically focused on ChatGPT-4's capabilities in estimating probabilities in the diagnostic process.

Diagnostic test results

In the medical field, doctors often face the challenge of determining how likely it is that a disease is present, based on a patient's symptoms and the results of diagnostic tests. These estimates require probabilistic reasoning: clinicians must weigh how likely a diagnosis is both before and after testing. In practice, errors in this process can lead to overtreatment, unnecessary tests, and unnecessary medication use.
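The probability update described here is Bayes' rule applied to a pre-test probability and a test's sensitivity and specificity. A minimal sketch in Python (the prevalence and accuracy figures are illustrative assumptions, not values from the study):

```python
def post_test_probability(pre_test: float, sensitivity: float,
                          specificity: float, test_positive: bool) -> float:
    """Update a disease probability with a test result via Bayes' rule."""
    if test_positive:
        # P(disease | positive) = sens*p / (sens*p + (1 - spec)*(1 - p))
        true_pos = sensitivity * pre_test
        false_pos = (1 - specificity) * (1 - pre_test)
        return true_pos / (true_pos + false_pos)
    # P(disease | negative) = (1 - sens)*p / ((1 - sens)*p + spec*(1 - p))
    false_neg = (1 - sensitivity) * pre_test
    true_neg = specificity * (1 - pre_test)
    return false_neg / (false_neg + true_neg)

# Assumed numbers: 10% pre-test probability, a 90% sensitive / 80% specific test
print(round(post_test_probability(0.10, 0.90, 0.80, test_positive=True), 3))   # 0.333
print(round(post_test_probability(0.10, 0.90, 0.80, test_positive=False), 3))  # 0.014
```

With these assumed numbers, a positive result raises a 10% suspicion to about 33%, while a negative result lowers it to about 1.4%. This is the kind of post-test estimate the surveyed doctors and ChatGPT-4 were both asked to produce.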

"People struggle with probabilistic thinking, which in practice requires making decisions based on estimated probabilities," explains Dr. Adam Rodman, internist and researcher at BIDMC. Probabilistic reasoning is one of many components of diagnosis, an incredibly complex process that draws on many cognitive strategies.

ChatGPT-4 reasons well

To address this challenge, the BIDMC team explored the potential of ChatGPT-4 to support clinicians in making diagnoses. They used a national survey in which more than 550 doctors applied probabilistic reasoning to five medical cases, including patients with symptoms of pneumonia, breast cancer, coronary artery disease, and urinary tract infection.

The same cases and associated symptoms were then submitted to ChatGPT-4, and the model was asked to estimate the probability of each diagnosis based on the patient data. Next, the results of diagnostic tests, such as X-rays, mammograms, stress tests, and urine samples, were fed into the model. Based on that additional data, ChatGPT-4 then updated its estimates.
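Clinicians often perform this kind of update in odds form with likelihood ratios, one common framing of the task ChatGPT-4 was given. A sketch with made-up likelihood ratios (the pre-test probability and LR values below are assumptions for illustration, not figures from the survey):

```python
def update_with_lr(pre_test_prob: float, likelihood_ratio: float) -> float:
    """Odds-form Bayes' rule: post-test odds = pre-test odds x likelihood ratio."""
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Assumed example: 20% suspicion of pneumonia, with a chest X-ray whose
# positive likelihood ratio is 5.0 and negative likelihood ratio is 0.25
print(round(update_with_lr(0.20, 5.0), 2))    # after a positive X-ray: 0.56
print(round(update_with_lr(0.20, 0.25), 3))   # after a negative X-ray: 0.059
```

The design choice is a matter of convenience: multiplying odds by a likelihood ratio is equivalent to the sensitivity/specificity form of Bayes' rule, but chains more easily across successive test results.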


More accurate analyses

The study shows that ChatGPT-4 was highly accurate in estimating diagnostic probabilities when test results were negative; in these cases, the model consistently outperformed the human doctors. With positive test results, however, ChatGPT-4's performance was more variable: in some cases the model was more accurate than the doctors, while in others it achieved similar results.

Dr. Rodman notes that people sometimes assume that, despite a negative test result, there is still a substantial risk that a disease is present. This can lead to overtreatment, extra tests, and too many medications. This is where ChatGPT-4 can help, as the chatbot proved able to provide more accurate estimates in these situations.

The research underscores the potential role of AI, such as ChatGPT-4, as a valuable clinical tool that can support clinicians, especially in situations where diagnostic tests yield negative results. The findings also provide food for thought about the future of healthcare and how AI can be integrated into medical practice.

Artificial intelligence capabilities are growing

Discussion about using large language models such as ChatGPT to aid diagnosis, prevention, and treatment is now intense internationally. In practice, AI is proving both useful and fallible, yet doctors see growing opportunities to apply it in a range of fields.

For example, an Israeli hospital recently became the first in the world to implement a special version of ChatGPT to help triage patients. The Mayo Clinic in the United States is running its own AI experiments, including in the training of doctors. There are also many other smart innovations and pilots with AI at the Mayo Clinic, such as incorporating AI into the analysis of electrocardiograms (ECGs). All of these examples underscore the great potential AI can have in healthcare.


Artificial intelligence as a co-pilot

Despite AI's strong growth curve, it remains important to be aware of the risks. The lack of transparency and consistency in tools like ChatGPT, for example, is a known problem. According to the researchers at BIDMC, it is important to keep approaching this new technology critically and, above all, to see it as a supporting tool rather than a replacement for human expertise. In the experts' view, AI can work well as a co-pilot, providing support for diagnostics among other things, but it should not take over the controls.
