The artificial-intelligence chatbot ChatGPT has surged in popularity in recent months thanks to conversations that feel remarkably human: users can type in any question and the system generates a response based on the information it was trained on.

Millions of users already rely on it to help with programming tasks, write emails, translate texts or, occasionally, ask about more sensitive topics, such as medical questions. Although it is generally correct in its answers, many experts have warned that relying completely on the information it provides carries risks.

For example, a study by the University of Maryland School of Medicine (United States), published in the scientific journal ‘Radiology’, concluded that ChatGPT provides correct information about breast cancer screening most of the time, but that in other cases the information is “inaccurate” or “even fictitious”.

The researchers asked ChatGPT 25 questions about breast cancer screening advice, submitting each one on three separate occasions, since the chatbot sometimes varies its response each time a question is asked.

Three radiologists specializing in mammography evaluated the responses and found that they were adequate for 22 of the 25 questions. However, one answer was based on outdated information, and two other questions received inconsistent answers that varied significantly each time the same question was asked.

“We found that ChatGPT answered the questions correctly 88% of the time, which is amazing. It also has the added benefit of summarizing the information in an easily digestible form so that it’s easily understood by consumers,” said Paul Yi, one of the researchers behind the study.

ChatGPT correctly answered questions about breast cancer symptoms, who is at risk, and questions about cost, age, and frequency recommendations for mammograms.

The downside is that its answers are not as comprehensive as what a person would typically find in a Google search. “ChatGPT offered only one set of recommendations on breast cancer screening, issued by the American Cancer Society, but did not mention other advice,” said study lead author Dr. Hana Haver.

For example, ChatGPT provided an outdated answer about planning a mammogram around the COVID-19 vaccination. The advice to delay a mammogram for four to six weeks after receiving a COVID-19 vaccine was changed in February 2022. It also gave “inconsistent” answers to questions about breast cancer risk and where one might get a mammogram.

“In our experience, ChatGPT sometimes fabricates fake journal articles or health consortia to support its claims. Consumers should be aware that these are new and unproven technologies and should continue to trust their physician, not ChatGPT, to advise them,” Yi said.

More examples: ChatGPT on cirrhosis and liver cancer

Another study, this one carried out by researchers from Cedars-Sinai (United States), pointed out that ChatGPT can help improve the health outcomes of patients with cirrhosis and liver cancer, providing easy-to-understand information about basic knowledge, lifestyle and treatments for these conditions.

“Patients with cirrhosis and/or liver cancer and their caregivers often have unmet needs and insufficient knowledge about treating and preventing complications of their disease. We found that ChatGPT, although it has limitations, can help empower patients and improve health literacy for different populations,” said Brennan Spiegel, co-author of the research, which was published in the scientific journal ‘Clinical and Molecular Hepatology’.

These researchers submitted 164 frequently asked questions, spanning five categories, to ChatGPT, asking each question twice. Two liver transplant specialists independently scored the responses.

According to their results, ChatGPT answered around 77% of the questions correctly, with high levels of accuracy on 91 questions spread across multiple categories.

The experts who scored the answers found that 75% of the responses about basic knowledge, treatment and lifestyle were rated either comprehensive or correct but inadequate.

Therefore, the study concluded that the advice a doctor can give remains superior. “Although the model demonstrated strong capabilities in the areas of basic knowledge, lifestyle and treatment, it was not able to provide personalized recommendations based on the region in which the patient lived. This was likely due to the variety of liver cancer surveillance interval recommendations and indications reported by different professional societies. But we expect it to become more accurate when answering questions based on the respondent’s location,” the authors explained.

A similar study by the Huntsman Cancer Institute (United States) asked ChatGPT about cancer in general, and 97% of the answers were correct. However, the researchers caution that some of the responses could be misinterpreted. “This can lead to some erroneous decisions on the part of cancer patients. We advise caution in counseling patients about using ‘chatbots’ to obtain information about their cancer,” said one of the authors, Skyler Johnson.

In any case, the technology’s momentum is unstoppable, and considering that ChatGPT ‘has just been born’, approaching 100% correct answers is a great achievement, although it is not enough when the information at stake is as important as medical-scientific knowledge. Several studies have already found that ChatGPT could pass, for example, the United States’ equivalent of the MIR exam, performing “comparable to that of a third-year medical student in terms of assessing medical knowledge”.