
ChatGPT can extract data from clinical notes

UT Southwestern study shows popular AI program can accurately analyze medical charts for clinical research, other applications


DALLAS – May 08, 2024 – ChatGPT, the artificial intelligence (AI) chatbot designed to assist with language-based tasks, can effectively extract data for research purposes from physicians’ clinical notes, UT Southwestern Medical Center researchers report in a new study. Their findings, published in NPJ Digital Medicine, could significantly accelerate clinical research and lead to new innovations in computerized clinical decision-making aids.

“By transforming oceans of free-text health care data into structured knowledge, this work paves the way for leveraging artificial intelligence to derive insights, improve clinical decision-making, and ultimately enhance patient outcomes,” said study leader Yang Xie, Ph.D., Professor in the Peter O’Donnell Jr. School of Public Health and the Lyda Hill Department of Bioinformatics at UT Southwestern. Dr. Xie is also Associate Dean of Data Sciences at UT Southwestern Medical School, Director of the Quantitative Biomedical Research Center, and a member of the Harold C. Simmons Comprehensive Cancer Center.


Much of the research in the Xie Lab focuses on developing and using data science and AI tools to improve biomedical research and health care. She and her colleagues wondered whether ChatGPT might speed the process of analyzing clinical notes – the memos physicians write to document patients’ visits, diagnoses, and statuses as part of their medical record – to find relevant data for clinical research and other uses. Clinical notes are a treasure trove of information, Dr. Xie explained; however, because they are written in free text, extracting structured data typically involves having a trained medical professional read and annotate them. This process requires a huge investment of time and often resources and can also introduce human bias. Existing programs that use natural language processing require extensive human annotation and model training. As a result, clinical notes are largely underused for research purposes.

To determine whether ChatGPT could convert clinical notes to structured data, Dr. Xie and her colleagues had it analyze more than 700 sets of pathology notes from lung cancer patients to identify the major features of primary tumors, whether lymph nodes were involved, and the cancer stage and subtype. Overall, Dr. Xie said, ChatGPT's average accuracy in making these determinations was 89%, based on reviews by human readers. That human review took several weeks of full-time work, compared with the few days needed to fine-tune data extraction with ChatGPT. ChatGPT's accuracy was also significantly better than that of the traditional natural language processing methods tested for the same task.
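The study's actual prompts and pipeline are not reproduced here, but the general approach of turning a free-text note into a structured record can be sketched as follows. This is a minimal illustration with hypothetical field names and a made-up model reply; it shows only the prompt assembly and the parsing of a JSON answer, not the API call itself.

```python
import json

# Hypothetical target schema for lung cancer pathology notes
# (field names are illustrative, not the study's actual schema).
FIELDS = ["tumor_size_cm", "lymph_node_involvement", "stage", "subtype"]

def build_extraction_prompt(note_text: str) -> str:
    """Assemble an instruction prompt asking a language model for structured JSON."""
    return (
        "Extract the following fields from the pathology note below and "
        "answer ONLY with a JSON object using these keys: "
        + ", ".join(FIELDS)
        + ". Use null for any field not stated in the note.\n\n"
        "Note:\n" + note_text
    )

def parse_model_reply(reply: str) -> dict:
    """Parse the model's JSON reply into a record, tolerating surrounding prose."""
    start, end = reply.find("{"), reply.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON object found in model reply")
    record = json.loads(reply[start : end + 1])
    # Keep only the expected keys so downstream tables stay uniform.
    return {k: record.get(k) for k in FIELDS}

# Example: a made-up model reply wrapped in extra prose.
reply = (
    'Here is the result: {"tumor_size_cm": 3.2, '
    '"lymph_node_involvement": true, "stage": "IIB", '
    '"subtype": "adenocarcinoma"}'
)
print(parse_model_reply(reply))
```

Parsing into a fixed schema is what makes the output "structured": every note yields the same columns, so hundreds of notes can be combined into a single research dataset and spot-checked by human reviewers, as the study did.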

To test whether this approach is applicable to other diseases, Dr. Xie and her colleagues used ChatGPT to extract information about cancer grade and margin status from 191 clinical notes on patients from Children’s Health with osteosarcoma, the most common type of bone cancer in children and adolescents. Here, ChatGPT returned information with nearly 99% accuracy on grade and 100% accuracy on margin status.

Dr. Xie noted that the results were strongly influenced by the prompts ChatGPT was given for each task – a practice known as prompt engineering. Providing multiple answer options to choose from, giving examples of appropriate responses, and directing ChatGPT to rely on evidence from the notes to draw conclusions all improved its performance.
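The three tactics described above can be sketched as a simple prompt builder. This is an illustrative example only – the function, its parameters, and the sample note are hypothetical, not taken from the study.

```python
def build_prompt(note, question, options, examples):
    """Compose a prompt using three tactics the study found helpful:
    enumerated answer options, worked examples, and an evidence directive."""
    parts = [question]
    # Tactic 1: give the model a fixed set of options to choose from.
    parts.append("Choose exactly one of: " + ", ".join(options) + ".")
    # Tactic 2: show examples of appropriate responses.
    for ex_note, ex_answer in examples:
        parts.append(f"Example note: {ex_note}\nExample answer: {ex_answer}")
    # Tactic 3: direct the model to rely on evidence in the note.
    parts.append("Base your answer only on evidence quoted from the note.")
    parts.append("Note: " + note)
    return "\n\n".join(parts)

prompt = build_prompt(
    note="Resection margins are free of tumor.",
    question="What is the margin status?",
    options=["negative", "positive", "indeterminate"],
    examples=[("Tumor extends to the inked margin.", "positive")],
)
print(prompt)
```

Constraining the answer space and demanding quoted evidence makes the model's replies both easier to score against a human-annotated gold standard and easier to audit, which matters for the "rigorous and continuous evaluation" Dr. Xie calls for below.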

She added that using ChatGPT or other large language models to extract structured data from clinical notes could not only speed clinical research but also help clinical trial enrollment by matching patients’ information to clinical trial protocols. However, she said, ChatGPT won’t replace the need for human physicians.

“Even though this technology is an extremely promising way to save time and effort, we should always use it with caution. Rigorous and continuous evaluation is very important,” Dr. Xie said.

Other UTSW researchers who contributed to this study include first author Jingwei Huang, Ph.D., Data Scientist; Donghan “Mo” Yang, Ph.D., Assistant Professor in the O’Donnell School of Public Health and Director of the Biostatistics and Data Science Core; Ruichen Rong, Ph.D., Assistant Professor in the O’Donnell School of Public Health; Zhikai Chi, M.D., Ph.D., Assistant Professor of Pathology; Laura J. Klesse, M.D., Ph.D., Associate Professor of Pediatrics and Neurological Surgery; Guanghua Xiao, Ph.D., Professor in the O’Donnell School of Public Health, of Biomedical Engineering, and in the Lyda Hill Department of Bioinformatics; Eric D. Peterson, M.D., M.P.H., Professor of Internal Medicine and in the O’Donnell School of Public Health, Vice Provost, and Senior Associate Dean for Clinical Research; Xiaowei Zhan, Ph.D., Associate Professor in the O’Donnell School of Public Health and in the Center for the Genetics of Host Defense; Xian Cheng, Ph.D., Senior Research Associate; Yuija Guo, M.S., Biostatistical Consultant; and postdoctoral researchers Kuroush Nezafati, M.D., and Colin Treager, M.D.

Drs. Chi, Klesse, Xiao, and Zhan are also members of the Simmons Cancer Center.

Dr. Xie holds the Raymond D. and Patsy R. Nasher Distinguished Chair in Cancer Research, in Honor of Eugene P. Frenkel, M.D. Dr. Klesse is a Dedman Family Scholar in Clinical Care. Dr. Xiao holds the Mary Dees McDermott Hicks Chair in Medical Science. Dr. Peterson holds the Adelyn and Edmund M. Hoffman Distinguished Chair in Medical Science.

This study was funded by grants from the National Institutes of Health (P50CA70907, P30CA142543, 1R35GM136375, 1R01GM140012, 1R01GM141519, 1R01DE030656, 1U01CA249245, and U01AI169298) and the Cancer Prevention and Research Institute of Texas (RP230330 and RP180805). 

About UT Southwestern Medical Center  

UT Southwestern, one of the nation’s premier academic medical centers, integrates pioneering biomedical research with exceptional clinical care and education. The institution’s faculty members have received six Nobel Prizes and include 25 members of the National Academy of Sciences, 21 members of the National Academy of Medicine, and 13 Howard Hughes Medical Institute Investigators. The full-time faculty of more than 3,100 is responsible for groundbreaking medical advances and is committed to translating science-driven research quickly into new clinical treatments. UT Southwestern physicians provide care in more than 80 specialties to more than 120,000 hospitalized patients and more than 360,000 emergency room cases, and oversee nearly 5 million outpatient visits a year.