07/08/2019 | News release | Distributed by Public on 07/08/2019 08:30
Electronic health records (EHRs) have become the chief tool for documenting a patient's medical care in today's changing health care landscape. Still, many physicians are reluctant to use this technology, and when they do, they often leave behind an incomplete or inaccurate file, primarily because filling in the information is time-consuming and diverts attention from direct patient care. According to one study, 'Allocation of Physician Time in Ambulatory Practice: A Time and Motion Study in 4 Specialties,' 'during the office day, physicians spent … 49.2 percent of their time on EHR and desk work.'
Yi Chen, a professor and the Henry J. Leir Chair in Healthcare at Martin Tuchman School of Management, is using advanced machine-learning techniques to make the documentation process easier and more precise for these resisters - which in turn will free up physicians' time to spend with patients and to develop more targeted treatment plans. This work is in collaboration with the University of Maryland and Inovalon, a data-driven health care analytics company with access to patient data from hospitals and insurance companies, which has made thousands of diverse patient medical records available to Chen for her research.
She describes the data as very 'noisy,' meaning it is corrupted or distorted: fields in the medical records are often wrong or left blank, diagnosis codes in particular. Rather than complete a structured medical record, many physicians choose to document patient information only in clinical notes, which is simpler and closer to how they've traditionally practiced. Of the codes, Chen said, 'That's the most precise and accurate description about the patient's condition. Insurance companies need the code to determine if the treatment is appropriate for the disease.'
The health care industry spends heavily every year hiring people trained to review medical records and confirm or enter the right disease codes based on physicians' clinical notes. Chen has developed an algorithm for machines to do this job by leveraging advanced artificial-intelligence (AI) models for natural-language processing (NLP). The algorithm, which identifies the specific diseases a patient has by applying NLP to clinical notes, builds upon and combines the advantages of two deep-learning models: the Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM). A CNN can learn to automatically recognize individual features of the input data and rank them by importance, but it ignores the sequential-order relationships among the data; an LSTM considers these relationships but disregards whether certain features are more important than others. Such deep learning enables machines to read clinical notes and assign codes.
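The division of labor between the two models can be sketched in miniature. The following is an illustrative numpy sketch, not Chen's actual architecture: toy dimensions and random weights stand in for a trained network. The CNN half extracts the strongest n-gram features regardless of position, the LSTM half summarizes the note in order, and the two views are concatenated and scored against each candidate diagnosis code.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cnn_features(x, W, b, window=3):
    """1-D convolution over token embeddings plus max-over-time pooling:
    detects the strongest n-gram features in the note but discards
    where in the sequence they occurred."""
    n, _ = x.shape
    convs = np.stack([W @ x[i:i + window].ravel() + b
                      for i in range(n - window + 1)])  # (positions, filters)
    return np.tanh(convs).max(axis=0)                   # (filters,)

def lstm_last_state(x, Wx, Wh, b):
    """A plain LSTM pass: the final hidden state summarizes the note
    while respecting the order in which tokens appear."""
    d_h = Wh.shape[1]
    h, c = np.zeros(d_h), np.zeros(d_h)
    for x_t in x:                       # gates stacked as [input|forget|cell|output]
        z = Wx @ x_t + Wh @ h + b
        i, f, g, o = np.split(z, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
    return h

# Toy dimensions: a 12-token note with 8-dim embeddings, 6 conv filters,
# a 5-unit LSTM, and 4 candidate diagnosis codes.
seq_len, d, n_filters, d_h, n_codes = 12, 8, 6, 5, 4
x = rng.normal(size=(seq_len, d))       # stand-in for note embeddings
W_c, b_c = rng.normal(size=(n_filters, 3 * d)), np.zeros(n_filters)
Wx, Wh = rng.normal(size=(4 * d_h, d)), rng.normal(size=(4 * d_h, d_h))
b_l = np.zeros(4 * d_h)
W_out, b_out = rng.normal(size=(n_codes, n_filters + d_h)), np.zeros(n_codes)

# Concatenate the two views of the note and score each candidate code
# independently (multi-label: a patient may carry several codes at once).
feats = np.concatenate([cnn_features(x, W_c, b_c),
                        lstm_last_state(x, Wx, Wh, b_l)])
probs = sigmoid(W_out @ feats + b_out)  # one probability per candidate code
```

In a trained system the weights would be learned from labeled charts; the point here is only the structure, with position-free CNN features and order-aware LSTM features feeding one shared classifier.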
While AI-powered NLP will never entirely replace humans, it can save a lot of labor, said Chen of the human-machine balance. 'We can use machines to screen the data and ask human experts to check only the challenging cases. At the same time, after human experts provide feedback on those challenging cases, the machine can learn from the feedback to continuously improve the model.'
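The screening workflow Chen describes can be expressed as a simple confidence triage. The thresholds and routing labels below are hypothetical, chosen only to illustrate the idea: confident predictions are handled automatically, and only borderline cases are queued for a human coder, whose corrections can later feed back into training.

```python
def triage(prob, low=0.2, high=0.8):
    """Route one code prediction by model confidence. Confident calls
    are handled automatically; borderline ones go to a human expert.
    The 0.2/0.8 thresholds are illustrative, not from Chen's system."""
    if prob >= high:
        return "auto-assign"
    if prob <= low:
        return "auto-skip"
    return "human review"

# Hypothetical per-code confidences from the model for one chart.
predictions = {"E11.9": 0.95, "I10": 0.55, "J45": 0.04}
routed = {code: triage(p) for code, p in predictions.items()}
# Only the borderline code (here I10) lands in the human-review queue.
```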
Chen and her team are measuring results according to two metrics: precision (the fraction of the codes labeled by the machine that are correct) and recall (the fraction of all correct codes that the machine labeled). They have applied the algorithm to 2,000 clinical charts to date, with a precision of .78 and a recall of .60. These figures, notes Chen, represent a significant improvement over support vector machines, a state-of-the-art learning algorithm, which achieved a precision of .62 and a recall of .55. Support vector machines rely on predefined data features and were therefore unable to fully understand and classify disease mentions for coding.
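Concretely, the two metrics can be computed from the set of codes the machine assigns and the set of codes the chart should carry. The chart and ICD-style codes below are invented for illustration.

```python
def precision_recall(machine_codes, correct_codes):
    """Precision: share of the machine's labels that are correct.
    Recall: share of the chart's correct codes the machine found."""
    machine, correct = set(machine_codes), set(correct_codes)
    true_positives = len(machine & correct)
    precision = true_positives / len(machine) if machine else 0.0
    recall = true_positives / len(correct) if correct else 0.0
    return precision, recall

# Hypothetical chart: the machine assigns 5 codes, 4 of which are right,
# while the chart actually carries 6 correct codes.
p, r = precision_recall(
    {"E11.9", "I10", "J45", "N18.3", "Z00"},
    {"E11.9", "I10", "J45", "N18.3", "K21", "F41"},
)
# p = 4/5 = 0.80, r = 4/6 ≈ 0.67
```

The trade-off is visible here: a cautious model that assigns fewer codes tends toward higher precision and lower recall, which mirrors the .78/.60 balance reported above.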
By ensuring codes are entered and precise, Chen's machine algorithm also has the potential for making AI-based recommendations for care to the doctor, functioning as a type of physician assistant.
'All this depends on if we have enough facts in our knowledge base, based on complete and accurate medical records,' offered Chen. 'We don't think computers can replace doctors, but we do think computers can help doctors to make diagnoses and find the right treatment plan by learning from past evidence. We can develop AI technologies to identify similar patients and see if their diagnosis and treatment are applicable to the target patient.'
Chen speculates that machine learning will also be used in the future to read charts and clinical notes in real time to provide treatment suggestions to the physician during office visits, thereby improving both interactions with patients and outcomes. Toward that goal, she points as well to the possibility of teaching machines to populate EHRs, which require an overwhelming number of categories to fill in. Chen thinks her algorithm will be able to serve this purpose, but notes the first step will be to convert physicians' clinical narratives into computer code.
'If we can semi-populate these records, or make them easier to populate, that will be a huge help for physicians and enable them to spend less time in front of the screen and even more time with their patients,' she said.
Chen is also studying the use of advanced machine-learning technologies to improve patient decision-making. She has employed AI technologies to review user-generated content on online health forums and, through NLP, differentiate between experience and hearsay when it comes to treatment. Supported by a grant from The Leir Charitable Foundations, the 'very ambitious, long-term project' is especially challenging, because there are more variables in consumer-health dialogue than in medical dialogue.
The goal is to turn the experiential content into recommendations for care for patients with similar health concerns, and work with health-forum owners to present this advice to forum users. In this way, patients can become more informed in their decision-making.
'It's all for better health outcomes,' Chen remarked of her research, which also involves serving as NJIT's principal investigator for a National Institutes of Health Clinical and Translational Science Award grant given to the university partnership of Rutgers, NJIT and Princeton, and as the principal investigator of a research grant from the Leir Foundations.
'The first project is to really help physicians make the best treatment choice in the long run. … But patients have to be engaged,' she added, referencing the forum investigation. 'We want to better understand them - what their concerns are, what they care about, and why they do or don't comply with treatment.'