
  • Currently, most of the data in electronic health records reside in free-text documentation—often unstructured—that is useless for artificial intelligence (AI) training without preprocessing.

  • Natural language processing (NLP) plays the dual function in health care AI of unlocking meaning from free-text and other unstructured documentation while also advancing and improving the creation of clinical documentation in the first place.

  • Regarding the capture of data, NLP-based applications (eg, computerized voice recognition, clinical documentation improvement, ambient voice assistants, and ambient virtual scribes) have proven their ability to reduce the burden on clinicians of producing clinical documentation.

  • Regarding the curation of data on the back end, NLP provides the mechanism that transforms clinical data from an amorphous and unhelpful state to a form that makes deep insight and explainable AI possible. These NLP use cases include data mining research, computer-assisted coding, automated registry reporting, clinical trial matching, prior authorization, clinical decision support, risk adjustment and hierarchical condition category (HCC) coding, computational phenotyping and biomarker discovery, and population surveillance.
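The curation step described above can be illustrated with a deliberately simplified sketch. The note text, pattern-to-code mapping, and negation rule below are hypothetical placeholders invented for demonstration; production clinical NLP systems rely on trained models, full negation and context detection, and standardized vocabularies (eg, SNOMED CT, ICD-10) rather than hand-written rules.

```python
import re

# Toy rule-based extractor that pulls structured diagnosis mentions
# out of free-text clinical documentation. Illustrative only.

NOTE = (
    "Assessment: 62-year-old male with type 2 diabetes mellitus, "
    "well controlled on metformin. No evidence of retinopathy."
)

# Hypothetical pattern-to-code mapping for demonstration purposes.
DIAGNOSIS_PATTERNS = {
    r"type 2 diabetes(?: mellitus)?": "E11.9",  # illustrative ICD-10 codes
    r"retinopathy": "H35.00",
}

def extract_diagnoses(text: str) -> list[dict]:
    """Return structured mentions found in unstructured note text."""
    mentions = []
    for pattern, code in DIAGNOSIS_PATTERNS.items():
        for match in re.finditer(pattern, text, flags=re.IGNORECASE):
            # Naive negation check: flag mentions preceded by "no evidence of".
            window = text[max(0, match.start() - 20):match.start()].lower()
            mentions.append({
                "text": match.group(0),
                "icd10": code,
                "negated": "no evidence of" in window,
            })
    return mentions

for mention in extract_diagnoses(NOTE):
    print(mention)
```

Even this crude sketch shows the essential transformation: amorphous narrative text becomes discrete, queryable records, which is the form that registry reporting, risk adjustment, and AI training all require as input.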


Natural language processing (NLP) is a subfield of artificial intelligence (AI) and machine learning (ML) that concentrates on the capture, interpretation, and manipulation of human-generated spoken or written data (see Chapter 5). Most surgeons are already well exposed to the pervasiveness of NLP in their nonclinical daily lives, whether it is using Google Maps to request driving directions, using autocorrect while composing text messages, selecting automatic email responses suggested by Microsoft Outlook 365, using Apple’s Siri to set a calendar appointment, or asking Amazon’s Echo to play songs from a favorite recording artist. In each of these scenarios, the pathway is essentially the same: first, the user’s speech or keystrokes are captured electronically. Next, those signals are processed, using a combination of probabilistic and deep-learning models, to predict an initial output of recognizable words and phrases. Finally, these data drive further machine predictions (eg, “this is a request for information” or “this is a command”), which ultimately drive an action (eg, “display directions to 55 Fruit Street in Boston” or “play the next song in the Bon Jovi library”).
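The capture-process-predict-act pathway just described can be sketched in a few lines of code. Every function below is a hypothetical stand-in: a real system would run trained probabilistic and deep-learning models at each stage, whereas this sketch substitutes a fixed transcript and simple keyword rules purely to make the stages concrete.

```python
# Minimal sketch of the generic NLP pipeline described above:
# signal capture -> transcription -> intent prediction -> action.

def transcribe(audio_signal: bytes) -> str:
    """Stand-in for a speech-to-text model; returns a fixed transcript."""
    return "play the next song in the Bon Jovi library"

def classify_intent(utterance: str) -> str:
    """Toy intent classifier; a real system uses a trained model."""
    text = utterance.lower()
    if text.startswith(("play", "set", "display")):
        return "command"
    if text.startswith(("what", "where", "how")):
        return "information_request"
    return "unknown"

def act(utterance: str, intent: str) -> str:
    """Dispatch the predicted intent to a (simulated) action."""
    if intent == "command":
        return f"Executing: {utterance}"
    if intent == "information_request":
        return f"Looking up: {utterance}"
    return "Sorry, I didn't understand that."

transcript = transcribe(b"...raw audio...")
intent = classify_intent(transcript)
print(act(transcript, intent))
# prints: Executing: play the next song in the Bon Jovi library
```

The same skeleton underlies the health care applications discussed next; only the models and the action space change.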

In health care, NLP is increasingly the critical front end that captures and curates the information needed to power training, insights, predictions, and executable output from AI. Regarding the capture of data, NLP-based applications, such as computerized voice recognition, voice assistants, and virtual scribes, have already proven their ability to reduce the burden on clinicians of producing clinical documentation. Many surgeons today, inundated by myriad reporting, billing, and documentation requirements, are either burning out under the administrative burden of typing and dictating1 or have bypassed the documentation process almost entirely, whether by implementing boilerplate templates (which make every case and every patient appear to be a clone of the next) or by hiring expensive nurse practitioners or other staff merely to delegate the job of documentation. However, by decreasing the time and effort required to document ...
