Validation of natural language processing to extract breast cancer pathology procedures and results

Bibliographic Details
Main Authors: Arika E. Wieneke (Author), Erin J. A. Bowles (Author), David Cronkite (Author), Karen J. Wernli (Author), Hongyuan Gao (Author), David Carrell (Author), Diana S. M. Buist (Author)
Format: Article
Published: Elsevier, 2015.
Subjects: Breast cancer, natural language processing, pathology, validation; Computer applications to medicine. Medical informatics
Online Access: Connect to this object online.

MARC

LEADER 00000 am a22000003u 4500
001 doaj_8c1744d4824f43fdaa91b13cc6b5738e
042 |a dc 
100 1 0 |a Arika E Wieneke  |e author 
700 1 0 |a Erin J. A. Bowles  |e author 
700 1 0 |a David Cronkite  |e author 
700 1 0 |a Karen J Wernli  |e author 
700 1 0 |a Hongyuan Gao  |e author 
700 1 0 |a David Carrell  |e author 
700 1 0 |a Diana S. M. Buist  |e author 
245 0 0 |a Validation of natural language processing to extract breast cancer pathology procedures and results 
260 |b Elsevier,   |c 2015-01-01T00:00:00Z. 
500 |a 2153-3539 
500 |a 10.4103/2153-3539.159215 
520 |a Background: Pathology reports typically require manual review to abstract research data. We developed a natural language processing (NLP) system to automatically interpret free-text breast pathology reports with limited assistance from manual abstraction. Methods: We used an iterative approach of machine learning algorithms and constructed groups of related findings to identify breast-related procedures and results from free-text pathology reports. We evaluated the NLP system using an all-or-nothing approach to determine which reports could be processed entirely using NLP and which needed manual review beyond NLP. We divided 3234 reports into development (2910, 90%) and evaluation (324, 10%) sets, using manually reviewed pathology data as our gold standard. Results: NLP correctly coded 12.7% of the evaluation set, flagged 49.1% of reports for manual review, incorrectly coded 30.8%, and correctly omitted 7.4% due to irrelevancy (i.e., not breast-related). Common procedures and results were identified correctly (e.g., invasive ductal with 95.5% precision and 94.0% sensitivity), but entire reports were flagged for manual review because of rare findings and substantial variation in pathology report text. Conclusions: The NLP system we developed did not perform sufficiently well for abstracting entire breast pathology reports. The all-or-nothing approach resulted in too broad a scope of work and limited our flexibility to identify breast pathology procedures and results. Our NLP system was also limited by the lack of gold standard data on rare findings and by wide variation in pathology text. Focusing on individual, common elements and improving the standardization of pathology report text may improve performance. 
546 |a EN 
690 |a Breast cancer, natural language processing, pathology, validation 
690 |a Computer applications to medicine. Medical informatics 
690 |a R858-859.7 
690 |a Pathology 
690 |a RB1-214 
655 7 |a article  |2 local 
786 0 |n Journal of Pathology Informatics, Vol 6, Iss 1, Pp 38-38 (2015) 
787 0 |n http://www.jpathinformatics.org/article.asp?issn=2153-3539;year=2015;volume=6;issue=1;spage=38;epage=38;aulast=Wieneke 
787 0 |n https://doaj.org/toc/2153-3539 
856 4 1 |u https://doaj.org/article/8c1744d4824f43fdaa91b13cc6b5738e  |z Connect to this object online.