About this item:

Author Notes:

Samruddhi S. Kulkarni, sskulk25@gmail.com

Nasim Katebi, nkatebi@emory.edu

SK and NK performed all the experiments and contributed to the design of the system. GC designed the experiments and managed the project. SK, CV, and NK curated and labeled the data and provided input on experimental procedures. PR and GC designed the data collection. All authors wrote and edited the manuscript.

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Research Funding:

This work was part of a study approved by the Institutional Review Boards of Emory University, the Wuqu' Kawoq | Maya Health Alliance, and Agnes Scott College (Ref: Emory IRB00076231-Mobile Health Intervention to Improve Perinatal Continuum of Care in Guatemala) and registered as a clinical trial (ClinicalTrials.gov identifier NCT02348840).

Keywords:

  • blood pressure
  • convolutional neural network
  • digital transcription
  • hypertension
  • optical character recognition
  • preeclampsia

CNN-Based LCD Transcription of Blood Pressure From a Mobile Phone Camera

Journal Title:

Frontiers in Artificial Intelligence

Volume:

Volume 4

Publisher:

Frontiers Media S.A., Pages 543176-543176

Type of Work:

Article | Final Publisher PDF

Abstract:

Routine blood pressure (BP) measurement in pregnancy is commonly performed using automated oscillometric devices. Since no wireless oscillometric BP device has been validated in preeclamptic populations, a simple approach for capturing readings from such devices is needed, especially in low-resource settings where transmission of BP data from the field to central locations is an important mechanism for triage. To this end, a total of 8192 BP readings were captured from the Liquid Crystal Display (LCD) screen of a standard Omron M7 self-inflating BP cuff using a cellphone camera. A cohort of 49 lay midwives captured these data from 1697 pregnant women carrying singletons between 6 weeks and 40 weeks gestational age in rural Guatemala during routine screening. Images exhibited wide variability in their appearance due to variations in orientation and parallax; environmental factors such as lighting and shadows; and image acquisition factors such as motion blur and problems with focus. Images were independently labeled for readability and quality by three annotators (BP range: 34–203 mm Hg) and disagreements were resolved. Methods to preprocess and automatically segment the LCD images into diastolic BP, systolic BP, and heart rate using a contour-based technique were developed. A deep convolutional neural network was then trained to convert the LCD images into numerical values using a multi-digit recognition approach. On readable low- and high-quality images, the proposed approach achieved a 91% classification accuracy with a mean absolute error of 3.19 mm Hg for systolic BP, and a 91% accuracy with a mean absolute error of 0.94 mm Hg for diastolic BP. These error values are within the FDA guidelines for BP monitoring when poor-quality images are excluded. The performance of the proposed approach was shown to be greatly superior to that of state-of-the-art open-source tools (Tesseract and the Google Vision API). The algorithm was developed such that it could be deployed on a phone and work without connectivity to a network.
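The abstract describes segmenting the LCD image into per-digit regions before multi-digit recognition. The paper's method is contour-based; as a minimal sketch of the general digit-isolation idea only, the fragment below segments a thresholded (binary) LCD image using a vertical projection profile. The function name `segment_digits` and the synthetic image are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def segment_digits(binary_img: np.ndarray) -> list:
    """Split a thresholded LCD image into per-digit column spans.

    Illustrative sketch (not the paper's contour-based method):
    columns containing any foreground pixels are grouped into
    contiguous runs, one run per digit.
    """
    cols = binary_img.any(axis=0)  # True where a column has foreground
    spans, start = [], None
    for x, on in enumerate(cols):
        if on and start is None:
            start = x                     # run begins
        elif not on and start is not None:
            spans.append((start, x))      # run ends
            start = None
    if start is not None:                 # run reaches the right edge
        spans.append((start, binary_img.shape[1]))
    return spans

# Synthetic example: three "digits" as white blocks on a black background.
img = np.zeros((10, 30), dtype=np.uint8)
img[2:8, 2:6] = 1
img[2:8, 12:16] = 1
img[2:8, 22:26] = 1
print(segment_digits(img))  # → [(2, 6), (12, 16), (22, 26)]
```

In practice each recovered span would be cropped, resized, and passed to the CNN digit classifier; the paper's contour approach additionally handles rotation and uneven illumination that a simple column projection would not.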

Copyright information:

© 2021 Kulkarni, Katebi, Valderrama, Rohloff and Clifford.

This is an Open Access work distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/rdf).