DistancePPG: Robust non-contact vital signs monitoring using a camera

Mayank Kumar, Ashok Veeraraghavan and Ashu Sabharwal

6100 Main Street, Rice University, Houston, TX 77005, USA

The Problem

Measuring and monitoring a patient's vital signs is essential to their care; in fact, all care begins with collecting vital signs such as heart rate and blood pressure. The current standard of care relies on monitoring devices that require contact: electrocardiograms, pulse oximeters, blood pressure cuffs, and chest straps. However, contact-based methods have serious limitations for monitoring the vital signs of neonates, whose skin is extremely sensitive; most contact-based techniques cause skin abrasion, peeling, and damage every time leads or patches are removed. The resulting wounds are potential sites for infection, increasing the mortality risk for neonates.

Our Solution

We propose to use an ordinary camera to measure a patient's vital signs in a non-contact manner, simply by recording video of their face. From the recorded video, our algorithm, distancePPG, extracts pulse rate (PR), pulse rate variability (PRV), and breathing rate (BR). The algorithm estimates tiny changes in skin color caused by changes in blood volume underneath the skin surface; these changes are invisible to the naked eye but can be captured by a camera.
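
To make the idea concrete, here is a minimal sketch (not the distancePPG algorithm itself) of how a pulse rate could be read out from the average green-channel intensity of a skin region over time. The function name, the 0.7-4 Hz cardiac band, and the SciPy-based filtering are illustrative assumptions, not details taken from the paper.

import numpy as np
from scipy.signal import butter, filtfilt

def pulse_rate_from_roi(green_means, fps):
    # green_means: 1-D array holding the mean green-channel value of a
    # skin region for each video frame; fps: camera frame rate (frames/s).
    # Keep only the cardiac band (~0.7-4 Hz, roughly 42-240 beats/min).
    b, a = butter(3, [0.7, 4.0], btype="bandpass", fs=fps)
    ppg = filtfilt(b, a, green_means - np.mean(green_means))
    # The pulse rate corresponds to the strongest spectral peak in the band.
    power = np.abs(np.fft.rfft(ppg)) ** 2
    freqs = np.fft.rfftfreq(len(ppg), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    pr_hz = freqs[band][np.argmax(power[band])]
    return 60.0 * pr_hz, ppg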

Our algorithm, distancePPG (patent pending), achieves clinical-grade accuracy across skin tones and under low-light conditions, and accounts for the natural motion of subjects. It does so by intelligently combining the skin-color-change signals from different regions of the visible skin in a way that improves the overall signal strength. DistancePPG yields as much as 6 dB of SNR improvement in harsh scenarios, rapidly expanding the scope, viability, reach, and utility of CameraVitals as a replacement for traditional contact-based vital sign monitors.
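
As a rough sketch of the combining step (a simplification, not the full distancePPG method), the snippet below forms a weighted average of per-region signals, with each region weighted by a non-negative reliability score such as an SNR estimate. The function names and the simple normalization are illustrative assumptions.

import numpy as np

def combine_roi_signals(roi_signals, goodness):
    # roi_signals: array of shape (num_rois, num_frames), one zero-mean
    # color-change signal per region of interest.
    # goodness: array of shape (num_rois,) with a non-negative reliability
    # score per region; higher means a cleaner signal.
    w = np.asarray(goodness, dtype=float)
    w = w / (w.sum() + 1e-12)            # normalize weights to sum to 1
    return w @ np.asarray(roi_signals)   # combined signal, shape (num_frames,)

def snr_gain_db(snr_combined, snr_single):
    # SNR improvement, in decibels, of the combined signal over a
    # single-region signal (6 dB corresponds to a 4x power ratio).
    return 10.0 * np.log10(snr_combined / snr_single)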


Four basic steps in distancePPG: Step (i) extract landmark points such as the eyes, nose, mouth, and face boundary from the face image; Step (ii) divide the face into seven regions, each tracked over the video using a computer vision tracker; Step (iii) further divide each tracked region into small regions of interest (ROIs); Step (iv) compute a goodness metric for each ROI based only on the video recording, and estimate the camera-based PPG signal with much higher SNR (signal-to-noise ratio). For more details, please read our paper.
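
As a rough illustration of step (iv), the sketch below scores one ROI's candidate PPG signal by comparing its spectral power near a rough pulse-rate estimate with the remaining power in the physiological band. This is a simplified stand-in; the goodness metric actually used by distancePPG is defined precisely in the paper, and the function name, band limits, and window width here are assumptions.

import numpy as np

def goodness_score(roi_ppg, fps, pr_hz, band=(0.5, 5.0), half_width=0.1):
    # roi_ppg: bandpassed, zero-mean candidate PPG signal from one ROI.
    # fps: camera frame rate; pr_hz: rough pulse-rate estimate in Hz.
    power = np.abs(np.fft.rfft(roi_ppg)) ** 2
    freqs = np.fft.rfftfreq(len(roi_ppg), d=1.0 / fps)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    near_pr = in_band & (np.abs(freqs - pr_hz) <= half_width)
    # Ratio of power near the pulse rate to the remaining in-band power;
    # a larger ratio marks the ROI as more reliable for combining.
    signal_power = power[near_pr].sum()
    noise_power = power[in_band & ~near_pr].sum()
    return signal_power / (noise_power + 1e-12)

ROIs with higher scores would then receive larger weights in the combining step sketched earlier.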

Paper

M. Kumar, A. Veeraraghavan, and A. Sabharwal, "DistancePPG: Robust non-contact vital signs monitoring using a camera," Biomed. Opt. Express 6, 1565-1588 (2015) (Patent Pending)

Cite (BibTeX)

@article{kumar_distanceppg:_2015,
author = {Kumar, Mayank and Veeraraghavan, Ashok and Sabharwal, Ashutosh},
doi = {10.1364/BOE.6.001565},
issn = {2156-7085},
journal = {Biomedical Optics Express},
month = may,
number = {5},
pages = {1565--1588},
shorttitle = {DistancePPG},
title = {{DistancePPG: Robust non-contact vital signs monitoring using a camera}},
url = {https://www.osapublishing.org/boe/abstract.cfm?uri=boe-6-5-1565},
volume = {6},
year = {2015}
}

Vital signs and PPG recovered from videos

Non-Caucasian skin tones

Sample distancePPG performance on a subject with a brown skin tone. DistancePPG works across all skin tones.

Motion scenario

Sample distancePPG performance under a motion scenario (talking). Performance deteriorates only under large motion or occlusion.

Low lighting conditions

Sample distancePPG performance under a low-light scenario (less than 100 lux). Performance does not deteriorate.

Dataset

We plan to release a dataset comprising video recordings of people facing the camera, along with simultaneous PPG recordings from a pulse oximeter attached to each person's ear. We collected the dataset by varying three important parameters for camera-based vital sign estimation: (i) skin tone of the subject, (ii) motion, and (iii) ambient light intensity. Details about the dataset can be found here

Funding

To date, the project has been partially funded by the National Science Foundation, along with a Rice University graduate student fellowship.