Stanford University

Intensive Care Unit Clinical Pathway Support


Activity detection in Intensive Care Units (ICUs) is currently performed manually by trained personnel, primarily nurses, who log activities as they occur. This process is both expensive and time-consuming. Our goal is to design a system that automatically produces an annotated list of all activities that occurred in the ICU over the course of a day. Such a system will reduce the monitoring workload of trained personnel and lead to a quicker, safer recovery for the patient, while providing additional benefits such as activity-based costing.
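As a rough illustration of what "an annotated list of all activities" could look like, the sketch below merges consecutive frame-level predictions into timestamped activity intervals. This is a hypothetical example: the `merge_predictions` helper, the activity labels, and the timestamps are our own illustration, not the deployed system's actual pipeline.

```python
from datetime import datetime

def merge_predictions(predictions):
    """Merge consecutive (timestamp, activity) frame-level predictions
    into a list of (activity, start, end) intervals."""
    intervals = []
    for ts, activity in predictions:
        if intervals and intervals[-1][0] == activity:
            # Same activity continues: extend the current interval's end time.
            intervals[-1] = (activity, intervals[-1][1], ts)
        else:
            # A new activity begins: open a new interval.
            intervals.append((activity, ts, ts))
    return intervals

# Hypothetical per-frame classifier output (one prediction per second).
frames = [
    (datetime(2017, 6, 1, 9, 0, 0), "patient in bed"),
    (datetime(2017, 6, 1, 9, 0, 1), "patient in bed"),
    (datetime(2017, 6, 1, 9, 0, 2), "patient getting out of bed"),
    (datetime(2017, 6, 1, 9, 0, 3), "patient getting out of bed"),
    (datetime(2017, 6, 1, 9, 0, 4), "patient walking"),
]

log = merge_predictions(frames)
for activity, start, end in log:
    print(f"{start:%H:%M:%S} - {end:%H:%M:%S}  {activity}")
```

Aggregating frame-level outputs this way turns a continuous prediction stream into a compact daily log that clinicians can review at a glance.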

Activity recognition in hospitals has received little research attention to date. Among the main reasons for this gap are the scarcity of sensors installed in hospitals and the difficulty of obtaining access to the relevant data due to its sensitive nature. Thanks to our partnering hospital, we have access to depth sensors installed in eight ICU rooms.

We are developing a computer vision system capable of automatically detecting the following activities:

  • Stage 1: patient getting out of bed, patient getting out of bed and walking, and a nurse performing oral care.
  • Stage 2: clinician performing ultrasound, x-ray, turning patient over in bed, and patient getting in/out of bed.
  • Stage 3: various patient mobility activities such as patient getting in/out of a bed/chair with or without assistance.

Once our system can reliably log the basic activities above, we plan to expand it to detect anomalies such as emergency situations. To do so, we could potentially use a dataset of simulated emergencies (e.g., a patient falling on the floor).

We have partnered with Intermountain's Healthcare Transformation Lab, where we have deployed 3D depth sensors in eight ICU rooms. With Intermountain's help, we are using live data streams to train our computer vision algorithms to discern events of clinical relevance. Using multiple sensors per room, our artificial intelligence system is capable of full-room activity understanding.
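One simple way to combine the views from multiple sensors in a room is late fusion: each sensor's per-activity confidence scores are averaged before the final decision. The sketch below is a hypothetical illustration under that assumption; the `fuse_sensor_scores` helper, activity names, and scores are ours, not the system's actual fusion method.

```python
def fuse_sensor_scores(per_sensor_scores):
    """Average per-activity confidence scores across sensors (late fusion)
    and return the highest-scoring activity plus the fused scores."""
    fused = {}
    for scores in per_sensor_scores:
        for activity, score in scores.items():
            fused[activity] = fused.get(activity, 0.0) + score
    n = len(per_sensor_scores)
    fused = {activity: total / n for activity, total in fused.items()}
    return max(fused, key=fused.get), fused

# Illustrative scores from three depth sensors viewing the same room.
sensor_outputs = [
    {"oral care": 0.7, "getting out of bed": 0.2, "no activity": 0.1},
    {"oral care": 0.5, "getting out of bed": 0.4, "no activity": 0.1},
    {"oral care": 0.6, "getting out of bed": 0.3, "no activity": 0.1},
]

best, fused = fuse_sensor_scores(sensor_outputs)
print(best)  # highest-scoring activity after averaging across sensors
```

Averaging across viewpoints makes the final prediction more robust to occlusions that affect any single sensor's view of the room.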


William Beninati

Julia Lee

Serena Yeung
Stanford AI Lab


Vision-Based Prediction of ICU Mobility Care Activities using Recurrent Neural Networks

Gabriel M. Bianconi, Rishab Mehra, Serena Yeung, Francesca Salipur, Jeffrey Jopling, Lance Downing, Albert Haque, Alexandre Alahi, Brandi Campbell, Kayla Deru, William Beninati, Arnold Milstein, Li Fei-Fei

Machine Learning for Health Workshop, Neural Information Processing Systems (NIPS)
December 2017


