On a typical day, we spend most of our time indoors. Several past studies have demonstrated the impact of the physical environment surrounding us on various physiological and psychological parameters of occupants. Textures, colors, shapes, temperature, sounds, and lighting conditions often find their way into our cognitive schemas and serve as cues to retrieve skills, knowledge, feelings, and behaviors. In this study, we are particularly interested in exploring three primary questions:
- An infrared camera (capturing thermal images and video) could be a non-contact alternative to wearable medical sensors such as Zephyr's BioHarness 3 and the Empatica E4 wristband. How accurately can thermal image and video analysis from an IR camera estimate cognitive load under varying ambient temperatures?
- If thermal image and video analysis can accurately estimate cognitive load (especially stress level), can we reduce that load by altering the room temperature and airflow around the occupant?
- At a later stage, we aim to design a workspace capable of digitally transforming the characteristics of its ambiance. Can we leverage machine learning techniques to build an algorithm that recommends and applies the parameters of such a multimodal environment?
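As a rough illustration of the first question (not our actual pipeline), a common proxy for stress in thermal imaging is the mean temperature of a facial region of interest such as the nose tip or periorbital area. The sketch below assumes a thermal frame already converted to a per-pixel temperature array; in practice frames would come from the FLIR Duo and be processed with OpenCV, but here a synthetic NumPy array stands in for a captured frame, and the ROI coordinates are illustrative.

```python
import numpy as np

def mean_roi_temperature(frame: np.ndarray, roi: tuple) -> float:
    """Mean temperature (deg C) inside a rectangular region of interest.

    roi is (top, left, height, width) in pixel coordinates; a real
    pipeline would obtain it from a face/landmark detector.
    """
    top, left, height, width = roi
    return float(frame[top:top + height, left:left + width].mean())

# Synthetic 8x8 "thermal frame": 24 deg C ambient with a warmer 2x2 patch
# standing in for a periorbital region, which tends to warm under stress.
frame = np.full((8, 8), 24.0)
frame[2:4, 2:4] = 34.5

print(round(mean_roi_temperature(frame, (2, 2, 2, 2)), 1))  # -> 34.5
```

Tracking this ROI statistic over time, rather than reading single frames, is what would let the system compare an occupant's baseline against load-induced deviations.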
We use a commercially available video display, audio speakers, lighting fixtures, thermal control (via a space fan and heater), and an olfactory display to prototype such a workspace. The workspace obtains information about the occupant's task and recognizes indicators of their affective state. This work is currently in progress.
Hardware: Zephyr BioHarness 3, Empatica E4 Wristband, FLIR Duo
Technologies: C++, Python, OpenCV
Teammates: Dr. Nan Zhao