Research in the Humans and Autonomy Lab (HAL*) focuses on the multifaceted interactions of human and computer decision-making in complex sociotechnical systems with embedded autonomy.
Given the rapid expansion of autonomous technology in aviation, medicine, and even everyday environments such as driving, the need for humans as supervisors of, and collaborators in, complex autonomous control systems has replaced the need for humans in direct manual control.
Instead of relying on humans for well-rehearsed skill execution and rule following, which require significant practice and memorization and are subject to problems such as fatigue and boredom, autonomous systems need humans for their more abstract capacities of knowledge synthesis, judgment, and reasoning. Autonomous systems today, and even more so in the future, require coordination and teamwork between humans and machines for mutual support, improving both system safety and performance.
The central focus of HAL is applying human-systems engineering principles to autonomous system modeling, design, and evaluation, and identifying ways in which humans and computers can each leverage the other's strengths to achieve superior decisions together.
Current research projects include workload reduction for operators of autonomous systems, identification and assessment of the impact of transitions from low to high task loading, development of a safety simulation for mixed manned and unmanned robotic environments, development of predictive models of operator trust in autonomous systems, and the use of functional near-infrared spectroscopy (fNIRS) for cognitive state detection and prediction.
*HAL was previously known as the Humans and Automation Laboratory at MIT and was moved to Duke University in the Fall of 2013. See http://web.mit.edu/aeroastro/labs/halab/index.shtml for archival information about HAL 1.0.