Research

POSTERS

  • HAL Lab Poster
  • Technological Solutions to Optimize Short-haul Rail Operator Workload
  • Modeling Intent Communication Pathways for Human-Autonomous System Collaboration

Determining UAV/UGV Training Effectiveness as Autonomy Increases

The Army plans to make significant investments in increasing the level of autonomy on board both unmanned aerial and unmanned ground vehicles (UAVs and UGVs), with the goal of increasing mission effectiveness while decreasing operational and training costs. However, to date there is no principled methodology for understanding how increased autonomy could change training requirements, or for assessing such change in advance to inform the engineers developing new systems.

To help with this effort, we are currently developing a System Dynamics model of the interactions between training and unmanned vehicle/ground control system design to:

  •      Aid in the assessment of current training and development of future training programs
  •      Determine how increasing autonomy may affect various training objectives
  •      Make predictions about how future robot systems should be designed in terms of hardware, software, and training programs

In particular, we are investigating how increasing autonomy affects training and system complexity in terms of Rasmussen’s hierarchy of skill-, rule-, and knowledge-based behaviors, with further consideration of high-uncertainty conditions where high levels of operator expertise are required.
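As a rough illustration of the style of model involved, the sketch below is a minimal System Dynamics formulation in Python: a single "operator proficiency" stock whose training inflow and skill decay are modulated by the vehicle's autonomy level. The variable names, parameter values, and the coupling between autonomy and training effectiveness are assumptions made purely for illustration, not the lab's actual model.

```python
# Minimal, illustrative System Dynamics sketch: one "operator proficiency"
# stock with a training inflow and a skill-decay outflow, both modulated by
# the autonomy level of the vehicle. All parameters are assumed values.

def simulate_proficiency(autonomy_level, weeks=52, dt=0.25):
    """Euler-integrate operator proficiency over time for a given autonomy level (0-1)."""
    proficiency = 0.0          # stock: operator proficiency (arbitrary 0-1 scale)
    training_rate = 0.08       # baseline inflow from training per week (assumed)
    decay_rate = 0.02          # baseline skill decay per week (assumed)
    history = []

    t = 0.0
    while t < weeks:
        # Assumed coupling: higher autonomy reduces hands-on practice (inflow)
        # but also reduces the manual-skill demand that drives decay (outflow).
        inflow = training_rate * (1.0 - 0.5 * autonomy_level) * (1.0 - proficiency)
        outflow = decay_rate * (1.0 - autonomy_level) * proficiency
        proficiency += (inflow - outflow) * dt
        history.append((t, proficiency))
        t += dt
    return history

# Compare notional low- vs. high-autonomy systems.
for level in (0.2, 0.8):
    final = simulate_proficiency(level)[-1][1]
    print(f"autonomy={level:.1f} -> proficiency after one year ~ {final:.2f}")
```

A fuller model would use separate stocks for skill-, rule-, and knowledge-based behaviors, with uncertainty conditions and system design choices driving the flows between them.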

Drones in Gabon (DIG)

We are partnering with the Nicholas School of the Environment, as well as a conservation group at Wonga Wongue National Park in Gabon, to develop a system for monitoring the African forest elephant, one of the most heavily poached animals in the world. The goal of this effort is to develop a system, built around a quadcopter, a ground control station, and a thermal video camera, that allows the elephants to be monitored from aerial vantage points at night. While the task might seem straightforward, it is motivated by the high uncertainty in current methods for estimating the African forest elephant population. There are several constraints on developing this system, including:

  • Cost – the system must be inexpensive (under $2,000)
  • User-friendly – the system must be simple to use, requiring little training for operators of any background or culture to control it with confidence
  • Maintainable – the system must be easy to repair after the crashes that are, at times, inevitable
  • Reliable – the system must be reliable in remote regions that might have dense canopy surroundings (i.e. communications need to be robust)

At the end of the project, the resulting system, which meets the above criteria, will be given to conservationists at Wonga Wongue National Park in Gabon. Comparable systems can cost several thousand dollars; the low cost and maintainability of our system will keep it robust and affordable to repair within conservationists' budgets.

 

Modeling the Impact of Increasing Autonomy on Core Cognitive Abilities in Unmanned System Operation

We are working with the Office of Naval Research to understand how changes in autonomy in unmanned systems might affect the skills and knowledge necessary to use these systems safely and effectively. Specifically, are "traditional" training programs that focus on prior manual operation still useful as autonomous systems move to more supervisory control approaches? Does traditional training help when the system encounters an emergency? Such training programs are expensive in both time and resources, and identifying the training elements critical to safe and effective operation could help streamline training program design and execution. To investigate these questions, we are designing a human-subjects experiment examining how participants who receive various levels of training are able to control and utilize a UAV in a mock disaster-response scenario. The results will provide insight into how the capabilities of the autonomy should shape the design and implementation of training programs for these systems.

Detecting Long-Distance Driver Cognitive Disengagement

Self-driving cars have the potential to transform personal transportation in terms of both safety and efficiency. The biggest challenge for driverless cars is not the technology, but the integration of the human driver into this deceptively complex system. While humans working collaboratively with autonomous systems can achieve performance greater than either could alone, imperfect systems and fallible human reasoning and attention mean such interactions can also cause degraded, and possibly failed, system performance. In this project, we are testing the hypothesis that functional Near Infrared Spectroscopy (fNIRS), which essentially measures blood oxygenation in the brain, can detect different cognitive states in long-distance driving settings. Objectives include distinguishing distraction from focused attention on the road, discerning possible boredom from a state of drowsiness, and assessing overall mental workload.
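As a sketch of how fNIRS signals might be turned into a cognitive-state estimate, the example below assumes a standard windowed feature-plus-classifier pipeline; the feature set, channel count, class labels, and classifier choice are illustrative assumptions rather than the lab's actual processing chain.

```python
# Illustrative fNIRS cognitive-state classification sketch. Assumes each driving
# epoch is summarized by per-channel oxygenated/deoxygenated hemoglobin (HbO/HbR)
# features; the data here are random placeholders for the real recordings.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Placeholder features: 200 epochs x (16 channels x 2 chromophores x 2 statistics),
# e.g., the mean and slope of HbO and HbR over each epoch.
X = rng.normal(size=(200, 16 * 2 * 2))
# Placeholder labels: 0 = focused attention, 1 = distracted, 2 = drowsy/bored.
y = rng.integers(0, 3, size=200)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```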

A Systems-Theoretic Computational Model for Rail Dispatch/Operations Centers

With the widespread application of positive train control in the United States, understanding the implications of such advanced technologies for future staffing, safety, and overall system performance will become increasingly critical. This effort, sponsored by the Federal Railroad Administration, proposes to develop a systems-theoretic computational model of the locomotive crew and their notional rail dispatch/operations center, validate this model, and then demonstrate how it could be used to help answer questions on safety and economic improvements important to the future of the transportation industry.
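One way to make "computational model" concrete is a simple discrete-event simulation of dispatcher task handling. The sketch below estimates how mean task waiting time changes with staffing level; the arrival rate, service time, and staffing values are assumptions chosen only to show the modeling style, not FRA data or the project's validated model.

```python
# Illustrative discrete-event sketch of a notional dispatch/operations center:
# dispatch tasks (e.g., movement authorities, alarms) arrive at random and are
# served by a pool of dispatchers. All rates and staffing levels are assumed.
import heapq
import random

def simulate(num_dispatchers, arrival_rate=0.5, mean_service=1.5,
             horizon=480.0, seed=1):
    """Return the mean task waiting time (minutes) over one simulated shift."""
    random.seed(seed)
    free_at = [0.0] * num_dispatchers        # time each dispatcher becomes free
    heapq.heapify(free_at)
    t, waits = 0.0, []
    while t < horizon:
        t += random.expovariate(arrival_rate)      # next task arrival
        start = max(t, heapq.heappop(free_at))     # earliest available dispatcher
        waits.append(start - t)
        heapq.heappush(free_at, start + random.expovariate(1.0 / mean_service))
    return sum(waits) / len(waits)

for n in (1, 2, 3):
    print(f"{n} dispatcher(s): mean wait ~ {simulate(n):.1f} min")
```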

Risk-aware, Human-cooperative Planning for Autonomous Systems

The ability to manage risk is an indispensable part of human and machine intelligence when performing complex tasks under high uncertainty, ranging from military operations to space exploration. Although machine intelligence plays an increasingly significant role, in most cases humans are responsible for predicting and coping with risks, while robots simply execute a given plan without explicit awareness of risk. Our vision is to reshape the relationship between human and robot into a cooperative partnership, in which both parties share the responsibility of managing risk. The Humans and Autonomy Lab is collaborating with the Jet Propulsion Laboratory on this project, under the direction of the Office of Naval Research.
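As an illustration of what explicit awareness of risk can mean computationally, one common formulation is chance-constrained plan selection: choose the highest-value plan among those whose estimated failure probability stays under a risk bound set by the human partner. The candidate plans, values, and risk numbers below are made up purely to show the idea, and this is a sketch rather than the project's formulation.

```python
# Illustrative chance-constrained plan selection: pick the highest-value plan
# whose estimated failure probability stays below a human-specified risk bound.
# The candidate plans, their values, and their risk estimates are invented.
from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    expected_value: float       # e.g., science return or mission utility
    failure_probability: float  # estimated probability the plan fails

def select_plan(plans, risk_bound):
    """Return the best plan satisfying P(failure) <= risk_bound, else None."""
    feasible = [p for p in plans if p.failure_probability <= risk_bound]
    return max(feasible, key=lambda p: p.expected_value, default=None)

candidates = [
    Plan("direct route", expected_value=10.0, failure_probability=0.15),
    Plan("detour route", expected_value=7.0, failure_probability=0.03),
    Plan("wait and observe", expected_value=2.0, failure_probability=0.01),
]

# The human partner sets the acceptable risk; the autonomy plans within it.
chosen = select_plan(candidates, risk_bound=0.05)
print(chosen.name if chosen else "no plan meets the risk bound")
```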

Modeling Intent Communication Pathways for Human-Autonomous System Collaboration

The purpose of this project is to determine how to design safe autonomous systems that are aware of the intent of humans in and around the system, and that can reciprocally communicate their own intent to those humans. While previous work in human-centered design includes interfaces that improve operator effectiveness, interfaces designed to communicate a system’s perceived intent to external stakeholders are limited. Thus, how to design autonomous systems that consider the intent of both the external individual and the internal operator remains to be comprehensively explored. This effort includes developing models for computer interpretation of human intent, and identifying methods to communicate system intent to the operator and to exogenous actors.

To verify the resulting models, we will determine how elements of the environment, the computational systems of the robots, and unique traits of humans can be modeled to represent the intent communication pathways that need to be instantiated in the system or in the world around it. The effort will also focus on the needs of various stakeholders, including endogenous and exogenous actors of autonomous systems (e.g., workers and pedestrians in a plant near a robotic forklift). The purpose of applying the models to multiple domains is to generalize them: determining not only whether the resulting intent models apply to both endogenous operators and exogenous actors, but also the extent to which the models of both entities generalize across domains.
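As a sketch of one ingredient such intent models might contain, the example below performs simple Bayesian inference of an external actor's goal from observed motion. The goal set, likelihood model, and observations are assumptions used only for illustration, not the project's actual formulation.

```python
# Illustrative Bayesian goal/intent inference: update a belief over a
# pedestrian's possible goals as new heading observations arrive.
import math

GOALS = {"crosswalk": (10.0, 0.0), "loading dock": (0.0, 10.0)}  # assumed goals

def likelihood(observed_heading, position, goal, kappa=4.0):
    """Score how consistent an observed heading is with moving toward a goal."""
    gx, gy = goal
    px, py = position
    goal_heading = math.atan2(gy - py, gx - px)
    # Von-Mises-style weighting on heading error (unnormalized).
    return math.exp(kappa * math.cos(observed_heading - goal_heading))

def update_belief(belief, observed_heading, position):
    posterior = {g: belief[g] * likelihood(observed_heading, position, GOALS[g])
                 for g in belief}
    total = sum(posterior.values())
    return {g: p / total for g, p in posterior.items()}

belief = {g: 1.0 / len(GOALS) for g in GOALS}   # uniform prior over goals
for heading, pos in [(0.1, (1, 1)), (0.05, (3, 1)), (0.0, (5, 1))]:
    belief = update_belief(belief, heading, pos)
print({g: round(p, 2) for g, p in belief.items()})
```

A complementary piece, not shown here, is the reverse pathway: choosing signals (lights, motion cues, displays) that make the system's own intent legible to the operator and to exogenous actors.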