711 HPW researches Human-Machine Trust

  • By Gina Marie Giardina
  • 711th Human Performance Wing
WRIGHT-PATTERSON AIR FORCE BASE, Ohio -- A research team in the 711th Human Performance Wing’s Airman Systems Directorate here focuses on how humans make reliance decisions with technology, or in other words, how humans develop, maintain and lose trust.

 

At first glance, this may seem like a simple question, but Dr. Joseph Lyons, Human Trust and Interaction Branch technical advisor, explained that there are many variables to consider when thinking about the entire trust process.

 

“We look at everything from individual differences (things that are stable about each person, such as personality, experiences and biases) to personal preferences for interacting with people or machines, to machine performance and other characteristics,” said Lyons. “These are all things we bring to any situation, and this is how we make decisions about whether or not to trust a machine.”

 

Lyons explained that a person’s various traits, paired with an initial impression of a machine, create the information that then guides behavior and interaction with the technology. Trusting, he continued, is about the willingness to be vulnerable to that machine.

 

An everyday example is the automated cruise control in a car, said Lyons. “If you have it and use it, then you obviously trust it. If you have it, but you never use it, then that could be a form of distrust.”

 

The key question, explained Lyons, is why you use it or don’t. He went on to explain that machines are being given more and more decision authority; as they become more capable and take on more of that authority, appropriate reliance on these technologies becomes increasingly important.

 

“Studies have shown that the greater the autonomy in technology, the greater the risk to trust when mistakes are made. This is why it is important to develop appropriate trust in technology,” said Lyons.

 

“Autopilot is an example,” he said. “If a pilot flying a plane is totally dependent on the autopilot and it makes a mistake, that pilot’s overreliance on the system may lead to an error in decision making.”

 

Lyons’ team also looks at the technology itself: what types of interactions it has with the user, what information it displays, and how familiar it seems to the user.

 

“For familiarity, think about something like a GPS,” noted Lyons. “There are lots of different GPS units out there now, but the first one that came out had to deal with a lot of calibration issues.”

 

“But as more and more systems came on board, people became more familiar with what a GPS was, to the point that they trusted them without questioning,” he said. “In fact, today there are cases where people will follow GPS guidance into a lake. Clearly, that is an example of too much trust.”

 

Autonomous cars will likely be similar, Lyons predicted. “How familiar people are with the technologies makes a big difference in terms of how skeptical they will be.”

 

Another area the Human Trust and Interaction team researches is transparency.

 

“Transparency is the idea of developing shared awareness and shared intent between a human and a machine,” explained Lyons. “It might sound funny to talk about the intent of a machine, but if we’re giving these things some level of decision-making autonomy, the intent of that system, even if just perceived, makes a big difference in how we interact with it.”

 

The purpose of pairing a machine or technology with a human is to add benefit and improve performance. While Lyons and his team research designs that work, they also identify designs that do not work and analyze why.

 

“The paper clip cartoon character that used to pop up on computer screens is a good general example of a design that did not add benefit,” said Lyons. “Most people were just annoyed with it because it had zero shared intent with its users and got in their way.”

 

But a design that does work is the Automatic Ground Collision Avoidance System, or Auto GCAS. Pioneered by a partnership between the Air Force Research Laboratory, NASA, the Air Force Test Center and Lockheed Martin, the system is designed to reduce the number of accidents due to controlled flight into terrain, or CFIT, a leading cause of pilot fatalities. It briefly takes over the controls and corrects the flight path if a pilot suffers G-force induced loss of consciousness, or G-LOC.

 

But do pilots trust this system?

 

“We go out and try to gauge pilots’ trust in this new technology that just so happens to take full control of the pilot’s aircraft, which pilots initially did not really like,” explained Lyons. “But Auto GCAS is out in the field, and it has saved the lives of four pilots since 2014.”


So while pilots might not have liked the idea of this technology at first, said Lyons, over time they have learned to appreciate that it can save their lives. “And not only that, but it does not interfere very often.”

 

Lyons said that part of his team’s focus with Auto GCAS is also to identify if and when the system does interfere, and to feed that information back to stakeholders so they can continue to improve the system’s effectiveness.

 

While the branch works with fielded systems like Auto GCAS, a considerable part of their research is experimental in nature.

 

“We will work to simulate a human-machine interaction of some kind, and study the factors that shape trust in that context,” Lyons explained.

 

Some of these studies, Lyons said, focus on the impact of multitasking and of different error types (e.g., false alarms versus misses), while others examine individual differences such as executive functioning capabilities, trait trust and suspicion, and automation schemas, among many others. He noted that the team also looks at how software engineers develop trust in code and is starting to explore the impact of tactile cues on trust.

 

“We can think of everything from a manned fighter pilot working with an unmanned robotic system, where they are flying together for a mission: how does the trust dynamic work in that relationship? All the way down to a collaborative robot that a person works with side by side in a workspace: what cues does it need to give so that person is not scared every time it moves its arm toward them?” he explained.

 

“Looking into the future, where we’re headed with all of this is trust of autonomy,” he said. “If we are going to give an intelligent agent a capability to do something in relation to us, we really need to understand the trust process in relation to that system.”

 

The Airman Systems Directorate is one of three directorates in the 711th Human Performance Wing, which is part of the Air Force Research Laboratory, headquartered at Wright-Patterson Air Force Base.