
Trust in haptic assistance

With advancements in augmented reality for intelligent assistance systems, such as haptic assistance or superimposed visualizations, people must learn to rely on additional, artificially generated cues. Realistically, the accuracy of this artificial cue may be affected by the sensors (e.g., offset, saturation) or models (e.g., mismatch with reality) used by the intelligent system to generate the assistance.

I am currently studying how people learn to trust an artificial haptic cue from an assistance system that is not always accurate. Inspiration is taken from neuroscience studies on the reliability-based weighting of naturally occurring sensory cues (e.g., vision, proprioception).
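The reliability-based weighting mentioned above is commonly modeled as inverse-variance (maximum-likelihood) cue combination: each cue is weighted in proportion to its reliability, i.e., the inverse of its noise variance. The sketch below is an illustrative example of that standard model, not the specific model used in this work; the variable names and numbers are assumptions.

```python
def combine_cues(estimates, variances):
    """Fuse independent Gaussian cue estimates by inverse-variance weighting.

    Each cue's weight is 1/variance; the fused estimate is the weighted
    mean, and the fused variance is the inverse of the summed weights
    (never larger than the best single cue's variance).
    """
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    fused = sum(w * x for w, x in zip(weights, estimates)) / total
    fused_variance = 1.0 / total
    return fused, fused_variance

# Hypothetical example: a low-noise visual cue at 0.0 and a noisier
# haptic cue at 1.0. The fused estimate lies closer to the more
# reliable (visual) cue.
fused, var = combine_cues([0.0, 1.0], [0.5, 2.0])
```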

Preliminary results show that people can use the history of inaccuracies to appropriately adjust their trust in the haptic assistance. These results have favorable implications for human-machine interaction under real-world conditions: intelligent assistance systems need not be perfect, because humans can adjust their trust in the system to match its accuracy and reliability.
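One way to make the idea of trust adjusted from error history concrete is to estimate the assistive cue's noise variance over a sliding window of recent errors and derive a weight from it. This is a hypothetical illustration under assumed names and parameters, not the model used in the study.

```python
from collections import deque


class TrustEstimator:
    """Track recent cue errors and derive an inverse-variance trust weight."""

    def __init__(self, window=20, floor=1e-6):
        # Sliding window of the most recent observed errors.
        self.errors = deque(maxlen=window)
        # Variance floor so a (temporarily) perfect cue does not
        # produce an infinite weight.
        self.floor = floor

    def update(self, error):
        self.errors.append(error)

    def variance(self):
        n = len(self.errors)
        if n == 0:
            return float("inf")  # no evidence yet: treat cue as unreliable
        mean = sum(self.errors) / n
        var = sum((e - mean) ** 2 for e in self.errors) / n
        return max(var, self.floor)

    def weight(self):
        """Relative trust in the cue, proportional to 1 / error variance."""
        return 1.0 / self.variance()
```

A cue whose recent errors grow larger accumulates a higher estimated variance and therefore receives a lower weight, mirroring the down-weighting of an unreliable haptic cue.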


T. L. Gibo, W. Mugge, and D. A. Abbink.
Trust in haptic assistance: weighting visual and haptic cues based on error history.
Experimental Brain Research, 235(8): 2533-2546, 2017.