Human-robot interaction could be greatly enhanced by a better understanding of how humans interpret robot behaviors that mimic human expressions of emotion. Using a pre-designed set of such robot animations as our starting point, we seek to increase its usability by labeling it with continuous interval ratings of perceived valence and arousal. To derive these ratings and to explore how humans perceive dynamic expressions of emotion performed by a humanoid robot, we conducted an experiment with 20 participants. The results demonstrate a good distribution of robot behaviors across the valence-arousal space, with a significantly reduced spread of valence scores for behaviors with low perceived arousal. We used inter-rater reliability methods to validate the consistency of the ratings, yielding a reusable animation set with validated emotion labels.