Audio-Tactile Proximity Feedback for Enhancing 3D Manipulation



Abstract

In the presence of conflicting or ambiguous visual cues in complex scenes, performing 3D selection and manipulation tasks can be challenging. To improve motor planning and coordination, we explore audio-tactile cues to inform the user about the presence of objects in hand proximity, e.g., to avoid unwanted object penetrations. We do so through a novel glove-based tactile interface, enhanced by audio cues. Through two user studies, we illustrate that proximity guidance cues improve spatial awareness, hand motions, and collision avoidance behaviors, and show how proximity cues in combination with collision and friction cues can significantly improve performance.


CCS Concepts: • Human-centered computing → Haptic devices, Auditory feedback, Interaction techniques


Keywords: Tactile feedback, 3D user interface, hand guidance


Despite advances in the field of 3D user interfaces, many challenges remain unsolved [32]. For example, it is still difficult to provide high-fidelity, multisensory feedback [30]. Yet, as in real life, many tasks depend on multisensory cues. In complex or dense scenes, 3D interaction can be difficult: hand motions are hard to plan and control in the presence of ambiguous or conflicting visual cues, which can lead to depth interpretation issues in current unimodal 3D user interfaces. This, in turn, can limit task performance [32]. Here, we focus on 3D manipulation tasks in complex scenes. Consider a virtual reality assembly training procedure [6], in which a tool is selected by hand, moved through a confined space, and then used to turn a screw. Multiple visual and somatosensory (haptic) cues need to be integrated to perform this task. A typical problem during manipulation in unimodal interfaces in such scenarios is hand-object penetration, where the hand unintentionally passes through an object. Such penetrations can occur frequently, especially when users cannot accurately judge the spatial configuration of the scene around the hand, making movement planning and correction difficult. However, as in real-world scenarios, multisensory cues can disambiguate conflicting visual cues, optimizing 3D interaction performance [53]. Cues can be used proactively and adaptively, affording flexible behavior during task performance [53].

Motor Planning and Coordination

Planning and coordination of selection and manipulation tasks is generally performed along a task chain with key control points. These control points typically relate to contact-driven biomechanical actions [22]. As such, they contain touch cues tied to events such as touching objects to select them (selection) or moving them along a trajectory (manipulation). This may involve various hand motion and pose actions performed within the scene context, e.g., steering the hand during manipulation tasks. There should be sufficient indication of where the hand touches objects upon impact (collision contact points) or slides along them (friction contact points), while other indications, such as object shape or texture, can also be beneficial [24]. Multisensory stimuli enable learning of sensorimotor correlations that guide future actions, e.g., via corrective action patterns to avoid touching (or penetrating) an object [22]. In real life, to steer hand motions and poses, we typically depend on visual and physical constraints; for example, lightly touching a surrounding object might trigger a corrective motion. However, manipulation tasks are also performed independently of touch cues, namely through self-generated proprioceptive cues [38]. Such cues may have been acquired through motor learning [47]. Although not the main focus of this work, motor learning can be an important aspect of skill transfer between a 3D training application and the real world [11, 28], thereby potentially also “internalizing” proprioception-driven actions for later recall.

Research Questions

Our novel guidance approach, described in more detail in section 3, is based on audio-tactile proximity feedback that communicates the direction and distance of objects surrounding the user’s hand. This feedback is used to plan and coordinate hand motion in 3D scenes. Our research is driven by the following research questions (RQs), which assess how such feedback can guide hand motion before and during 3D manipulation tasks.

RQ1. Do scene-driven proximity cues improve spatial awareness while exploring the scene?

RQ2. Can hand-driven proximity cues help users avoid unwanted object penetration, or even avoid touching proximate objects, during manipulation tasks?
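To make the idea of directional proximity feedback concrete, the following is a minimal sketch of how per-tactor vibration intensities could be derived from the distance and direction of nearby objects. The function name, the linear distance falloff, and the cosine-based directional weighting are illustrative assumptions, not the authors’ implementation:

```python
import math

def tactor_intensities(hand_pos, obstacle_points, tactor_dirs, d_max=0.15):
    """Map nearby obstacle points to per-tactor vibration intensities.

    Each tactor is described by a unit direction vector on the hand.
    An obstacle drives a tactor harder the closer it is (within d_max
    metres) and the better its direction matches the tactor's direction.
    """
    intensities = [0.0] * len(tactor_dirs)
    for p in obstacle_points:
        offset = [p[i] - hand_pos[i] for i in range(3)]
        dist = math.sqrt(sum(c * c for c in offset))
        if dist == 0.0 or dist > d_max:
            continue  # out of feedback range (or degenerate)
        direction = [c / dist for c in offset]
        proximity = 1.0 - dist / d_max  # 1.0 at contact, 0.0 at d_max
        for t, tdir in enumerate(tactor_dirs):
            # Cosine between obstacle direction and tactor direction.
            alignment = sum(direction[i] * tdir[i] for i in range(3))
            if alignment > 0.0:  # obstacle lies on this tactor's side
                intensities[t] = max(intensities[t], proximity * alignment)
    return intensities
```

An actuation loop would call such a function on every tracking frame and drive each vibrotactor’s amplitude with the corresponding intensity; an audio channel could reuse the same proximity value, e.g., to modulate pitch or volume.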

In this paper, we measure the effect of proximity cues in combination with other haptic cue types (in particular collision and friction). Towards this goal, study 1 (scene exploration) explores the general usefulness of proximity cues for spatial awareness and briefly looks at selection, while study 2 looks specifically at the effect of proximity on 3D manipulation tasks. In our studies, we specifically look at touch and motion aspects, while leaving support for pose optimization as future work. As a first step, we focus on feedback independently of visual cues, to avoid confounds or constraints imposed by such cues.


Our research extends previous work by Ariza et al. [3], which looked into low-resolution, non-directional proximity feedback for 3D selection purposes. We provide new insights into this area of research by looking at higher-resolution, directional cues for manipulation (instead of selection) tasks. Our studies illustrate the following benefits of our introduced system:

• In the scene exploration task, we show that a higher number of tactors (18 vs. 6) aids spatial awareness, improving both proximity feedback (20.6%) and contact point perception (30.6%). While the latter is not unexpected, the results indicate the usefulness of a higher-resolution tactile feedback device.

• We illustrate how the addition of either audio or tactile proximity cues can reduce the number of object collisions by up to 30.3% and errors (object pass-throughs) by up to 56.4%.

• Finally, while friction cues do not show a significant effect on measured performance, subjective performance ratings increase substantially: users felt that with friction (touch) cues they could perform faster (18.8%), more precisely (21.4%), and adjust hand motion more quickly (20.7%).


In this work, we explored new approaches to provide proximity cues about objects around the hand to improve hand motor planning and action coordination during 3D interaction. We investigated the usefulness of two feedback models, outside-in and inside-out, for spatial exploration and manipulation. Such guidance can be highly useful for 3D interaction in applications that suffer from, e.g., visual occlusions. We showed that proximity cues can significantly improve spatial awareness and performance by reducing the number of object collisions and errors, addressing some of the main problems associated with motor planning and action coordination in scenes with visual constraints, and also reducing inadvertent pass-through behaviors. As such, our results can inform the development of novel 3D manipulation techniques that use tactile feedback to improve interaction performance. A logical next step is to integrate our methods into actual 3D selection and manipulation techniques, while also studying the interplay with different forms of visualization (e.g., [51]) in application scenarios. The usage and usefulness of two gloves with audio-tactile cues is another interesting avenue for future work, e.g., to see whether audio cues can be mapped to a specific hand. Furthermore, we focused on haptic feedback alone to eliminate the potential effects of any given visualization method, such as depth perception issues caused by transparency. Finally, we are looking at creating a wireless version of the glove and at improving tracking further, e.g., by using multiple Leap Motion cameras [21].

About KSRA

The Kavian Scientific Research Association (KSRA) is a non-profit organization founded in December 2013 to provide research and educational services. The members of the community initially formed a virtual group on the Viber social network, and the core of the association was formed with these members as founders. These individuals, led by Professor Siavosh Kaviani, decided to launch a scientific research association with an emphasis on education.

The KSRA research association, as a non-profit organization, is committed to providing research services in the field of knowledge. The main beneficiaries of this association are public and private knowledge-based companies, students, researchers, professors, universities, and industrial and semi-industrial centers around the world.

Our main services are based on education for all people across the world. We want to integrate research and education. We believe education is a fundamental human right, so our services concentrate on inclusive education.

The KSRA team partners with under-served local communities around the world to improve access to, and the quality of, knowledge-based education, to amplify and augment learning programs where they exist, and to create new opportunities for e-learning where traditional education systems are lacking or non-existent.




Alexander Marquardt, Bonn-Rhein-Sieg University of Applied Sciences, Sankt Augustin, Germany
Ernst Kruijff, Bonn-Rhein-Sieg University of Applied Sciences, Sankt Augustin, Germany
Christina Trepkowski, Bonn-Rhein-Sieg University of Applied Sciences, Sankt Augustin, Germany
Jens Maiero, Bonn-Rhein-Sieg University of Applied Sciences, Sankt Augustin, Germany
Andrea Schwandt, Bonn-Rhein-Sieg University of Applied Sciences, Sankt Augustin, Germany
André Hinkenjann, Bonn-Rhein-Sieg University of Applied Sciences, Sankt Augustin, Germany
Wolfgang Stuerzlinger, Simon Fraser University, Surrey, Canada
Johannes Schöning, University of Bremen, Bremen, Germany





Published in

VRST ’18: Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology






Nasim Gazerani was born in 1983 in Arak. She holds a Master's degree in Software Engineering from UM University of Malaysia.


Professor Siavosh Kaviani was born in 1961 in Tehran. He held a professorship. He holds a Ph.D. in Software Engineering from the QL University of Software Development Methodology and an honorary Ph.D. from the University of Chelsea.


Somayeh Nosrati was born in 1982 in Tehran. She holds a Master's degree in artificial intelligence from Khatam University of Tehran.