Mickeal Verschoor, Dan Casas, and Miguel A. Otaduy
ACM Transactions on Graphics (Proc. of ACM SIGGRAPH), Volume 39, Number 4 – July 2020 (pdf)
Abstract: We present a method to render virtual touch, such that the stimulus produced by a tactile device on a user’s skin matches the stimulus computed in a virtual environment simulation. To achieve this, we solve the inverse mapping from skin stimulus to device configuration thanks to a novel optimization algorithm. Within this algorithm, we use a device-skin simulation model to estimate rendered stimuli, we account for trajectory-dependent effects efficiently by decoupling the computation of the friction state from the optimization of device configuration, and we accelerate computations using a neural-network approximation of the device-skin model. Altogether, we enable real-time tactile rendering of rich interactions including smooth rolling, but also contact with edges, or frictional stick-slip motion. We validate our algorithm both qualitatively through user experiments, and quantitatively on a BioTac biomimetic finger sensor.
Summary: In this paper we present a tactile rendering method that produces virtual touch when a user touches objects or surfaces in a virtual environment. The touch sensation is produced by small thimble devices attached to the user's fingers. Each thimble has a small disc that pushes on the finger pad. By changing the orientation and depth of this disc, various stress distributions can be produced inside the human finger. The goal is to produce the matching stimulus whenever the user touches a surface or object in the virtual environment.
Using the CLAP hand simulator, the user can interact with the virtual environment in a natural way. Since CLAP simulates the human skin with an FEM simulation, the simulated stress distribution can be used as input to our method. The challenge is that the workspace of the thimble device is limited, meaning it can produce only a small range of stress distributions. In contrast, the user can touch arbitrary virtual objects or surfaces, so the stress distributions obtained from the simulation are very rich. To produce a stress distribution inside the human finger that matches the simulated one as closely as possible, we use an optimization method.
Our approach runs an optimization that finds, for a specific target stress, the corresponding device configuration. To do so, we need a mapping from device configuration to rendered stress. In principle, another instance of the same hand simulation could produce these stress distributions for a given device state, but this is too computationally demanding. Instead, a neural network is used to learn this mapping: by collecting stress distributions for a large range of device configurations, we train a network that maps device state to stress. Using this network inside the optimization loop, we find the device configuration whose rendered stress is as close as possible to the target. Because the optimization cannot produce device states outside the workspace of the device, the network is never queried outside its learned space. Hence, no extrapolation occurs.
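The structure of such a loop can be sketched as follows. This is a minimal illustration, not the paper's actual solver or network: the stress model here is a toy fixed-weight MLP, and all dimensions, bounds, and parameter names are hypothetical. The key ingredients it shows are (a) the learned device-to-stress mapping queried inside the loop, and (b) projection onto the device workspace, which keeps every query inside the network's training range.

```python
# Sketch of the configuration optimization with a learned device->stress
# surrogate. All names, dimensions, and bounds are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the trained network: a fixed 2-layer MLP mapping a 3-D
# device configuration (disc tilt x, tilt y, depth) to a 16-D stress
# descriptor. The real method uses a network trained on simulation data.
W1, b1 = rng.normal(size=(8, 3)), rng.normal(size=8)
W2, b2 = rng.normal(size=(16, 8)), rng.normal(size=16)

def stress_model(q):
    return W2 @ np.tanh(W1 @ q + b1) + b2

# Device workspace bounds: the optimizer never leaves them, so the network
# is only ever evaluated inside its training range (no extrapolation).
Q_MIN = np.array([-0.5, -0.5, 0.0])
Q_MAX = np.array([0.5, 0.5, 1.0])

def optimize_config(target, q0, lr=1e-2, iters=300, eps=1e-5):
    """Projected gradient descent on ||stress_model(q) - target||^2,
    with finite-difference gradients and simple backtracking."""
    def loss(q):
        r = stress_model(q) - target
        return float(r @ r)

    q, cur = q0.copy(), None
    cur = loss(q)
    for _ in range(iters):
        grad = np.array([(loss(q + eps * np.eye(3)[i]) - cur) / eps
                         for i in range(3)])
        step = lr
        while step > 1e-8:  # backtrack until the error decreases
            q_new = np.clip(q - step * grad, Q_MIN, Q_MAX)  # project
            new = loss(q_new)
            if new < cur:
                q, cur = q_new, new
                break
            step *= 0.5
    return q, cur

# Demo: pick a reachable target so a near-exact match exists.
q_true = np.array([0.2, -0.1, 0.6])
target = stress_model(q_true)
err0 = float(np.sum((stress_model(np.zeros(3)) - target) ** 2))
q_opt, err = optimize_config(target, q0=np.zeros(3))
```

The projection step (`np.clip`) is what the summary's last sentences refer to: because every candidate configuration is clamped to the workspace, the surrogate network is never asked to extrapolate.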
Results: We have tested and validated the rendering method using a BioTac sensor. We trained a neural network on a large set of sensor readings, labeled with the corresponding device configurations. After learning the mapping from device configuration to sensor readings, the network is used inside the optimization loop. To validate the method, we presented a target stimulus on the BioTac sensor, optimized the device configuration for that input, and used the resulting configuration to control the device while attached to the BioTac sensor. 1) When the stimulus was generated by the thimble device itself, the target and the obtained sensor values were very similar. 2) For interactions with flat surfaces, the obtained sensor values were also very close to the target values. 3) For arbitrary interactions, the method produced a close match between target and obtained sensor values; even when the target sensor values lay outside the training data, the method computed a reasonable match. This validation demonstrates that, for a given target stress description, the method finds a device configuration that produces a stress description very similar to the target.
The same optimization procedure is used to drive the thimble device using target stress values from the user-controlled simulation. In this case, we augmented the device configuration with an additional friction state. Since the hand simulation provides this information, and since different device configurations with different friction states could lead to the same stress state inside the hand simulation, we added a friction descriptor to the device configuration. As a result, the neural network could better distinguish between such states, which improved the optimization procedure.
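The augmentation itself is simple to picture: the friction descriptor is concatenated to the device configuration before it is fed to the network, and it stays fixed during the solve since it comes from the simulation, not from the optimizer. The sketch below uses made-up names and dimensions; the paper's actual descriptor differs.

```python
# Hypothetical sketch of the augmented network input. The friction state is
# computed by the hand simulation once per frame and held fixed while only
# the device configuration is optimized.
import numpy as np

def network_input(device_config, friction_state):
    # Concatenating the friction descriptor lets the network disambiguate
    # stick vs. slip contacts that would otherwise map to similar stresses.
    return np.concatenate([device_config, friction_state])

q = np.array([0.2, -0.1, 0.6])    # e.g., disc tilt x, tilt y, depth
f = np.array([1.0, 0.03, -0.01])  # e.g., stick flag and tangential slip
x = network_input(q, f)
```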
To validate that a stress-based approach provides the user with a richer sense of touch than purely geometry-based approaches, a user study was conducted. Its main conclusion is that a stress-based approach can reproduce certain stimuli better than a geometric approach can. For example, our stress-based approach was able to reproduce stimuli obtained by sliding a finger over a surface. Whereas purely geometry-based methods would see no difference in such cases, a stress-based approach produces different renderings when the finger is sliding than when it is merely touching.