I have been conducting research on interactive technologies and systems since 1995, when I began studying robotics for my master's degree with a focus on robot control and planning. For my Ph.D., I majored in haptics with an emphasis on rendering and perception, and as a postdoctoral researcher, I had the opportunity to work with large-scale virtual reality systems at the Envision Center for Data Perceptualization. Since then, with my research group at POSTECH, I have had the privilege of exploring various interesting and important research topics together with my talented postdoctoral researchers, graduate students, and, occasionally, undergraduate students. I have published over 220 international research articles (as of 2025), and my research results have been applied to mobile devices, automobiles, and consumer electronics. Below, I briefly introduce some of these research projects; this page is and will remain under construction, with new items being added!
In this eight-year collaborative project, we are developing an effective system that conveys the Korean language through the tactile sensory channel, with the long-term goal of replacing or enhancing Korean Braille. We also study how this tactile communication of Korean is processed in the human brain.
We work with two external research groups, one led by Prof. Joonbum Bae and the other by Prof. Sung-Phil Kim. Prof. Bae's group develops a comfortable wearable multimodal haptic display. My team at POSTECH designs effective coding methods that represent Korean with multimodal tactile stimuli and iteratively refines them through longitudinal user studies. Finally, Prof. Kim's group assesses the performance of our system by examining the users' brain responses.
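To give a rough sense of what coding Korean into tactile stimuli involves, the sketch below decomposes precomposed Hangul syllables into their constituent jamo using the standard Unicode layout and then looks each jamo up in a jamo-to-tactile-pattern table. The table and the (actuator, pattern) tokens are purely hypothetical placeholders; this is only a minimal illustration of the symbol-to-stimulus mapping problem, not the coding scheme developed in this project.

```python
# Illustrative sketch (not our actual coding scheme): decompose Hangul syllables
# into jamo via the standard Unicode layout, then look up a per-jamo tactile
# pattern. All pattern values below are hypothetical placeholders.

HANGUL_BASE = 0xAC00
NUM_INITIALS, NUM_MEDIALS, NUM_FINALS = 19, 21, 28

INITIALS = [chr(0x1100 + i) for i in range(NUM_INITIALS)]            # choseong
MEDIALS = [chr(0x1161 + i) for i in range(NUM_MEDIALS)]              # jungseong
FINALS = [""] + [chr(0x11A8 + i) for i in range(NUM_FINALS - 1)]     # jongseong

def decompose(syllable: str):
    """Split one precomposed Hangul syllable into (initial, medial, final) jamo."""
    code = ord(syllable) - HANGUL_BASE
    if not 0 <= code < NUM_INITIALS * NUM_MEDIALS * NUM_FINALS:
        raise ValueError(f"not a Hangul syllable: {syllable!r}")
    return (INITIALS[code // (NUM_MEDIALS * NUM_FINALS)],
            MEDIALS[(code % (NUM_MEDIALS * NUM_FINALS)) // NUM_FINALS],
            FINALS[code % NUM_FINALS])

# Hypothetical mapping from jamo to (actuator ID, pattern) on a wearable display.
TACTILE_CODE = {chr(0x1100): (1, "short"), chr(0x1161): (5, "long")}  # for "ㄱ", "ㅏ", ...

def encode_word(word: str):
    """Turn a Korean word into a sequence of (actuator, pattern) tactile tokens."""
    tokens = []
    for ch in word:
        for jamo in decompose(ch):
            if jamo:  # skip the empty final
                tokens.append(TACTILE_CODE.get(jamo, (0, "unknown")))
    return tokens

print(encode_word("가"))  # [(1, 'short'), (5, 'long')]
```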
This research is part of the ITRC for human cognition-intelligence augmentation, which I have directed since 2024.
Shared autonomy refers to a situation in which both the human operator and the robot possess some degree of intelligence and collaborate, at least partly autonomously, to perform tasks. For example, instead of teleoperating the robot's actions continuously with a joystick, the human operator can simply give a verbal command, such as "move right" or "pick up the object."
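As a minimal illustration of this idea (with hypothetical command names and primitives, not our actual system), the sketch below maps a high-level operator command to a parameterized motion primitive that the robot then carries out autonomously, rather than being driven continuously by a joystick.

```python
# Toy shared-autonomy dispatcher: the operator issues a high-level command, and
# the robot resolves it into an autonomous motion primitive. Names, parameters,
# and primitives are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class Primitive:
    name: str
    params: dict

def interpret(command: str) -> Primitive:
    """Map a parsed verbal or gesture command to a robot motion primitive."""
    verb, *args = command.lower().split()
    if verb == "move":
        return Primitive("translate", {"direction": args[0], "distance_m": 0.2})
    if verb == "pick":
        # "pick up the object": onboard perception chooses the grasp pose autonomously.
        return Primitive("grasp", {"target": " ".join(args[1:]) or "nearest object"})
    return Primitive("hold", {})

def execute(primitive: Primitive) -> None:
    # Placeholder for the robot-side autonomy (perception, planning, control).
    print(f"[robot] executing {primitive.name} with {primitive.params}")

for cmd in ["move right", "pick up the object"]:
    execute(interpret(cmd))
```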
Since 2002, my group has participated in a large project centered on robot teleoperation, shared autonomy, and XR, funded by the NRF under the Future Convergence Pioneer Program. The team includes researchers in robotics, XR, haptics, and communication.
The tasks of my group include:
Multisensory interaction methods for users in XR environments that are effective for shared-autonomy-based robot teleoperation
Gesture-based control of autonomy-equipped robots
Haptic cueing methods that deliver the robot's states and events to users
XR-based learning processes that help users learn shared-autonomy-based robot control methods
One of the major challenges facing haptics research is the lack of content that allows users to fully appreciate the added benefits of haptic effects. A key reason is that producing good haptic-enabled content requires specialized authoring software and substantial expertise from content designers and developers. Even with such tools and expertise, manual authoring remains a lengthy and laborious process. Responding to this need, my research group has been developing algorithms that automatically generate haptic content from audiovisual data, which can be used independently or as part of multisensory authoring programs.
In this project, we aim to create core algorithms that convert sound into haptic effects to provide visual-audio-tactile multisensory experiences. Compared to our previous achievements in this research space, the current emphasis is on three aspects: semantic conversion, full-body presentation, and accessibility:
Semantic: The conversion considers the semantics of sounds; we classify the type of each sound and tailor the conversion to that type (a toy sketch of this idea follows the list).
Full body: Haptic effects are presented over the user's entire body using a full-body haptic suit, as in the movie Ready Player One.
Accessibility: Users with hearing loss and hearing users can collaborate in a metaverse, e.g., playing a VR game together, while having similar sensory and user experiences.
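To make the semantic-conversion idea concrete, here is a toy sketch under simplifying assumptions: it labels a short audio frame with a crude spectral heuristic and maps the frame's amplitude envelope to a vibrotactile intensity, with class-dependent gain and vibration frequency. The thresholds, class names, and mapping are hypothetical placeholders, not the algorithms reported in the publications below.

```python
# Toy sound-to-haptic conversion: classify a short audio frame by a simple
# spectral heuristic, then map its amplitude envelope to vibration intensity,
# tailoring the gain and vibration frequency per sound class.

import numpy as np

SR = 44100  # audio sample rate (Hz)

def classify_frame(frame: np.ndarray) -> str:
    """Crude class label from the spectral centroid (stand-in for a real classifier)."""
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / SR)
    centroid = float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12))
    return "impact" if centroid < 1000 else "ambience"

def to_haptic(frame: np.ndarray) -> dict:
    """Map one audio frame to a vibrotactile command, tailored to the sound class."""
    label = classify_frame(frame)
    envelope = float(np.sqrt(np.mean(frame ** 2)))                    # RMS amplitude
    gain, vib_freq = (2.0, 80) if label == "impact" else (0.5, 200)   # hypothetical tuning
    return {"class": label, "frequency_hz": vib_freq, "intensity": min(1.0, gain * envelope)}

# Synthesize a 20 ms test frame: a low-frequency "thud" (decaying 150 Hz tone).
t = np.arange(int(0.02 * SR)) / SR
thud = np.sin(2 * np.pi * 150 * t) * np.exp(-t * 60)
print(to_haptic(thud))  # e.g., {'class': 'impact', 'frequency_hz': 80, 'intensity': ...}
```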
This project has been supported by the National Research Foundation (NRF) of Korea under the Mid-Career Researcher Program since 2022. It has enabled us to publish a good number of research papers; some major publications are listed below:
Gyeore Yun, Minjae Mun, Jungeun Lee, Dong-Geun Kim, Hong Z. Tan, and Seungmoon Choi, “Generating Real-Time, Selective, and Multimodal Haptic Effects from Sound for Gaming Experience Enhancement,” In Proceedings of the ACM CHI Conference on Human Factors in Computing Systems (CHI), Article No. 315, pp. 1-17, April 23-28, 2023.
Dong-Geun Kim, Jungeun Lee, Gyeore Yun, Hong Z. Tan, and Seungmoon Choi, “Sound-to-Touch Crossmodal Pitch Matching for Short Sounds,” IEEE Transactions on Haptics, vol. 17, no. 1, pp. 2-7, 2024 (Also presented in the IEEE Haptics Symposium 2024).
Jiwan Lee, Gyeore Yun, and Seungmoon Choi, “Audiovisual-Haptic Simultaneity Across the Body in Gameplay Viewing Experiences,” In Haptics: Understanding Touch; Technology and Systems; Applications and Interaction. (Proceedings of EuroHaptics 2024), Lecture Notes in Computer Science, vol. 14768, pp. 43-55, June 30-July 3, 2024.
Gyeore Yun and Seungmoon Choi, “Real-time Semantic Full-Body Haptic Feedback Converted from Sound for Virtual Reality Gameplay,” In Proceedings of the ACM CHI Conference on Human Factors in Computing Systems (CHI), Article No. 497, pp. 1-17, 2025.
Dajin Lee and Seungmoon Choi, “Perceptual Alignment of Spatial Auditory and Tactile Stimuli for Effective Directional Cueing,” IEEE Transactions on Visualization and Computer Graphics, vol. 31, no. 5, pp. 2589-2599, 2025 (Also presented in the 2025 IEEE Conference on Virtual Reality and 3D User Interfaces).