Posts by Collection

portfolio

projects

publications

Color sommelier: Interactive color recommendation system based on community-generated color palettes

Published in Adjunct Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology, 2015

We present Color Sommelier, an interactive color recommendation system based on community-generated color palettes that helps users to choose harmonious colors on the fly. We used an item-based collaborative filtering technique with Adobe Color CC palettes in order to take advantage of their ratings that reflect the general public's color harmony preferences. Every time a user chooses a color(s), Color Sommelier calculates how harmonious each of the remaining colors is with the chosen color(s). This interactive recommendation enables users to choose colors iteratively until they are satisfied. To illustrate the usefulness of the algorithm, we implemented a coloring application with a specially designed color chooser. With the chooser, users can intuitively recognize the harmony score of each color based on its bubble size and use the recommendations at their discretion. The Color Sommelier algorithm is flexible enough to be applicable to any color chooser in any software package and is easy to implement.
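
The scoring idea in the abstract can be sketched as a simple item-based aggregation: each candidate color is scored by how often, and how highly rated, it co-occurs with the already-chosen colors in community palettes. The palette data, rating weighting, and scoring formula below are illustrative assumptions, not the authors' actual Adobe Color CC dataset or algorithm.

```python
from collections import defaultdict

def harmony_scores(palettes, chosen):
    """Score each candidate color by how often (and how well-rated)
    it co-occurs with the already-chosen colors in community palettes."""
    scores = defaultdict(float)
    for colors, rating in palettes:
        overlap = chosen & set(colors)
        if not overlap:
            continue  # this palette says nothing about the chosen colors
        for c in colors:
            if c not in chosen:
                # weight co-occurrence by palette rating and overlap size
                scores[c] += rating * len(overlap)
    return dict(scores)

palettes = [
    (["navy", "teal", "cream"], 4.5),
    (["navy", "coral", "cream"], 3.0),
    (["olive", "rust"], 5.0),
]
print(harmony_scores(palettes, {"navy"}))  # "cream" scores highest (7.5)
```

Rescoring after every selection is what makes the recommendation interactive: the user picks a color, the remaining candidates are re-ranked, and the loop repeats until the user is satisfied.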

Recommended citation: KyoungHee Son, Seo Young Oh, Yongkwan Kim, Hayan Choi, Seok-Hyung Bae, and Ganguk Hwang. (2015). "Color sommelier: Interactive color recommendation system based on community-generated color palettes." Adjunct Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology.
Download Paper

Is any room really ok? the effect of room size and furniture on presence, narrative engagement, and usability during a space-adaptive augmented reality game

Published in 2019 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 2019

One of the main challenges in creating narrative-driven Augmented Reality (AR) content for Head Mounted Displays (HMDs) is to make it equally accessible and enjoyable in different types of indoor environments. However, little has been studied regarding whether such content can indeed provide similar, if not the same, levels of experience across different spaces. To gain a better understanding of this issue, we examine the effect of room size and furniture on the player experience of Fragments, a space-adaptive, indoor AR crime-solving game created for the Microsoft HoloLens. The study compares factors of player experience in four types of spatial conditions: (1) Large Room - Fully Furnished; (2) Large Room - Scarcely Furnished; (3) Small Room - Fully Furnished; and (4) Small Room - Scarcely Furnished. Our results show that while large spaces facilitate a higher sense of presence and narrative engagement, fully-furnished rooms raise perceived workload. Based on our findings, we propose design suggestions that can support narrative-driven, space-adaptive indoor HMD-based AR content in delivering optimal experiences for various types of rooms.

Recommended citation: Jae-eun Shin, Hayun Kim, Callum Parker, Hyung-il Kim, Seo Young Oh, and Woontack Woo. (2019). "Is any room really ok? the effect of room size and furniture on presence, narrative engagement, and usability during a space-adaptive augmented reality game." 2019 IEEE International Symposium on Mixed and Augmented Reality (ISMAR).
Download Paper

Finger contact in gesture interaction improves time-domain input accuracy in HMD-based augmented reality

Published in Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems, 2020

This paper reports that the time-domain accuracy of bare-hand interactions in HMD-based Augmented Reality can be improved by using finger contact: touching a finger with another or tapping one’s own hand. The activation of input can be precisely defined by the moment of finger contact, allowing the user to perform the input precisely at the desired moment. Finger contact is better suited to the user’s mental model, and natural tactile feedback from the fingertip also benefits the user with the self-perception of the input. The experimental results revealed that using finger contact is the preferred method of input that increases the time-domain accuracy and enables the user to be aware of the moment the input is activated.

Recommended citation: Seo Young Oh, Boram Yoon, Hyung-il Kim, and Woontack Woo. (2020). "Finger contact in gesture interaction improves time-domain input accuracy in HMD-based augmented reality." Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems.
Download Paper

Evaluating remote virtual hands models on social presence in hand-based 3d remote collaboration

Published in 2020 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 2020

This study investigates the effects of virtual hand representation on the user experience, including social presence, during hand-based 3D remote collaboration. Although the appearance of a remote hand is a critical part of hand-based telepresence, it has rarely been studied in comparison to the self-embodiment of virtual hands in a 3D environment. Thus, we conducted a user study comparing three virtual hand models (Skeleton, Low Polygon, and Realistic) during a remote collaborative task based on American Sign Language (ASL) in both Augmented Reality (AR) and Virtual Reality (VR) environments. We found that the realistic model conveyed the strongest sense of being together and was perceived as the most human-like and trustworthy representation. The low-polygon model could also convey clear signs and a moderate level of social presence. Although the system was configured asymmetrically across AR and VR, little difference in perception was found except for the participants' mental load and message understanding. We then discuss the results and suggest design implications for future hand-based 3D telepresence systems.

Recommended citation: Boram Yoon, Hyung-il Kim, Seo Young Oh, and Woontack Woo. (2020). "Evaluating remote virtual hands models on social presence in hand-based 3d remote collaboration." 2020 IEEE International Symposium on Mixed and Augmented Reality (ISMAR).
Download Paper

Multi-scale mixed reality collaboration for digital twin

Published in 2021 IEEE international symposium on mixed and augmented reality adjunct (ISMAR-Adjunct), 2021

In this poster, we present a digital twin-based mixed reality system for remote collaboration with the size-scaling of the user and the space. The proposed system supports collaboration between an AR host user and a VR remote user by sharing a 3D digital twin of the AR host user. To enhance the coarse authoring of a shared digital twin environment, we provide a size scaling of the digital twin environment with the world-in-miniature view. Also, we enable scaling the size of the VR user’s avatar to enhance both coarse (size-up) and fine-grained (size-down) authoring of the digital twin environment. We describe the system setup, input methods, and interaction methods for scaling space and user.

Recommended citation: Hyung-il Kim, Taehei Kim, Eunhwa Song, Seo Young Oh, Dooyoung Kim, and Woontack Woo. (2021). "Multi-scale mixed reality collaboration for digital twin." 2021 IEEE international symposium on mixed and augmented reality adjunct (ISMAR-Adjunct).
Download Paper

Sense of Embodiment Inducement for People with Reduced Lower-body Mobility and Sensations with Partial-Visuomotor Stimulation

Published in ACM SIGGRAPH 2022 Emerging Technologies, 2022

To induce the Sense of Embodiment (SoE) in a virtual 3D avatar during a Virtual Reality (VR) walking scenario, VR interfaces have employed visuotactile or visuomotor approaches. However, people with reduced lower-body mobility and sensation (PRLMS), who are unable to feel or move their legs, would find this task extremely challenging. Here, we propose an upper-body motion tracking-based partial-visuomotor technique to induce SoE and positive feedback for PRLMS patients. We design partial-visuomotor stimulation consisting of two distinct inputs (button control & upper-body motion tracking) and outputs (wheelchair motion & gait motion). A preliminary user study was conducted to explore subjective preference with qualitative feedback. From the qualitative results, we observed positive responses to partial-visuomotor stimulation regarding SoE in the asynchronous VR experience for PRLMS.

Recommended citation: Hyuckjin Jang, Taehei Kim, Seo Young Oh, Jeongmi Lee, Sunghee Lee, and Sang Ho Yoon. (2022). "Sense of Embodiment Inducement for People with Reduced Lower-body Mobility and Sensations with Partial-Visuomotor Stimulation." ACM SIGGRAPH 2022 Emerging Technologies.
Download Paper

Art Rich: Place Your AR Artwork

Published in 2022 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), 2022

We propose Art Rich, an artwork augmentation service that helps users choose artworks to decorate their personal space without visiting art fairs or galleries. We recommend artworks that match the color scheme of their rooms. The service is implemented in a Unity environment, and the k-means algorithm extracts the primary color of the user's room. The extracted color is compared to the artworks' colors, and artworks with similar or complementary colors are recommended. In addition, by measuring the dimensions of the area where the artwork will be placed, users can determine the appropriate size of the artwork. The service allows users to place artworks in their space and even purchase them. This could meet the art-tech needs of the MZ generation, who want to quickly select artworks within their budget without visiting venues in person. From an artist's point of view, it functions as a platform by providing links to information about their artwork and exhibition history.
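
The color-extraction step described above can be sketched as a small k-means pass over the room's pixels, followed by a complementary-color lookup. The pixel values, the deterministic initialization, and the RGB-inversion notion of "complementary" are simplifications for illustration, not the service's actual implementation.

```python
def kmeans(pixels, k=2, iters=10):
    # deterministic init for the sketch: first k distinct pixels
    centers = []
    for p in pixels:
        if p not in centers:
            centers.append(p)
        if len(centers) == k:
            break
    for _ in range(iters):
        # assign each pixel to its nearest center (squared RGB distance)
        clusters = [[] for _ in centers]
        for p in pixels:
            i = min(range(len(centers)),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])))
            clusters[i].append(p)
        # move each center to its cluster's mean color
        centers = [tuple(sum(ch) / len(cl) for ch in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters

def dominant_color(pixels):
    # the dominant color is the centroid of the largest cluster
    centers, clusters = kmeans(pixels)
    return max(zip(centers, clusters), key=lambda t: len(t[1]))[0]

def complement(rgb):
    # naive complementary color: invert each RGB channel
    return tuple(255 - c for c in rgb)

room = [(200, 180, 160)] * 8 + [(10, 10, 10)] * 2  # mostly beige, some shadow
print(dominant_color(room))  # → (200.0, 180.0, 160.0)
```

Artworks whose palettes lie close to the dominant color (similar) or close to its complement (contrasting) would then be ranked highest.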

Recommended citation: Jieon Du, Sohyun Park, Joosun Yum, Zeynep Özge Özdemir, Dooyoung Kim, Seo Young Oh, and Sang Ho Yoon. (2022). "Art Rich: Place Your AR Artwork." 2022 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct).
Download Paper

Bring store in my room: Ar store authoring system for spatial experience in mobile shopping

Published in 2022 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), 2022

We propose an AR store authoring system that enables users to customize a physical space to promote a spatial experience in mobile AR shopping. Current AR shopping research attempts to enhance the shopping experience by presenting virtual items in the user's space. However, under the notion that stores and products are inseparable in creating a holistic shopping experience, we set out to create a shopping experience that encompasses both elements. Instead of simply augmenting a whole store, this study introduces a space-adaptive AR store that divides a virtual store into multiple sections and places them in the physical space according to labeled surfaces. This prototype mobile application intends to deliver a complete spatial shopping experience by allowing users to customize the stores in their space and ultimately bring the store experience into the AR shopping system. This study aims to introduce new scalability for AR technology applications in remote shopping.

Recommended citation: Seonji Kim, Hyuckjin Jang, Kyung Taek Oh, Seo Young Oh, Dooyoung Kim, Woontack Woo, Jeongmi Lee, Jaehong Ahn, and Sang Ho Yoon. (2022). "Bring store in my room: Ar store authoring system for spatial experience in mobile shopping." 2022 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct).
Download Paper

CARDS: Comprehensive AR Docent System

Published in 2022 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), 2022

We present CARDS, a comprehensive Augmented Reality (AR) docent system to promote informative and engaging art experiences for exhibition visitors. While utilizing AR to add aesthetic visual effects has been actively explored, facilitating visitor engagement in educational exhibitions through AR has been relatively unexplored. Thus, the proposed system facilitates visitor engagement in informative exhibitions by providing AR-specific interaction features which include context-based sequential AR pins, position-based AR pin configuration, and orientation-based AR visual aids. With CARDS, we aim to enrich the onsite art experience of visitors by presenting information on exhibits in a systematic way, as well as to enhance visitor engagement by taking the bodily movements of visitors as interaction inputs. Furthermore, we suggest a design guideline for future educational AR guide applications using proxemic interaction methods.

Recommended citation: Seung Un Lee, Jiyoung Yun, Dain Kim, Dooyoung Kim, Seo Young Oh, and Sang Ho Yoon. (2022). "CARDS: Comprehensive AR Docent System." 2022 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct).
Download Paper

AR-HMD Multitask Viewing System Concept with a Supporting Handheld Viewport for Multiple Spatially-Anchored Workspaces

Published in 2022 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), 2022

We propose a system concept for Augmented Reality Head-Mounted Display users that supports multitask viewing with multiple virtual workspaces anchored in real-world space. Although people frequently encounter the need to multitask, native AR HMDs and existing interfaces lack the means to visualize multiple sets of spatially-anchored information in parallel. The system separately visualizes two different sets of spatially-anchored information, one on the AR HMD and one on a smartphone, enabling side-by-side multitasking on the AR HMD without imposing a heavy load on users. We implemented a proof-of-concept prototype that allows side-by-side viewing of two different virtual workspaces. The proposed concept shows the promise of multitasking on AR HMDs, and future research will develop the system to be fully functional and verify it with user studies.

Recommended citation: Seo Young Oh, Boram Yoon, and Woontack Woo. (2022). "AR-HMD Multitask Viewing System Concept with a Supporting Handheld Viewport for Multiple Spatially-Anchored Workspaces." 2022 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct).
Download Paper

Guide Ring: Bidirectional Finger-worn Haptic Actuator for Rich Haptic Feedback

Published in Proceedings of the 28th ACM Symposium on Virtual Reality Software and Technology, 2022

We introduce a novel wearable haptic feedback device that magnifies the visual experience of virtual and augmented environments through bidirectional vibrotactile feedback driven by electromagnetic coils with permanent magnets. The device creates a guiding haptic effect through magnetic attraction and repulsion. Our proof-of-concept prototype enables haptic interaction by altering the position of the wearable structure and vibrating with varying intensities and waveform patterns. Example applications illustrate how the proposed system promotes guided and rich haptic feedback.

Recommended citation: Zofia Marciniak, Seo Young Oh, and Sang Ho Yoon. (2022). "Guide Ring: Bidirectional Finger-worn Haptic Actuator for Rich Haptic Feedback." Proceedings of the 28th ACM Symposium on Virtual Reality Software and Technology.
Download Paper

Memo: me, an AR Sticky Note With Priority-Based Color Transition and On-Time Reminder

Published in 2023 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), 2023

We propose Memo:me, an AR sticky note with priority-based color transition and on-time reminders on smartphones. For the priority-based color transition, users can choose among three different colors themselves, or the system changes the color automatically 10 minutes before the entered due time. For the on-time reminder, Memo:me provides a visual notification and a sound at the designated time. We enable users to create virtual notes on planes, or to carry everyday objects with virtual notes attached. We expect our system to help users manage their tasks in a time-appropriate manner.
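
The automatic transition rule described above can be sketched in a few lines: a note switches to an urgent color once the current time falls within 10 minutes of its entered due time. The color names and the rule's exact form are assumptions for illustration, not the application's actual logic.

```python
from datetime import datetime, timedelta

def note_color(due, now, chosen="yellow", urgent="red"):
    # switch to the urgent color within 10 minutes of the due time
    if now >= due - timedelta(minutes=10):
        return urgent
    return chosen

due = datetime(2023, 3, 1, 14, 0)
print(note_color(due, datetime(2023, 3, 1, 13, 40)))  # → yellow
print(note_color(due, datetime(2023, 3, 1, 13, 55)))  # → red
```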

Recommended citation: Eunhwa Song, Minju Baeck, Jihyeon Lee, Seo Young Oh, Dooyoung Kim, Woontack Woo, Jeongmi Lee, and Sang Ho Yoon. (2023). "Memo: me, an AR Sticky Note With Priority-Based Color Transition and On-Time Reminder." 2023 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW).
Download Paper

OmniSense: Exploring Novel Input Sensing and Interaction Techniques on Mobile Device with an Omni-Directional Camera

Published in Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 2023

An omni-directional (360°) camera captures the entire viewing sphere surrounding its optical center. Such cameras are increasingly used to create highly immersive content and viewing experiences. When such a camera is held by a user, the view includes the user's hand grip, fingers, body pose, and face, along with the surrounding environment, providing a complete understanding of the visual world and the context around it. This capability opens up numerous possibilities for rich mobile input sensing. In OmniSense, we explore the broad input design space for mobile devices with a built-in omni-directional camera and categorize it into three sensing pillars: i) near device, ii) around device, and iii) surrounding device. We also explore potential use cases and applications that leverage these sensing capabilities to address user needs, and we develop a working system that puts these concepts into action. We studied the system in a technical evaluation and a preliminary user study to gain initial feedback and insights. Collectively, these techniques illustrate how a single omni-purpose sensor on a mobile device affords many compelling ways to enable expressive input, while also affording a broad range of novel applications that improve the user experience during mobile interaction.

Recommended citation: Hui-Shyong Yeo, Erwin Wu, Daehwa Kim, Juyoung Lee, Hyung-il Kim, Seo Young Oh, Luna Takagi, Woontack Woo, Hideki Koike, and Aaron John Quigley. (2023). "OmniSense: Exploring Novel Input Sensing and Interaction Techniques on Mobile Device with an Omni-Directional Camera." Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems.
Download Paper

Effects of Avatar Transparency on Social Presence in Task-centric Mixed Reality Remote Collaboration

Published in IEEE Transactions on Visualization and Computer Graphics, 2023

Despite the importance of avatar representation on user experience for Mixed Reality (MR) remote collaboration involving various device environments and large amounts of task-related information, studies on how controlling visual parameters for avatars can benefit users in such situations have been scarce. Thus, we conducted a user study comparing the effects of three avatars with different transparency levels (Nontransparent, Semi-transparent, and Near-transparent) on social presence for users in Augmented Reality (AR) and Virtual Reality (VR) during task-centric MR remote collaboration. Results show that avatars with a strong visual presence are not required in situations where accomplishing the collaborative task is prioritized over social interaction. However, AR users preferred more vivid avatars than VR users. Based on our findings, we suggest guidelines on how different levels of avatar transparency should be applied based on the context of the task and device type for MR remote collaboration.

Recommended citation: Boram Yoon, Jae-eun Shin, Hyung-il Kim, Seo Young Oh, Dooyoung Kim, and Woontack Woo. (2023). "Effects of Avatar Transparency on Social Presence in Task-centric Mixed Reality Remote Collaboration." IEEE Transactions on Visualization and Computer Graphics.
Download Paper

Visualizing Hand Force with Wearable Muscle Sensing for Enhanced Mixed Reality Remote Collaboration

Published in IEEE Transactions on Visualization and Computer Graphics, 2023

In this paper, we present a prototype system for sharing a user’s hand force in mixed reality (MR) remote collaboration on physical tasks, where hand force is estimated using a wearable surface electromyography (sEMG) sensor. In a remote collaboration between a worker and an expert, hand activity plays a crucial role. However, the force exerted by the worker’s hand has not been extensively investigated. Our sEMG-based system reliably captures the worker’s hand force during physical tasks and conveys this information to the expert through hand force visualization, overlaid on the worker’s view or on the worker’s avatar. A user study was conducted to evaluate the impact of visualizing a worker’s hand force on collaboration, employing three distinct visualization methods across two view modes. Our findings demonstrate that sensing and sharing hand force in MR remote collaboration improves the expert’s awareness of the worker’s task, significantly enhances the expert’s perception of the collaborator’s hand force and the weight of the interacting object, and promotes a heightened sense of social presence for the expert. Based on the findings, we provide design implications for future mixed reality remote collaboration systems that incorporate hand force sensing and visualization.

Recommended citation: Hyung-il Kim, Boram Yoon, Seo Young Oh, and Woontack Woo. (2023). "Visualizing Hand Force with Wearable Muscle Sensing for Enhanced Mixed Reality Remote Collaboration." IEEE Transactions on Visualization and Computer Graphics.
Download Paper

Whirling Interface: Hand-based Motion Matching Selection for Small Target on XR Displays

Published in 2024 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 2024

We introduce “Whirling Interface,” a selection method for XR displays using bare-hand motion matching gestures as an input technique. We extend the motion matching input method by introducing different input states to provide visual feedback and guidance to the users. Using the wrist joint as the primary input modality, our technique reduces user fatigue and improves performance while selecting small and distant targets. In a study with 16 participants, we compared the Whirling Interface with a standard ray casting method using hand gestures. The results demonstrate that the Whirling Interface consistently achieves high success rates, especially for distant targets, averaging 95.58% with a completion time of 5.58 seconds. Notably, it requires a smaller camera sensing field of view of only 21.45° horizontally and 24.7° vertically. Participants reported lower workloads in distant conditions and expressed a higher preference for the Whirling Interface in general. These findings suggest that the Whirling Interface could be a useful alternative input method for XR displays with a small camera sensing FOV or when interacting with small targets.
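
The motion-matching principle behind the technique can be sketched as follows: each selectable target exhibits a distinct periodic motion, and the target whose motion correlates best with the tracked wrist trajectory is selected. The synthetic signals, sampling rate, and Pearson-correlation criterion below are illustrative; the paper's actual matching algorithm, input states, and thresholds are not reproduced here.

```python
import math

def correlation(a, b):
    """Pearson correlation between two equal-length signals."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def select_target(wrist, targets):
    # pick the target whose displayed motion best matches the wrist motion
    return max(targets, key=lambda name: correlation(wrist, targets[name]))

t = [i / 30 for i in range(60)]  # two seconds at 30 Hz
targets = {
    "slow": [math.sin(2 * math.pi * 0.5 * x) for x in t],
    "fast": [math.sin(2 * math.pi * 1.5 * x) for x in t],
}
# wrist traces the "fast" target's motion (scaled and offset)
wrist = [0.8 * math.sin(2 * math.pi * 1.5 * x) + 0.05 for x in t]
print(select_target(wrist, targets))  # → fast
```

Because only the motion's shape matters, not its absolute position, this style of selection tolerates coarse tracking, which is consistent with the small sensing field of view reported in the abstract.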

Recommended citation: Juyoung Lee, Seo Young Oh, Minju Baeck, Hui Shyong Yeo, Hyung-Il Kim, Thad Starner, and Woontack Woo. (2024). "Whirling Interface: Hand-based Motion Matching Selection for Small Target on XR Displays." 2024 IEEE International Symposium on Mixed and Augmented Reality (ISMAR).
Download Paper

AReading with Smartphones: Understanding the Trade-offs between Enhanced Legibility and Display Switching Costs in Hybrid AR Interfaces

Published in Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems, 2025

This research investigates the use of hybrid user interfaces to enhance text readability in augmented reality (AR) by combining optical see-through head-mounted displays with smartphones. While this integration can improve information legibility, it may also introduce display switching side effects. The extent to which these side effects hinder user experience and when the benefits outweigh drawbacks remain unclear. To address this gap, we conducted an empirical study (N=24) to evaluate how hybrid user interfaces affect AR reading tasks across different content distances, which induce varying levels of display switching. Our findings show that hybrid user interfaces offer significant readability benefits compared to using the HMD only, reducing mental and physical demands when reading text linked to content at closer distances. However, as the distance between displays increases, the compensatory behaviors users adopt to manage increased switching costs negate these benefits, making hybrid user interfaces less effective. Based on these findings, we suggest (1) using smartphones as supplementary displays for text in reading-intensive tasks, (2) implementing adaptive display positioning to minimize switching overhead in such scenarios, and (3) adjusting the smartphone’s role based on content distance for less intensive reading tasks. These insights provide guidance for optimizing smartphone integration in hybrid interfaces and enhancing AR systems for reading applications.

Recommended citation: Sunyoung Bang, Hyunjin Lee, Seo Young Oh, and Woontack Woo. (2025). "AReading with Smartphones: Understanding the Trade-offs between Enhanced Legibility and Display Switching Costs in Hybrid AR Interfaces." Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems.
Download Paper

ForceCtrl: Hand-Raycasting with User-Defined Pinch Force for Control-Display Gain Application

Published in IEEE Transactions on Visualization and Computer Graphics, 2025

We present ForceCtrl, a novel 3D hand raycasting technique that enhances pointing precision by adjusting control-display (CD) gain based on user-defined pinch force. We introduce a target-agnostic approach for refining raycasting precision, overcoming limitations in human motor abilities. User-defined pinch force, detected with surface electromyography (sEMG), enables users to easily activate or deactivate CD gain during interaction. We propose three CD gain strategies and compare them through target selection and placement tasks. Our system reduces selection errors, placement jitters, and user workload, especially for distant targets in high-difficulty tasks. These results highlight the effectiveness of applying CD gain to hand raycasting and demonstrate the potential of user-defined pinch force as a robust input modality for precise hand interaction in AR/VR.
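
The core CD-gain idea can be sketched as a force-dependent scaling of the ray's rotation: a firm pinch switches the mapping from 1:1 to an attenuated gain for precision. The threshold, gain value, and force units below are illustrative placeholders, not the paper's calibrated parameters or its sEMG processing pipeline.

```python
def apply_cd_gain(hand_delta_deg, pinch_force, threshold=0.5, precise_gain=0.3):
    """Map a hand rotation delta to a ray rotation delta.

    Below the force threshold, the ray follows the hand 1:1; above it,
    motion is attenuated (gain < 1) so small targets are easier to hit.
    """
    gain = precise_gain if pinch_force >= threshold else 1.0
    return hand_delta_deg * gain

print(apply_cd_gain(2.0, 0.2))  # relaxed pinch → 2.0 (1:1 mapping)
print(apply_cd_gain(2.0, 0.8))  # firm pinch → 0.6 (precision mode)
```

Because the force signal is target-agnostic, the same rule works for any target layout: the user decides when precision is needed, rather than the system inferring it from target geometry.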

Recommended citation: Seo Young Oh, Junghoon Seo, Juyoung Lee, Boram Yoon, Sang Ho Yoon, and Woontack Woo. (2025). "ForceCtrl: Precision Control of Hand-Raycasting with User-Adaptive Force Input." IEEE Transactions on Visualization and Computer Graphics.
Download Paper

talks

teaching
