Empathic Computing (EC, Online ISSN 3106-8065) is a peer-reviewed, open-access journal published quarterly by Science Exploration Press. Empathic computing is a rapidly emerging field concerned with creating computer systems that better enable people to understand one another and develop empathy. This includes the use of technologies such as Augmented Reality (AR) and Virtual Reality (VR) to let people see what another person is seeing in real time and to overlay communication cues on their field of view, as well as the use of physiological sensors, machine learning, and artificial intelligence to build systems that can recognize what people are feeling and convey emotional or cognitive states to a collaborator. The goal is to combine natural collaboration, implicit understanding, and experience capture/sharing in a way that transforms collaboration.
Articles
Virtual reality-based compassion meditation for clinical contexts: A co-design study of a loving-kindness meditation prototype
Aims: This study introduces and evaluates a virtual reality (VR) prototype designed for Loving-Kindness Meditation (LKM) to support mental health rehabilitation and relaxation in clinical contexts. The aims include the co-creation of a VR-based mindfulness experience with clinical experts and the evaluation of its usability, user experience, and short-term effects on relaxation, affect, and self-compassion.
Methods: Following a design thinking and co-creation approach, the VR-based LKM experience was developed iteratively with input from clinicians and computer scientists. The final prototype was implemented for the Meta Quest 3 and included five immersive scenes representing phases of the LKM and transition moments guided by a professional voice recording. Eleven participants (M = 36.5 years, SD = 14.6) experienced the 12-minute session. Pre- and post-session measures included relaxation, the Positive and Negative Affect Schedule, self-compassion, and usability, complemented by the Igroup Presence Questionnaire and a semi-structured qualitative interview.
Results: Participants reported significant decreases in negative affect (t(10) = -2.512, p = .0307, d = -1.037) and stress (t(10) = -3.318, p = .007, d = -1.328), as well as increases in relaxation (t(10) = 5.487, p < .0001, d = 2.471) and self-compassion (t(10) = 2.231, p = .0497, d = 0.283). Usability was rated as excellent (M = 92.5), and presence as good (M = 4.0, SD = 0.43). Qualitative feedback described the experience as calming, aesthetically pleasing, and easy to engage with, highlighting the falling leaves and pulsating orb as effective design elements.
Conclusion: The co-designed VR-LKM prototype was perceived as highly usable and beneficial for inducing relaxation and self-compassion, suggesting its potential as a supportive tool for clinical mindfulness interventions. The results indicate that immersive VR can effectively facilitate engagement and emotional regulation, providing a foundation for future clinical trials and broader implementation in therapeutic and wellness settings.
María Alejandra Quiros-Ramírez, ... Stephan Streuber
DOI: https://doi.org/10.70401/ec.2025.0014 - December 31, 2025
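The pre/post comparisons in the abstract above are paired t-tests with Cohen's d effect sizes (n = 11, df = 10). A minimal sketch of that kind of analysis, assuming hypothetical relaxation scores rather than the study's data:

```python
# Minimal sketch of a paired pre/post comparison with Cohen's d, as reported
# above (n = 11, df = 10). The scores below are hypothetical, not study data.
import numpy as np
from scipy import stats

pre = np.array([3.1, 2.8, 3.5, 2.9, 3.3, 3.0, 2.7, 3.4, 3.2, 2.6, 3.1])
post = np.array([4.2, 3.9, 4.5, 4.1, 4.4, 4.0, 3.8, 4.6, 4.3, 3.7, 4.1])

t_stat, p_value = stats.ttest_rel(post, pre)   # paired t-test, df = n - 1
diff = post - pre
cohens_d = diff.mean() / diff.std(ddof=1)      # d for paired samples

print(f"t(10) = {t_stat:.3f}, p = {p_value:.4f}, d = {cohens_d:.3f}")
```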
Conversing with AI agents in VR: An early investigation of alignment and modality
Aims: We present an early investigation of how people interact with human-like Artificial Intelligence (AI) agents in virtual reality when discussing ideologically sensitive topics. Specifically, we examine how users respond to AI agents that express either congruent or incongruent opinions on a controversial issue, and how alignment, modality, and agent behaviors shape perceived conversation quality, psychological comfort, and agent credibility.
Methods: We conducted a 2 (agent opinion: congruent vs. incongruent) × 2 (input modality: text vs. voice) between-subjects experiment with 36 participants who engaged in five-minute virtual reality (VR)-based conversations with a GPT-4-powered AI agent about U.S. gun laws. Participants completed pre- and post-study measures of opinion and emotional states, evaluated the agent, and reflected on the interaction. In addition, dialogue transcripts were analyzed using the Issue-based Information System (IBIS) framework to characterize argument structure and engagement patterns.
Results: Participants engaged willingly with the AI agent regardless of its stance, and qualitative responses suggest that the interactions were generally respectful and characterized by low emotional intensity. Quantitative results show that opinion alignment influenced perceived bias and conversational impact, but did not affect the agent’s competence or likability. While voice input yielded richer dialogue, it also heightened perceived bias. Qualitative findings further highlight participants’ sensitivity to the agent’s ideological stance and their preference for AI agents whose views aligned with their own.
Conclusion: Our study suggests that AI agents embodied in VR can support ideologically challenging conversations without inducing defensiveness or discomfort when designed for neutrality and emotional safety. These findings point to early design directions for conversational agents that scaffold reflection and perspective-taking in politically or ethically sensitive domains.
Frederik Rueb, Misha Sra
DOI: https://doi.org/10.70401/ec.2025.0013 - December 10, 2025
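For illustration, here is a hypothetical sketch of the balanced assignment implied by the 2 × 2 between-subjects design above (36 participants over four condition cells); the abstract does not describe the assignment procedure, so everything here is an assumption:

```python
# Hypothetical sketch of balanced assignment for the 2x2 between-subjects
# design above: 36 participants spread evenly over the four condition cells.
import itertools
import random

CONDITIONS = list(itertools.product(["congruent", "incongruent"],
                                    ["text", "voice"]))

def assign(participant_ids, seed=42):
    """Each of the 4 cells is used len(ids) / 4 times, in shuffled order."""
    rng = random.Random(seed)
    cells = CONDITIONS * (len(participant_ids) // len(CONDITIONS))
    rng.shuffle(cells)
    return dict(zip(participant_ids, cells))

assignments = assign([f"P{i:02d}" for i in range(1, 37)])
print(assignments["P01"])   # e.g., ('incongruent', 'voice')
```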
Social resources facilitate pulling actions toward novel social agents than pushing actions in virtual reality
Aims: This study examined the speed of approach-avoidance actions in virtual reality (VR) as an indicator of psychological “readiness” to interact with social avatars.
Methods: Given that response speed is a key psychological indicator of a user’s interest, motivation, and willingness to engage, we analyzed the response times of pulling and pushing inputs, typical actions expressing approach-avoidance tendencies, performed via bare-hand interaction in VR. We specifically investigated how response times varied with participants’ social resources, particularly the richness of their social lives as characterized by broader networks of friends, social groups, and frequent interactions.
Results: Participants with richer social lives exhibited faster pulling (vs. pushing) actions toward both same- and opposite-sex avatars. These effects remained significant regardless of participants’ gender, age, and prior VR experience. Notably, the observed effects were specific to social stimuli (i.e., avatars) and did not appear with non-social stimuli (i.e., a flag). Additionally, the effects did not occur with indirect interaction methods (i.e., a mouse wheel or a virtual joystick).
Conclusion: The findings suggest that social resources may facilitate approach-oriented bodily affordances in VR environments.
Jaejoon Jeong, ... Seungwon Kim
DOI: https://doi.org/10.70401/ec.2025.0012 - October 24, 2025
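One common way to operationalize the pull-vs-push comparison described above is an approach-bias score (push RT minus pull RT) related to a social-resources measure. The sketch below illustrates that idea with simulated data; it is not the authors' analysis, and the correlation approach and all values are assumptions:

```python
# Illustrative (simulated) analysis: an approach-bias score per participant
# (push RT minus pull RT; positive = faster pulling) correlated with a
# social-resources score. Values and method are assumptions, not the study's.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
social_resources = rng.normal(0, 1, 40)                        # z-scored, simulated
pull_rt = 650 - 30 * social_resources + rng.normal(0, 40, 40)  # ms, simulated
push_rt = 660 + rng.normal(0, 40, 40)                          # ms, simulated

approach_bias = push_rt - pull_rt   # > 0 means pulling was faster than pushing
r, p = stats.pearsonr(social_resources, approach_bias)
print(f"r = {r:.2f}, p = {p:.4f}")
```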
Synthetic speech and affective experience in virtual reality: A scoping review
Aims: This scoping review systematically maps the existing literature at the intersection of virtual reality (VR), synthetic speech, and affective computing. As immersive and voice-based technologies gain traction in education, mental health, and entertainment, it is critical to understand how synthetic speech shapes emotional experiences in VR environments. The review clarifies how these concepts are defined and how they contribute to empathic computing, and it identifies common applications, methodological approaches, research gaps, and ethical considerations.
Methods: A comprehensive search across multiple databases (e.g., ACM Digital Library, IEEE Xplore, ScienceDirect) was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) framework. Eligible studies investigated synthetic or computer-generated speech in VR or comparable immersive 3D settings and assessed emotional responses or related outcomes. Data were extracted on study characteristics, applied technologies, affect-related measures, and reported effects.
Results: The findings reveal a growing interdisciplinary body of research at the convergence of synthetic speech technologies, embodied virtual agents, and affective data processing in immersive environments. Interest in this area has accelerated with the development of advanced text-to-speech models, suggesting increasing relevance and expansion of this topic.
Conclusion: This review underscores a rapidly expanding yet fragmented research landscape. It highlights conceptual and methodological gaps, stressing the need for clearer definitions, standardized evaluation measures, and ethically informed design of synthetic speech in VR. The results provide a foundation for advancing research and applications in emotionally responsive virtual environments.
Mateusz Dubiel, Jean Botev
DOI: https://doi.org/10.70401/ec.2025.0011 - September 29, 2025
Empathic Extended Reality in the Era of Generative AI
Aims: Extended reality (XR) has been widely recognized for its ability to evoke empathetic responses by immersing users in virtual scenarios and promoting perspective-taking. However, to fully realize the empathic potential of XR, it is necessary to move beyond the concept of XR as a unidirectional “empathy machine.” This study proposes a bidirectional “empathy-enabled XR” framework, wherein XR systems not only elicit empathy but also demonstrate empathetic behaviors by sensing, interpreting, and adapting to users’ affective and cognitive states.
Methods: Two complementary frameworks are introduced. The first, the Empathic Large Language Model (EmLLM) framework, integrates multimodal user sensing (e.g., voice, facial expressions, physiological signals, and behavior) with large language models (LLMs) to enable bidirectional empathic communication. The second, the Matrix framework, leverages multimodal user and environmental inputs alongside multimodal LLMs to generate context-aware 3D objects within XR environments. This study presents the design and evaluation of two prototypes based on these frameworks: a physiology-driven EmLLM chatbot for stress management, and a Matrix-based mixed reality (MR) application that dynamically generates everyday 3D objects.
Results: The EmLLM-based chatbot achieved 85% accuracy in stress detection, with participants reporting strong therapeutic alliance scores. In the Matrix framework, the use of a pre-generated 3D model repository significantly reduced graphics processing unit utilization and improved system responsiveness, enabling real-time scene augmentation on resource-constrained XR devices.
Conclusion: By integrating EmLLM and Matrix, this research establishes a foundation for empathy-enabled XR systems that dynamically adapt to users’ needs, affective and cognitive states, and situational contexts through real-time 3D content generation. The findings demonstrate the potential of such systems in diverse applications, including mental health support and collaborative training, thereby opening new avenues for immersive, human-centered XR experiences.
Poorvesh Dongre, ... Denis Gračanin
DOI: https://doi.org/10.70401/ec.2025.0009 - June 29, 2025
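As a speculative sketch of the EmLLM idea above (physiological sensing conditioning an LLM's responses), the toy pipeline below infers a binary stress state from heart rate and injects it into the prompt. The detector, threshold, and call_llm() placeholder are all assumptions, not the paper's framework:

```python
# Speculative toy version of the EmLLM pipeline: a stress state inferred from
# physiology conditions the language model's reply. detect_stress(), the
# threshold, and call_llm() are placeholders, not the paper's framework.
import statistics

def detect_stress(heart_rate_bpm, threshold=90):
    """Toy detector: mean heart rate above a fixed threshold counts as stress."""
    return statistics.mean(heart_rate_bpm) > threshold

def call_llm(prompt):
    """Placeholder for any LLM backend."""
    return f"[LLM reply to: {prompt[:60]}...]"

def empathic_reply(user_text, heart_rate_bpm):
    state = "stressed" if detect_stress(heart_rate_bpm) else "calm"
    prompt = (f"The user currently appears {state}. "
              f"Respond briefly and empathically.\nUser: {user_text}")
    return call_llm(prompt)

print(empathic_reply("I have a deadline tomorrow.", [96, 101, 94]))
```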
A systematic review of using immersive technologies for empathic computing from 2000-2024
Aims: To give a comprehensive understanding of current research on immersive empathic computing, this paper presents a systematic review of the use of Virtual Reality (VR), Mixed Reality (MR), and Augmented Reality (AR) technologies in empathic computing, identifying key research trends, gaps, and future directions.
Methods: The PRISMA methodology was applied using keyword-based searches, publishing venue selection, and citation thresholds to identify 77 papers for detailed review. We analyzed these papers to categorize the key areas of empathic computing research, including emotion elicitation, emotion recognition, fostering empathy, and cross-disciplinary applications such as healthcare, learning, entertainment, and collaboration.
Results: Our findings reveal that VR has been the dominant platform for empathic computing research over the past two decades, while AR and MR remain underexplored. Dimensional emotion models have influenced this domain more than discrete emotion models for eliciting and recognizing emotions and fostering empathy. Additionally, we identify perception and cognition as pivotal factors influencing user engagement and emotional regulation.
Conclusion: Future research should expand the exploration of AR and MR for empathic computing, refine emotion models by integrating hybrid frameworks, and examine the relationship between lower body postures and emotions in immersive environments as an emerging research opportunity.
Umme Afifa Jinan, ... Sungchul Jung
DOI: https://doi.org/10.70401/ec.2025.0004 - February 25, 2025
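For illustration, the screening stages named in the abstract above (keyword match, venue selection, citation threshold) can be expressed as a simple filter. This is a hypothetical sketch with made-up records and cutoffs, not the review's actual pipeline:

```python
# Hypothetical sketch of the screening stages named above: keyword match,
# venue selection, and a citation threshold. Records and cutoffs are made up.
KEYWORDS = ("empathy", "empathic")
VENUES = {"CHI", "IEEE VR", "ISMAR"}

def passes(record, min_citations=10):
    return (any(k in record["title"].lower() for k in KEYWORDS)
            and record["venue"] in VENUES
            and record["citations"] >= min_citations)

records = [
    {"title": "Empathy in VR collaboration", "venue": "CHI", "citations": 41},
    {"title": "GPU scheduling at scale", "venue": "SOSP", "citations": 120},
]
included = [r for r in records if passes(r)]
print(len(included))   # 1
```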
Multimodal emotion recognition with disentangled representations: private-shared multimodal variational autoencoder and long short-term memory framework
Aims: This study proposes a multimodal emotion recognition framework that combines a private-shared disentangled multimodal variational autoencoder (DMMVAE) with a long short-term memory (LSTM) network, herein referred to as DMMVAE-LSTM. The primary objective is to improve the robustness and generalizability of emotion recognition by effectively leveraging the complementary features of electroencephalogram (EEG) signals and facial expression data.
Methods: We first trained a variational autoencoder using a ResNet-101 architecture on a large-scale facial dataset to develop a robust and generalizable facial feature extractor. This pre-trained model was then integrated into the DMMVAE framework, together with a convolutional neural network-based encoder and decoder for EEG data. The DMMVAE model was trained to disentangle shared and modality-specific latent representations across both EEG and facial data. Following this, the outputs of the encoders were concatenated and fed into an LSTM classifier for emotion recognition.
Results: Two sets of experiments were conducted. First, we trained and evaluated our model on the full dataset, comparing its performance with state-of-the-art methods and a baseline LSTM model employing a late fusion strategy to combine EEG and facial features. Second, to assess robustness, we tested the DMMVAE-LSTM framework under data-limited and modality dropout conditions by training with partial data and simulating missing modalities. The results demonstrate that the DMMVAE-LSTM framework consistently outperforms the baseline, especially in scenarios with limited data, indicating its capacity to learn structured and resilient latent representations.
Conclusion: Our findings underscore the benefits of multimodal generative modeling for emotion recognition, particularly in enhancing classification performance when training data are scarce or partially missing. By effectively learning both shared and private representations, the DMMVAE-LSTM framework facilitates more reliable emotion classification and presents a promising solution for real-world applications where acquiring large labeled datasets is challenging.
Behzad Mahaseni, Naimul Mefraz Khan
DOI: https://doi.org/10.70401/ec.2025.0010 - June 29, 2025
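A heavily simplified sketch of the private-shared idea in this abstract: each modality encoder emits a private (modality-specific) latent and a shared latent, and the concatenated latents over time feed an LSTM classifier. This omits the VAE decoders, reparameterization, and training losses, and all layer sizes are assumptions (the paper uses a ResNet-101 face encoder and a CNN EEG encoder):

```python
# Simplified sketch (not the paper's architecture) of private-shared encoders
# feeding an LSTM emotion classifier. Dimensions are illustrative only.
import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    def __init__(self, in_dim, private_dim=16, shared_dim=16):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU())
        self.private_head = nn.Linear(64, private_dim)  # modality-specific latent
        self.shared_head = nn.Linear(64, shared_dim)    # cross-modal latent

    def forward(self, x):
        h = self.backbone(x)
        return self.private_head(h), self.shared_head(h)

class DisentangledEmotionClassifier(nn.Module):
    def __init__(self, eeg_dim=128, face_dim=512, n_classes=4):
        super().__init__()
        self.eeg_enc = ModalityEncoder(eeg_dim)
        self.face_enc = ModalityEncoder(face_dim)
        self.lstm = nn.LSTM(input_size=64, hidden_size=32, batch_first=True)
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, eeg_seq, face_seq):
        # eeg_seq: (batch, time, eeg_dim); face_seq: (batch, time, face_dim)
        eeg_p, eeg_s = self.eeg_enc(eeg_seq)
        face_p, face_s = self.face_enc(face_seq)
        z = torch.cat([eeg_p, eeg_s, face_p, face_s], dim=-1)  # (batch, time, 64)
        out, _ = self.lstm(z)
        return self.classifier(out[:, -1])   # classify from the last timestep

model = DisentangledEmotionClassifier()
logits = model(torch.randn(2, 10, 128), torch.randn(2, 10, 512))
print(logits.shape)   # torch.Size([2, 4])
```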
From theory to practice: virtual children as platforms for research and training
This paper explores two virtual child simulators, BabyX and the VR Baby Training Tool, which provide immersive, interactive platforms for child-focused research and training. These technologies address key ethical and practical constraints, enabling the systematic study of caregiver-infant interactions and the development of professional relational skills with young children. We examine their design, features, and applications, as well as their future trajectories, highlighting their potential to advance research and improve training methodologies.
Samara Morrison, ... Mark Sagar
DOI: https://doi.org/10.70401/ec.2025.0005 - March 30, 2025
Building a taxonomy of evidence-based medical eXtended Reality (MXR) applications: towards identifying best practices for design innovation and global collaboration
Aims: This article aims to create a taxonomy of evidence-based medical eXtended Reality (MXR) applications, including Virtual Reality (VR), Augmented Reality (AR), Mixed Reality (MR), and 360-degree photo/video technologies, to identify best practices for designing and evaluating user experiences and interfaces (UX/UI). The goal is to assist researchers, developers, and practitioners in comparing and extrapolating the best solutions for high-precision MXR tools in medical and wellness contexts.
Methods: To develop the taxonomy, a review of medical and MXR publications was conducted, followed by three systematic mapping studies. Applications were categorized by end-users and purposes. The first mapping cross-referenced digital health technology classifications. The second validated the structure by incorporating over 350 evidence-based MXR apps, with input from twenty XR-HCI researchers. The third, ongoing mapping adds emerging apps, further refining the taxonomy.
Results: The taxonomy is presented in a dynamic database and 3D interactive graph, allowing international researchers to visualize and discuss developed evidence-based medical and wellness XR applications. This formalizes prior efforts to distinguish validated MXR solutions from speculative ones.
Conclusion: The taxonomy focuses solely on evidence-based applications, highlighting areas where VR, AR, and MR have been successfully implemented. It serves as a tool for stakeholders to analyze and understand best practices in MXR design, promoting the development of safe, effective, and user-friendly medical and wellness applications.
Jolanda G. Tromp, ... Chung V. Le
DOI: https://doi.org/10.70401/ec.2025.0002 - November 30, 2024
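As a hypothetical illustration of how entries in such a taxonomy database might be structured and queried (field names and the example record are assumptions, not the authors' schema):

```python
# Hypothetical schema for taxonomy entries and a simple query; field names
# and the example record are illustrative, not the authors' database design.
from dataclasses import dataclass, field

@dataclass
class MXRApplication:
    name: str
    technology: str              # "VR", "AR", "MR", or "360"
    end_user: str                # e.g., "patient", "surgeon", "student"
    purpose: str                 # e.g., "therapy", "training", "diagnosis"
    evidence: list = field(default_factory=list)   # validating studies

apps = [
    MXRApplication("Example pain-distraction app", "VR", "patient", "therapy",
                   ["placeholder study reference"]),
]

def by_technology(entries, tech):
    return [a for a in entries if a.technology == tech]

print([a.name for a in by_technology(apps, "VR")])
```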
Special Issues
Adaptive Empathic Interactive Media for Therapy
Submission Deadline: 20 Mar 2026
Published articles: 0

Co-creation for accessible computing through advances in emerging technologies
Submission Deadline: 21 Oct 2025
Published articles: 0


