Empathic Computing (EC, Online ISSN 3106-8065) is a peer-reviewed, open-access journal published quarterly by Science Exploration Press. Empathic computing is a rapidly emerging field concerned with creating computer systems that better enable people to understand one another and develop empathy. This includes using technologies such as Augmented Reality (AR) and Virtual Reality (VR) to let people see what another person is seeing in real time and to overlay communication cues on their field of view, as well as using physiological sensors, machine learning, and artificial intelligence to build systems that can recognize what people are feeling and convey emotional or cognitive state to a collaborator. The goal is to combine natural collaboration, implicit understanding, and experience capture/sharing in a way that transforms collaboration.
Articles
Improvement of a BCI-enabled Boccia ramp through a patient engagement strategy
Aims: The right to play is a basic human right. However, sport participation is often limited for children with complex motor disabilities such as quadriplegic cerebral palsy. Brain-computer interface (BCI) systems translate a user’s brain waves into instructions that can be used to control external devices. We have developed a BCI-enabled Boccia system that allows children who cannot move on their own to play independently. While Boccia is a Paralympic sport that offers inclusion for individuals with limited mobility, it does not fully accommodate those with severe motor disabilities and communication difficulties. The purpose of this study was to partner with persons with lived experience (PWLE) to improve the Boccia system with the expertise of patients and Boccia Paralympic athletes.
Methods: Following the Strategy for Patient Oriented Research (SPOR) framework, we engaged seven PWLE to participate in two virtual sessions and one in-person session to improve the BCI-Boccia system. After the first virtual session, the engineering team translated the feedback and comments from the PWLE into a list of tasks to develop the new features. The software development approach used an Agile development strategy, after which the second session consisted of a demonstration to gather additional feedback for refinement. After the hardware was developed, two PWLE attended in-person sessions to use the system and provide additional feedback. Engagement was evaluated using the Public and Patient Engagement Evaluation Tool (PPEET).
Results: Comments from the first virtual meeting focused on improving the software controller of the ramp, as well as the mechanical stability of the ramp. A new software controller was designed. The coarse-movement controller is used for initial control to give the player a broad range of selection options. To refine the precision of the shot, a fine-movement controller is provided. From these two visual control schemes, both ball elevation and ramp rotation can be chosen with a single selection. Similarly, a new base for the Boccia ramp was developed to allow for improved stability and faster assembly and disassembly procedures. The results of the PPEET confirmed that the PWLE perceived that their suggestions were taken into account and that the necessary resources were provided for their participation.
Conclusion: We demonstrate that a patient engagement strategy can inform and facilitate improvements to a BCI-enabled Boccia system. The resulting system supports the play of both recreational and competitive users by having a simplified controller and improved hardware. Involving diverse PWLE throughout the design cycle may improve accessibility and user adoption in disability sports. Inclusive participation likely helps ensure that improvement efforts directly address the needs of end users.
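As a toy illustration of the coarse-then-fine selection scheme described in the Results, the sketch below maps a single final pick to a (rotation, elevation) pair. The cell sizes, step constants, and function names are invented for illustration and are not taken from the paper.

```python
# Hypothetical sketch of a two-stage (coarse, then fine) selection scheme:
# one final pick fixes both ramp rotation and ball elevation.

COARSE_STEP = 30   # degrees of ramp rotation per coarse cell (assumed)
FINE_STEP = 5      # degrees per fine cell within a coarse region (assumed)

def coarse_select(cell: int) -> tuple[int, int]:
    """Return the rotation range (lo, hi) covered by one coarse cell."""
    lo = cell * COARSE_STEP
    return lo, lo + COARSE_STEP

def fine_select(coarse_cell: int, fine_cell: int, elevation: int) -> dict:
    """A single fine selection yields both rotation and ball elevation."""
    lo, _ = coarse_select(coarse_cell)
    return {"rotation": lo + fine_cell * FINE_STEP, "elevation": elevation}

shot = fine_select(coarse_cell=2, fine_cell=3, elevation=40)
print(shot)  # {'rotation': 75, 'elevation': 40}
```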
Daniel Comaduran Marquez, ... Adam Kirton
DOI: https://doi.org/10.70401/ec.2026.0020 - March 27, 2026
This article belongs to the Special Issue Co-creation for accessible computing through advances in emerging technologies
Emotion recognition in virtual reality: Toward adaptive and responsible technologies
Affective Computing aims to build systems that can sense, interpret, and influence human emotions. In parallel, virtual reality (VR) has matured into a powerful solution for immersion, presence, and interaction. When combined, these two technologies enable a new generation of emotionally adaptive systems that can recognize users’ inner states and react in real time. In this article, we explore the opportunities and challenges of emotion recognition and emotion elicitation in VR, focusing on the Virtual Emotion, Elicitation and Recognition Loop, a closed-loop framework where virtual environments adapt based on recognized and targeted emotions. We discuss applications in manufacturing, product design, and education, and highlight key ethical and privacy concerns, especially in light of recent regulations such as the European Commission’s proposed Regulation on Artificial Intelligence (EU AI Act). Finally, we outline open research directions toward responsible, privacy-aware, and scalable affective VR systems, suggesting that the convergence between VR and Affective Computing can become a crucial foundation for the next generation of human-centric interactive technologies.
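A closed-loop framework of the kind this abstract describes can be sketched as a simple recognize-compare-adapt cycle. Everything below is an illustrative assumption, not the authors' actual loop: a real system would drive a VR environment from multimodal emotion recognition rather than a counter.

```python
# Minimal sketch of a closed-loop adaptation cycle: recognize the user's
# state, compare it to the targeted emotion, and adjust the environment
# until they match. All names and values here are illustrative.

def adapt_environment(recognized: str, target: str, intensity: int) -> int:
    """Raise stimulus intensity while the target emotion is not yet reached."""
    return intensity + 1 if recognized != target else intensity

level = 0
for observed in ["neutral", "neutral", "calm"]:   # stand-in recognizer output
    level = adapt_environment(observed, target="calm", intensity=level)
print(level)  # 2
```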
Davide Andreoletti, ... Silvia Giordano
DOI: https://doi.org/10.70401/ec.2026.0019 - March 26, 2026
Reframing human–robot interaction through extended reality: Unlocking safer, smarter, and more empathic interactions with virtual robots and foundation models
This perspective reframes human–robot interaction (HRI) through extended reality (XR), arguing that virtual robots powered by large foundation models (FMs) can serve as cognitively grounded, empathic agents. Unlike physical robots, XR-native agents are unbound by hardware constraints and can be instantiated, adapted, and scaled on demand, while still affording embodiment and co-presence. We synthesize work across XR, HRI, and cognitive AI to show how such agents can support safety-critical scenarios, socially and cognitively empathic interaction across domains, and extend physical capabilities with XR and AI integration. We then discuss how multimodal large FMs (e.g., large language models, large vision models, and vision-language models) enable context-aware reasoning, affect-sensitive situations, and long-term adaptation, positioning virtual robots as cognitive and empathic mediators rather than mere simulation assets. At the same time, we highlight challenges and potential risks, including overtrust, cultural and representational bias, privacy concerns around biometric sensing, and data governance and transparency. The paper concludes by outlining a research agenda for human-centered, ethically grounded XR agents, emphasizing multi-layered evaluation frameworks, multi-user ecosystems, mixed virtual–physical embodiment, and societal and ethical design practices to envision XR-based virtual agents powered by FMs as reshaping future HRI into a more efficient and adaptive paradigm.
Yuchong Zhang, ... Danica Kragic
DOI: https://doi.org/10.70401/ec.2026.0018 - February 13, 2026
Virtual humans’ facial expressions, gestures, and voices impact user empathy
Aims: Users’ empathy towards artificial agents can be influenced by the agent’s expression of emotion. To date, most studies have used a Wizard of Oz design or manually programmed agents’ expressions. This study investigated whether autonomously animated emotional expression and neural voices could increase user empathy towards a Virtual Human.
Methods: 158 adults participated in an online experiment, where they watched videos of six emotional stories generated by ChatGPT. For each story, participants were randomly assigned to a virtual human (VH) called Carina, telling the story with either (1) autonomous expressive or non-expressive animation, and (2) a neural or standard text-to-speech (TTS) voice. After each story, participants rated how well the animation and voice matched the story, and their cognitive, affective, and subjective empathy towards Carina were evaluated. Qualitative data were collected on how well participants thought Carina expressed emotion.
Results: Autonomous emotional expression enhanced the alignment between the animation and voice, and improved subjective, cognitive, and affective empathy. The standard voice was rated as matching the fear and sad stories better, and the sad animation was rated as matching the sad story better, creating greater subjective and cognitive empathy for the sad story. Trait empathy and ratings of how well the animation and voice matched the story predicted subjective empathy. Qualitative analysis revealed that the animation conveyed emotions more effectively than the voice, and emotional expression was associated with increased empathy.
Conclusion: Autonomous emotional animation of VHs can improve empathy towards AI-generated stories. Further research is needed on voices that can dynamically change to express different emotions.
Elizabeth Broadbent, ... Mark Sagar
DOI: https://doi.org/10.70401/ec.2026.0017 - January 26, 2026
Making robots understandable: Augmented reality for enhancing situational awareness in human–robot co-located environments
Aims: Sharing a robot’s intentions is crucial for building human confidence and ensuring safety in robot co-located environments. Communicating planned motion or internal state of a robot in a clear and timely manner is challenging, especially when users are occupied with other tasks. Augmented reality (AR) offers an effective medium for delivering visual cues that convey such information. This study evaluates a smartphone-based AR interface designed to communicate a robot’s navigation intentions and enhance users’ situational awareness (SA) in shared human–robot settings.
Methods: We developed a mobile AR application using Unity3D and Google ARCore to display goal locations, planned trajectories, and the real-time motion of a Robot Operating System (ROS)-enabled mobile robot. The system provides three visualization modes: Goal Visible, Path Visible, and Robot Visible, with spatial alignment achieved via a reference image marker. Fifty-eight participants, with varying levels of experience in robotics and AR, took part in an online user study. SA was assessed using a query-based evaluation adapted from the Situational Awareness Global Assessment Technique (SAGAT), examining perception, comprehension, and projection levels. A post-assessment survey captured user opinions on usability and perceived benefits.
Results: Participants achieved an average SAGAT score of 86.5%, indicating improved awareness of the robot’s mission, spatial positioning, and safe zones. AR visualization was particularly effective for identifying obstacles and predicting unobstructed areas. In the post-assessment survey, 96.6% of participants agreed that the interface enhanced their confidence and understanding of robot motion intentions.
Conclusion: A mobile AR interface can significantly enhance SA in shared human–robot environments by making robot intentions more transparent and comprehensible. Future work will include in-situ evaluations with physical robots, integration of richer robot-state information such as velocity and sensor data, and the exploration of additional visualization strategies that further strengthen safety, predictability, and trust in human–robot collaborative environments.
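Query-based SAGAT scoring of the kind used in the Methods above boils down to the percentage of correctly answered freeze-probe queries across the three SA levels. The sketch below is a minimal illustration under that assumption; the function name and example responses are invented, not taken from the study.

```python
# Hypothetical sketch of query-based SAGAT scoring: the overall score is
# the percentage of correctly answered queries across the three SA levels.

def sagat_score(answers: dict[str, list[bool]]) -> float:
    """answers maps SA level -> per-query correctness flags."""
    flags = [ok for level in answers.values() for ok in level]
    return 100.0 * sum(flags) / len(flags)

responses = {
    "perception":    [True, True, False],
    "comprehension": [True, True],
    "projection":    [True, False],
}
print(round(sagat_score(responses), 1))  # 71.4
```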
Sonia Chacko, Vikram Kapila
DOI: https://doi.org/10.70401/ec.2026.0016 - January 26, 2026
A systematic review of using immersive technologies for empathic computing from 2000-2024
Aims: To give a comprehensive understanding of current research on immersive empathic computing, this paper aims to present a systematic review of the use of Virtual Reality (VR), Mixed Reality (MR), and Augmented Reality (AR) technologies in empathic computing, to identify key research trends, gaps, and future directions.
Methods: The PRISMA methodology was applied using keyword-based searches, publishing venue selection, and citation thresholds to identify 77 papers for detailed review. We analyze these papers to categorize the key areas of empathic computing research, including emotion elicitation, emotion recognition, fostering empathy, and cross-disciplinary applications such as healthcare, learning, entertainment and collaboration.
Results: Our findings reveal that VR has been the dominant platform for empathic computing research over the past two decades, while AR and MR remain underexplored. Dimensional emotional models have influenced this domain more than discrete emotional models for eliciting and recognizing emotions and for fostering empathy. Additionally, we identify perception and cognition as pivotal factors influencing user engagement and emotional regulation.
Conclusion: Future research should expand the exploration of AR and MR for empathic computing, refine emotion models by integrating hybrid frameworks, and examine the relationship between lower body postures and emotions in immersive environments as an emerging research opportunity.
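The screening step in the Methods (keyword-based searches, venue selection, and citation thresholds) can be sketched as a simple composite filter. The keyword set, venue list, and threshold below are placeholders for illustration, not the review's actual criteria.

```python
# Hypothetical sketch of the screening filters described above: keep papers
# that match a keyword, appear in a selected venue, and meet a citation floor.

KEYWORDS = {"empathy", "empathic"}        # assumed search terms
VENUES = {"CHI", "ISMAR", "IEEE VR"}      # assumed venue allow-list
MIN_CITATIONS = 10                        # assumed citation threshold

def passes_screen(paper: dict) -> bool:
    title_words = set(paper["title"].lower().split())
    return (bool(KEYWORDS & title_words)
            and paper["venue"] in VENUES
            and paper["citations"] >= MIN_CITATIONS)

corpus = [
    {"title": "Fostering empathy in VR", "venue": "CHI", "citations": 42},
    {"title": "Empathic agents", "venue": "arXiv", "citations": 90},
]
print([p["title"] for p in corpus if passes_screen(p)])  # ['Fostering empathy in VR']
```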
Umme Afifa Jinan, ... Sungchul Jung
DOI: https://doi.org/10.70401/ec.2025.0004 - February 25, 2025
Empathic Extended Reality in the Era of Generative AI
Aims: Extended reality (XR) has been widely recognized for its ability to evoke empathetic responses by immersing users in virtual scenarios and promoting perspective-taking. However, to fully realize the empathic potential of XR, it is necessary to move beyond the concept of XR as a unidirectional “empathy machine.” This study proposes a bidirectional “empathy-enabled XR” framework, wherein XR systems not only elicit empathy but also demonstrate empathetic behaviors by sensing, interpreting, and adapting to users’ affective and cognitive states.
Methods: Two complementary frameworks are introduced. The first, the Empathic Large Language Model (EmLLM) framework, integrates multimodal user sensing (e.g., voice, facial expressions, physiological signals, and behavior) with large language models (LLMs) to enable bidirectional empathic communication. The second, the Matrix framework, leverages multimodal user and environmental inputs alongside multimodal LLMs to generate context-aware 3D objects within XR environments. This study presents the design and evaluation of two prototypes based on these frameworks: a physiology-driven EmLLM chatbot for stress management, and a Matrix-based mixed reality (MR) application that dynamically generates everyday 3D objects.
Results: The EmLLM-based chatbot achieved 85% accuracy in stress detection, with participants reporting strong therapeutic alliance scores. In the Matrix framework, the use of a pre-generated 3D model repository significantly reduced graphics processing unit utilization and improved system responsiveness, enabling real-time scene augmentation on resource-constrained XR devices.
Conclusion: By integrating EmLLM and Matrix, this research establishes a foundation for empathy-enabled XR systems that dynamically adapt to users’ needs, affective and cognitive states, and situational contexts through real-time 3D content generation. The findings demonstrate the potential of such systems in diverse applications, including mental health support and collaborative training, thereby opening new avenues for immersive, human-centered XR experiences.
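The Results attribute the responsiveness gain to a pre-generated 3D model repository. The cache-or-generate pattern behind that idea can be sketched as below; the asset names, file format, and function are invented stand-ins, and real generation would be an expensive model call rather than a string format.

```python
# Hypothetical sketch of the pre-generated repository idea: serve cached 3D
# assets when available and fall back to (slow) generation only on a miss.

repository: dict[str, str] = {"chair": "chair.glb", "lamp": "lamp.glb"}

def fetch_asset(name: str) -> tuple[str, bool]:
    """Return (asset, cache_hit); generation is stubbed out here."""
    if name in repository:
        return repository[name], True
    asset = f"{name}.glb"          # stand-in for an expensive generation call
    repository[name] = asset       # cache the result for next time
    return asset, False

print(fetch_asset("chair"))  # ('chair.glb', True)
print(fetch_asset("table"))  # ('table.glb', False)
```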
Poorvesh Dongre, ... Denis Gračanin
DOI: https://doi.org/10.70401/ec.2025.0009 - June 29, 2025
Multimodal emotion recognition with disentangled representations: private-shared multimodal variational autoencoder and long short-term memory framework
Aims: This study proposes a multimodal emotion recognition framework that combines a private-shared disentangled multimodal variational autoencoder (DMMVAE) with a long short-term memory (LSTM) network, herein referred to as DMMVAE-LSTM. The primary objective is to improve the robustness and generalizability of emotion recognition by effectively leveraging the complementary features of electroencephalogram (EEG) signals and facial expression data.
Methods: We first trained a variational autoencoder using a ResNet-101 architecture on a large-scale facial dataset to develop a robust and generalizable facial feature extractor. This pre-trained model was then integrated into the DMMVAE framework, together with a convolutional neural network-based encoder and decoder for EEG data. The DMMVAE model was trained to disentangle shared and modality-specific latent representations across both EEG and facial data. Following this, the outputs of the encoders were concatenated and fed into an LSTM classifier for emotion recognition.
Results: Two sets of experiments were conducted. First, we trained and evaluated our model on the full dataset, comparing its performance with state-of-the-art methods and a baseline LSTM model employing a late fusion strategy to combine EEG and facial features. Second, to assess robustness, we tested the DMMVAE-LSTM framework under data-limited and modality dropout conditions by training with partial data and simulating missing modalities. The results demonstrate that the DMMVAE-LSTM framework consistently outperforms the baseline, especially in scenarios with limited data, indicating its capacity to learn structured and resilient latent representations.
Conclusion: Our findings underscore the benefits of multimodal generative modeling for emotion recognition, particularly in enhancing classification performance when training data are scarce or partially missing. By effectively learning both shared and private representations, the DMMVAE-LSTM framework facilitates more reliable emotion classification and presents a promising solution for real-world applications where acquiring large labeled datasets is challenging.
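The private-shared fusion step described in the Methods can be illustrated schematically: each modality's encoder yields a private (modality-specific) code plus a shared code, and the concatenation feeds the classifier. The stand-in encoder below simply splits a feature vector in half; real encoders are neural networks, and every name here is an assumption for illustration.

```python
# Hypothetical sketch of private-shared fusion: per-modality private codes
# plus shared codes are concatenated before classification.

def encode(signal: list[float], dim: int = 2) -> tuple[list[float], list[float]]:
    """Stand-in encoder: split a feature vector into (private, shared) halves."""
    return signal[:dim], signal[dim:dim * 2]

def fuse(eeg: list[float], face: list[float]) -> list[float]:
    eeg_priv, eeg_shared = encode(eeg)
    face_priv, face_shared = encode(face)
    # Shared parts should agree across modalities; private parts differ.
    return eeg_priv + face_priv + eeg_shared + face_shared

print(fuse([0.1, 0.2, 0.9, 0.8], [0.5, 0.6, 0.9, 0.8]))
# [0.1, 0.2, 0.5, 0.6, 0.9, 0.8, 0.9, 0.8]
```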
Behzad Mahaseni, Naimul Mefraz Khan
DOI: https://doi.org/10.70401/ec.2025.0010 - June 29, 2025
From theory to practice: virtual children as platforms for research and training
This paper explores two virtual child simulators, BabyX and the VR Baby Training Tool, which provide immersive, interactive platforms for child-focused research and training. These technologies address key ethical and practical constraints, enabling the systematic study of caregiver-infant interactions and the development of professional relational skills with young children. We examine their design, features, and applications, as well as their future trajectories, highlighting their potential to advance research and improve training methodologies.
Samara Morrison, ... Mark Sagar
DOI: https://doi.org/10.70401/ec.2025.0005 - March 30, 2025
Creating safe environments for children: prevention of trauma in the Extended Verse
In the evolving landscape of digital childhood, ensuring safe environments within the Extended Verse (XV) is essential for preventing trauma and fostering positive experiences. This paper proposes a conceptual framework for the integration of advanced emotion recognition systems and physiological sensors with virtual and augmented reality technologies to create secure spaces for children. The author presents a theoretical architecture and data flow design that could enable future systems to perform real-time monitoring and interpretation of emotional and physiological responses. This design architecture lays the groundwork for future research and development of adaptive, empathetic interfaces capable of responding to distress signals and mitigating trauma. The paper addresses current challenges, proposes innovative solutions, and outlines an evaluation framework to support an empathic, secure and nurturing virtual environment for young users.
Nina Jane Patel
DOI: https://doi.org/10.70401/ec.2025.0003 - February 17, 2025
Social resources facilitate pulling actions toward novel social agents more than pushing actions in virtual reality
Aims: This study examined the speed of approach-avoidance actions in virtual reality (VR) as an indicator of psychological “readiness” to interact with social avatars.
Methods: Given that fast response is a key psychological factor reflecting a user’s interest, motivation, and willingness to engage, we analyzed the response time of pulling or pushing inputs, typical actions showing approach-avoidance tendency, via bare-hand interaction in VR. We specifically investigated how the response time varied according to participants’ social resources, particularly the richness of their social lives characterized by broader networks of friends, social groups, and frequent interactions.
Results: Results showed that participants with richer social lives exhibited faster pulling (vs. pushing) actions toward both same- and opposite-sex avatars. These effects remained significant regardless of participants’ gender, age, and prior VR experience. Notably, the observed effects were specific to social stimuli (i.e., avatars) and were not observed with non-social stimuli (i.e., a flag). Additionally, the effects did not occur with other indirect interactions (i.e., a mouse wheel or a virtual joystick).
Conclusion: The findings suggest that social resources may facilitate approach-oriented bodily affordances in VR environments.
Jaejoon Jeong, ... Seungwon Kim
DOI: https://doi.org/10.70401/ec.2025.0012 - October 24, 2025
Frontier Forums
Special Issues
Adaptive Empathic Interactive Media for Therapy
Submission Deadline: 20 Mar 2026
Published articles: 0
Co-creation for accessible computing through advances in emerging technologies
Submission Deadline: 21 Oct 2025
Published articles: 1


