Academic literature on the topic 'AI-Enhanced Mixed Reality'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'AI-Enhanced Mixed Reality.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "AI-Enhanced Mixed Reality"

1

Kala, N., and Premanand Narasimhan. "Cyber Attacks in Extended Reality (XR) Using AI: Malicious Programs and Mitigation Strategies." International Journal of Scientific Research in Computer Science, Engineering and Information Technology 10, no. 6 (2024): 1886–89. https://doi.org/10.32628/cseit241061237.

Abstract:
The convergence of artificial intelligence (AI) and extended reality (XR) technologies, such as virtual reality (VR), augmented reality (AR), and mixed reality (MR), has introduced new dimensions of immersive experiences. However, the combination of AI and XR has also created fertile ground for cybercriminals to exploit these advanced platforms. This paper explores the types of AI-driven malicious programs targeting XR systems, including AI-enhanced phishing, malware, deepfake avatars, ransomware, and behavioral manipulation. Additionally, it outlines strategies to mitigate these AI-driven cyberattacks, such as advanced threat detection, secure authentication mechanisms, regular security audits, user education, and AI-enhanced security solutions. As XR becomes increasingly integrated into daily life, understanding and addressing AI-driven cyber threats is crucial to ensure secure virtual environments.
2

Researcher. "AI-Enhanced Mixed Reality in Education: A Systematic Analysis of Immersive Learning Technologies and Student Development Outcomes." International Journal of Research in Computer Applications and Information Technology (IJRCAIT) 7, no. 2 (2024): 1252–64. https://doi.org/10.5281/zenodo.14169846.

Abstract:
The integration of Artificial Intelligence (AI) with Mixed Reality (MR) technologies represents a significant advancement in educational technology, offering unprecedented opportunities for personalized and immersive learning experiences. This article examines the implementation and effectiveness of AI-enhanced MR systems across diverse educational settings (n=245) through a mixed-methods approach, incorporating quantitative analysis of student performance metrics and qualitative assessment of learner engagement. The findings demonstrate a statistically significant improvement in student learning outcomes (p < 0.001) when utilizing AI-enhanced MR, with particular effectiveness in STEM subjects showing a 27% increase in conceptual understanding compared to traditional methods. The article identifies key success factors including adaptive learning algorithms, real-time feedback mechanisms, and immersive visualization capabilities, while also addressing critical challenges such as infrastructure requirements, accessibility concerns, and pedagogical integration. Analysis of longitudinal data collected over two academic years reveals sustained improvements in student engagement (Cohen's d = 0.82) and knowledge retention (Cohen's d = 0.76). Furthermore, the article presents a novel framework for implementing AI-enhanced MR in educational institutions, incorporating considerations for scalability, data privacy, and teacher professional development. These findings have significant implications for educational policy, pedagogical practice, and the future development of immersive learning technologies in increasingly digitized educational environments.
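The abstract above reports its engagement and retention gains as Cohen's d values. For readers unfamiliar with that effect-size metric, the standard pooled-standard-deviation form can be sketched in Python; this is an illustrative reference implementation, not the study's own analysis code:

```python
import math

def cohens_d(group1, group2):
    """Cohen's d with pooled standard deviation: (mean1 - mean2) / s_pooled."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    # Sample variances (Bessel-corrected).
    var1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    var2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    # Pooled standard deviation across both groups.
    pooled_sd = math.sqrt(((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd
```

By common convention, d ≈ 0.8 (as reported for engagement) is considered a large effect.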
3

Vinothkumar, Kolluru. "Surgical Data Science and Associated Techniques Facilitate the Development of Contemporary Equipment like Apple's Vision Pro." International Journal of Preventive Medicine and Health (IJPMH) 5, no. 1 (2024): 1–9. https://doi.org/10.54105/ijpmh.D3648.0501124.

Abstract:
Artificial Intelligence (AI) has revolutionized modern surgery by enhancing every stage of patient care, from preoperative planning to postoperative monitoring. This paper explores the impact of AI in conjunction with other technologies in surgical procedures, emphasizing their empirical basis and integration into clinical practice. AI's role in facilitating personalized treatment planning through a comprehensive analysis of patient data and imaging studies, utilizing techniques like natural language processing (NLP) to extract critical insights, reassures us of its positive impact on patient care. Real-time decision support systems powered by AI improve surgical precision, enabling surgeons to navigate complex procedures with enhanced accuracy and efficiency. Furthermore, AI-driven surgical robotics exemplify the precision achievable with these technologies, enabling minimally invasive procedures that minimize patient trauma and expedite recovery. Integrating AI with computer vision further enhances surgical capabilities by allowing machines to interpret visual data autonomously, like human perception. Convolutional Neural Networks (CNNs) are pivotal in image recognition and analysis, supporting tasks from anatomical landmark identification to surgical planning. Augmented Reality (AR), when combined with AI, enriches surgical practice by overlaying digital information onto real-world views, aiding in intraoperative guidance and educational training. Devices like Apple's Vision Pro (AVP) headset showcase the potential of mixed reality technologies in enhancing surgical precision. AVP's integration of spatial computing and AI algorithms allows for real-time data analysis and decision support, transforming surgical education and procedural outcomes.
Despite the transformative potential, challenges, including ethical considerations, data privacy, and regulatory frameworks, must be addressed to ensure the responsible deployment of AI in surgical settings. These challenges include mitigating biases in AI algorithms and ensuring equitable access to advanced technologies across diverse surgical specialties. The dynamic nature of AI in surgery necessitates continued research and development to refine AI applications, optimize surgical workflows, and improve patient outcomes globally. In combination with contemporary technologies, AI represents a paradigm shift in surgical practice, offering unprecedented opportunities to enhance patient care through personalized, precise, and efficient interventions. AI's ongoing evolution and integration in surgery promise to reshape healthcare's future, advancing clinical practice and medical education toward safer, more effective, and inclusive healthcare delivery systems.
4

Chen, Si, Haocong Cheng, Suzy Su, et al. "Customizing Generated Signs and Voices of AI Avatars: Deaf-Centric Mixed-Reality Design for Deaf-Hearing Communication." Proceedings of the ACM on Human-Computer Interaction 9, no. 2 (2025): 1–31. https://doi.org/10.1145/3710953.

Abstract:
This study investigates innovative interaction designs for communication and collaborative learning between learners of mixed hearing and signing abilities, leveraging advancements in mixed reality technologies like Apple Vision Pro and generative AI for animated avatars. Adopting a participatory design approach, we engaged 15 d/Deaf and hard of hearing (DHH) students to brainstorm ideas for an AI avatar with interpreting ability (sign language to English and English to sign language) that would facilitate their face-to-face communication with hearing peers. Participants envisioned the AI avatars to address some issues with human interpreters, such as lack of availability, and provide affordable options to expensive personalized interpreting services. Our findings indicate a range of preferences for integrating the AI avatars with actual human figures of both DHH and hearing communication partners. The participants highlighted the importance of having control over customizing the AI avatar, such as AI-generated signs, voices, facial expressions, and their synchronization for enhanced emotional display in communication. Based on our findings, we propose a suite of design recommendations that balance respecting sign language norms with adherence to hearing social norms. Our study offers insights into improving the authenticity of generative AI in scenarios involving specific and sometimes unfamiliar social norms.
5

Shi, Ruonan. "Integrating computer vision and AI for interactive augmented reality experiences in new media." Applied and Computational Engineering 102, no. 1 (2024): 49–54. http://dx.doi.org/10.54254/2755-2721/102/20241002.

Abstract:
Augmented reality (AR) is a groundbreaking technology that fully immerses the user in a mixed 'reality' where the real and virtual coexist in a unique manner. Integrating more artificial intelligence (AI) and computer vision into AR devices can greatly improve user input and provide a wholly new interface. Through AI technology, such as gesture recognition, object tracking, and face recognition, AR systems can offer more intuitive and engaging interactions. An AR system featuring such improved AI technologies can process real-time data while having the contextual awareness to respond to users' input and the surrounding environment on the fly. It can also provide narrative integration, character development, and dynamic environments to users, thereby enabling them to have a more personalised and meaningful experience. This paper examines the evolution of new media by discussing the possibilities of using AI and computer vision in AR devices to create personalised experiences, all the while critically looking at technical challenges and the opportunities they present to the field of future research and development. It also looks at case studies across different sectors, such as education, training, tourism, gaming, retail, and aviation, to justify the potential of future development of AI-enhanced AR.
6

Hancko, Dusan, Andrea Majlingova, and Danica Kačíková. "Integrating Virtual Reality, Augmented Reality, Mixed Reality, Extended Reality, and Simulation-Based Systems into Fire and Rescue Service Training: Current Practices and Future Directions." Fire 8, no. 6 (2025): 228. https://doi.org/10.3390/fire8060228.

Abstract:
The growing complexity and risk profile of fire and emergency incidents necessitate advanced training methodologies that go beyond traditional approaches. Live-fire drills and classroom-based instruction, while foundational, often fall short in providing safe, repeatable, and scalable training environments that accurately reflect the dynamic nature of real-world emergencies. Recent advancements in immersive technologies, including virtual reality (VR), augmented reality (AR), mixed reality (MR), extended reality (XR), and simulation-based systems, offer promising alternatives to address these challenges. This review provides a comprehensive overview of the integration of VR, AR, MR, XR, and simulation technologies into firefighter and incident commander training. It examines current practices across fire services and emergency response agencies, highlighting the capabilities of immersive and interactive platforms to enhance operational readiness, decision-making, situational awareness, and team coordination. This paper analyzes the benefits of these technologies, such as increased safety, cost-efficiency, data-driven performance assessment, and personalized learning pathways, while also identifying persistent challenges, including technological limitations, realism gaps, and cultural barriers to adoption. Emerging trends, such as AI-enhanced scenario generation, biometric feedback integration, and cloud-based collaborative environments, are discussed as future directions that may further revolutionize fire service education. This review aims to support researchers, training developers, and emergency service stakeholders in understanding the evolving landscape of digital training solutions, with the goal of fostering more resilient, adaptive, and effective emergency response systems.
7

Xu, Yang, Yingchia Liu, Haosen Xu, and Hao Tan. "AI-Driven UX/UI Design: Empirical Research and Applications in FinTech." International Journal of Innovative Research in Computer Science and Technology 12, no. 4 (2024): 99–109. http://dx.doi.org/10.55524/ijircst.2024.12.4.16.

Abstract:
This study explores the transformative impact of AI-driven UX/UI design in the FinTech sector, examining current practices, user preferences, and emerging trends. Through a mixed-methods approach, including surveys, interviews, and case studies, the research reveals significant adoption of AI technologies in FinTech UX/UI design, with 78% of surveyed companies implementing such solutions. Personalization emerges as a dominant trend, with 76% of FinTech apps utilizing AI for tailored user interfaces. The study demonstrates a strong correlation between AI-enhanced features and improved user engagement, with apps incorporating advanced AI features showing a 41% increase in daily active users. Ethical considerations, including data privacy and algorithmic bias, are addressed as critical challenges in AI implementation. The research contributes a conceptual framework for AI-driven UX/UI design in FinTech, synthesizing findings from diverse data sources. Future trends, including emotional AI and augmented reality integration, are explored. The study concludes that while AI-driven UX/UI design offers significant potential for enhancing user experiences in FinTech, balancing innovation with ethical considerations is crucial for responsible implementation and user trust.
8

Imran, Muhammad, Norah Almusharraf, Saim Ahmed, and Muhammad Ismail Mansoor. "Personalization of E-Learning: Future Trends, Opportunities, and Challenges." International Journal of Interactive Mobile Technologies (iJIM) 18, no. 10 (2024): 4–18. http://dx.doi.org/10.3991/ijim.v18i10.47053.

Abstract:
Electronic learning (e-learning) has become one of the most influential trends in providing enhanced and easily accessible learning to users. However, in contrast to traditional e-learning systems that offer uniform content to all learners, personalized learning systems tailor educational materials and assessments to individual learners, ensuring a customized learning experience. Personalizing e-learning through artificial intelligence (AI) is beneficial in today’s diverse environment, where beginners can tailor the learning experience to suit their preferences and requirements. This paper focuses on the personalization of e-learning, exploring future trends, opportunities, and challenges through an in-depth survey of recent literature on the personalization of e-learning technologies such as extended reality (XR), virtual reality (VR), augmented reality (AR), and mixed reality (MR). Findings identify and propose that such tools should be included in curricula that support e-learning through personalized, adaptive, and delivery models.
9

Aguayo, Claudio. "Mixed Reality (XR) research and practice." Pacific Journal of Technology Enhanced Learning 3, no. 1 (2021): 41–42. http://dx.doi.org/10.24135/pjtel.v3i1.104.

Abstract:
Up until recently, learning affordances (possibilities) offered by immersive digital technology in education, such as augmented reality (AR) and virtual reality (VR), were addressed and considered in isolation in educational practice. In the past five to ten years this has shifted towards a focus on integrating digital affordances around particular learning contexts and/or settings, creating a mixed reality (MR) 'continuum' of digital experiences based on the combination of different technologies, tools, platforms and affordances. This idea of a 'digital continuum' was first proposed during the mid 1990s by Milgram and Kishino (1994), conceptualised as an immersive continuum going from the real environment (RE) end, where no digital immersion exists in the real world, all the way to the fully digitally immersive VR end, where digital immersion is at its full.

Recent literature expands the original digital continuum view – rooted in Milgram and Kishino (1994) – to now consider MR environments extending to a multi-variety of sensorial dimensions, technological tools and networked intelligent platforms, and embodied user engagement modes, creating interconnected learning ecosystems and modes of perception (see for example Mann et al., 2018; and Speicher, Hall & Nebeling, 2019). This new approach to MR is referred to as XR, where the X generally stands for 'extended reality' (referring to all the points along the MR continuum and beyond), or for 'anything reality' (accounting for the range of existing immersive technologies and denoting the imminently yet-to-come new digital affordances).

XR as a multi-dimensional immersive learning environment can be approached and understood as a dynamic and culturally-responsive 'medium', offering targeted, flexible and adaptable user experiences coming from user-centric learning design strategies and pedagogy (Aguayo, Eames & Cochrane, 2020).

Today, XR as an emergent learning approach in education invites us to re-conceptualise technology-enhanced learning from a completely different epistemological stand. We have moved from focusing on the individual and isolated use of immersive digital technology like AR and VR as 'learning tools' that can enhance and augment learning experiences and outcomes in education; to now going beyond hardware and software and consider perception, cognition, aesthetics, emotions, haptics, embodiment, contexts (space), situations (time), and culture, among others, as critical components of a purposefully designed XR learning ecosystem (Aguayo et al., 2020; Liu et al., 2017; Maas & Hughes, 2020). Imagine the educational possibilities when artificial intelligence (AI) learning algorithms connected to internet of things (IoT) devices come into play with XR in education (Cowling & Birt, 2020; Davies, 2021).

The challenge remains in knowing how to ground such epistemological and technological innovation into authentic, contextual, and tangible practice, while facilitating the balancing with non-technology mediated lived experiences in the real world (i.e. real reality (RR), Aguayo, 2017). Here, a set of XR research and practice case studies from Auckland University of Technology's AppLab are presented to showcase and discuss how XR as a new paradigm is leading the exploration of digital innovation in education.
10

Hassan, Normaziana, Basitah Taif, Rosita Mohd Tajuddin, and Shahrulnizam Hassan. "The Role of Virtual Influencers in Shaping Fashion Preferences among Malaysian Generation Alpha: Exploring Perceptions, Engagement and Consumer Trust through AI-Driven Creative Learning Spaces." International Journal of Education, Psychology and Counseling 10, no. 57 (2025): 765–83. https://doi.org/10.35631/ijepc.1057050.

Abstract:
This study investigates the dual impact of virtual influencers and AI-driven creative learning spaces on fashion preferences and ethical consumer behavior among Malaysian Generation Alpha (ages 6–13). Through a mixed-methods approach combining surveys (n=400) and experimental testing in AI-enhanced environments (n=100), the research quantifies engagement, trust, and critical thinking dynamics. Findings reveal that virtual influencers incorporating local cultural elements (e.g., Malay batik, modest Islamic fashion) achieve 75% higher engagement and 60% greater trust than generic personas. Urban-rural divides significantly moderate preferences, with urban youth favoring culturally aligned influencers and rural audiences preferring relatable, down-to-earth personas. AI-driven learning spaces, such as augmented reality (AR) classrooms and gamified modules, enhance critical thinking by 60%, enabling 75% of participants to identify sponsored content and reducing impulsive purchases by 40%. Demographic factors like gender and socioeconomic status further shape trust, with girls and higher-income families exhibiting stronger engagement. The study underscores the need for culturally tailored marketing and AI-integrated education to foster ethical decision-making. Recommendations include mandatory transparency disclosures for virtual influencers and infrastructure investments to bridge rural-urban digital divides.

Book chapters on the topic "AI-Enhanced Mixed Reality"

1

Holstein, Kenneth, Bruce M. McLaren, and Vincent Aleven. "Student Learning Benefits of a Mixed-Reality Teacher Awareness Tool in AI-Enhanced Classrooms." In Lecture Notes in Computer Science. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-93843-1_12.

2

Babu, C. V. Suresh, R. Logambigai, R. Senthamilan, Francis A. Jeswin, and Shajith N. Mohamed. "Quantum AI-Enhanced Virtual Reality." In Advances in Marketing, Customer Relationship Management, and E-Services. IGI Global, 2024. https://doi.org/10.4018/979-8-3693-7673-7.ch012.

Abstract:
This study explores the impact of Quantum AI and Virtual Reality (VR) on neuromarketing practices, aiming to understand their influence on consumer behavior and enhance marketing strategies. Utilizing a mixed-methods approach, the research incorporates qualitative analysis of case studies and quantitative assessments of consumer responses to VR-enhanced marketing campaigns. Key findings indicate that integrating Quantum AI and VR can significantly improve consumer engagement and satisfaction by creating immersive experiences tailored to individual preferences. The study concludes that these technologies not only offer innovative methods for marketers but also raise ethical considerations regarding consumer privacy and data security. The implications underscore the need for a structured framework to guide the ethical implementation of these technologies in marketing. This research contributes valuable insights into the future of neuromarketing and prepares marketers for emerging trends in the digital landscape.
3

Safak, Ilgin, Fatih Alagöz, and Emin Anarım. "Security and Privacy Mechanisms for 6G Internet of Everything Networks in Banking." In Encyclopedia of Information Science and Technology, Sixth Edition. IGI Global, 2024. http://dx.doi.org/10.4018/978-1-6684-7366-5.ch076.

Abstract:
6G, as a platform for the internet of everything, supports high data rates and low latency and satisfies the requirements for services, massive data traffic, storage, and processing, thus providing new opportunities for accessing consumer goods and digital services. Due to its enhanced autonomy, accuracy, and predictive capabilities, artificial intelligence (AI) is anticipated to play a significant role in the evolution of 6G by enabling large-scale deployments of self-optimized and automated systems, and enhancing applications and services, including augmented/virtual/mixed reality, Industry 5.0, banking, and financial services. 6G and AI can potentially revolutionize the banking and financial industry despite cost, scalability, security, privacy, and adoption constraints. This chapter discusses security and privacy concerns in 6G and potential solutions, the relevance and impact of 6G technology to the banking and financial industry, solutions and recommendations for developing secure 6G banking and financial systems, and future research directions.
4

Oyedokun, Tunde Toyese. "Digitally Enhanced World of Realities." In Advances in Business Strategy and Competitive Advantage. IGI Global, 2025. https://doi.org/10.4018/979-8-3693-9571-4.ch002.

Abstract:
The digital age has seen an unparalleled evolution in technology, particularly with immersive technologies like augmented reality (AR), virtual reality (VR), and mixed reality (MR), leading us toward a metaverse-defined future. This chapter explores their transformative impact on various industries as we stand on the brink of a convergence between digital and physical realms. Tracing the historical trajectory and pivotal milestones of AR and VR development, it examines their current state and convergence with domains like AI and IoT. Examining their applications across sectors such as healthcare, education, and retail, the chapter unveils innovative business models emerging within the metaverse. From redefining consumer behavior in e-commerce to revolutionizing traditional learning methods, immersive technologies are reshaping industries and necessitate understanding for individuals, businesses, and policymakers navigating this transformative landscape.
5

Habib, Maki K. "Robotics E-Learning Supported by Collaborative and Distributed Intelligent Environments." In Revolutionizing Education in the Age of AI and Machine Learning. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-5225-7793-5.ch005.

Abstract:
A robotics e-learning environment that facilitates tailored learning for individual students studying robotics has been developed. The developed collaborative and distributed intelligent environment (CoDIE) enables multiple users to simultaneously access remote, integrated mixed-reality facilities through the web. The developed system constitutes a robotic center that helps transfer theoretical knowledge enhanced by simulation and practical experience. It enables realistic interaction by immersing users in a shared 3D CoDIE. The system enables users to perform programming, simulations, experiments, manipulation of data and objects, diagnostics and analyses, and control and monitoring actions. Users can also receive feedback from the system or instructors. The developed system has been implemented and tested using two real manipulators and virtual robots supporting real-time tracking and simulation. Three modes of operation have been implemented: an individual robot training mode using virtual robot models, a multi-user mode in which users work together, and individual or group-based training led by an instructor.

Conference papers on the topic "AI-Enhanced Mixed Reality"

1

Cagiltay, Bilgehan, Fatih Oztank, and Selim Balcisoy. "The Case for Audio-First Mixed Reality: An AI-Enhanced Framework." In 2025 IEEE International Conference on Artificial Intelligence and eXtended and Virtual Reality (AIxVR). IEEE, 2025. https://doi.org/10.1109/aixvr63409.2025.00032.

2

Lei, Zhenhong, and Xinjun Li. "AI-Enhanced Mixed Reality Interventions with Haptic Feedback for Hand Dexterity Rehabilitation in Geriatric Movement Disorders." In 2024 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct). IEEE, 2024. https://doi.org/10.1109/ismar-adjunct64951.2024.00114.

3

Cheng, Ching-yu, Liuchuan Yu, Lap-fai Yu, and Behzad Esmaeili. "Visual Allocation of Teams In The Construction Industry: Team Situation Awareness Under Information Overload In Human-AI Collaboration." In 16th International Conference on Applied Human Factors and Ergonomics (AHFE 2025). AHFE International, 2025. https://doi.org/10.54941/ahfe1006369.

Abstract:
The integration of AI offers significant opportunities for enhancing human-machine collaboration, particularly in dynamic environments like the construction industry, where excessive information affects decision-making and coordination. This study investigates how visual attention distribution relates to situation awareness (SA) development under information overload by addressing three research questions: (1) How does visual allocation relate to individual SA under information overload? (2) How does visual allocation influence team situation awareness (TSA) formation? (3) Do high-TSA teams exhibit different visual allocation patterns compared to low-TSA teams? To answer these questions, a multi-sensor virtual reality (VR) construction environment was created as a testbed that includes realistic task simulations involving both human teammates and AI-powered cobots (e.g., drones and a robotic dog). Participants completed a pipe installation task while navigating construction hazards like falls, trips, and collisions, and while experiencing varying degrees of information overload. TSA—the shared understanding of tasks and environmental conditions—was assessed using the situation awareness global assessment technique (SAGAT), and eye movements were tracked using a Meta Quest Pro. The relationship between eye-tracking metrics and SA/TSA scores was analyzed using linear mixed-effects models (LMMs), and a two-sample t-test compared visual allocation patterns between high- and low-TSA teams. Results indicate that eye-tracking metrics can predict SA levels, and that an individual's SA may also be enhanced through dyadic communication with team members, allowing participants to acquire updates without directly seeing the changes. Furthermore, high-TSA teams allocated significantly more attention to environment-related objects and exhibited a more balanced visual allocation pattern (run count and dwell time) across task- and environment-related objects.

In contrast, low-TSA teams were more task-focused, potentially reducing their awareness of broader situational risks. These findings help identify at-risk workers using their psychophysiological responses. This research contributes to developing safer and more effective human-AI collaboration in construction and other high-risk industries by prioritizing TSA and AI-driven personalized feedback.
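The gaze metrics this abstract analyzes, dwell time and run count per area of interest (AOI), are simple aggregations over a fixation stream. The sketch below illustrates that aggregation under an assumed `(aoi_label, duration_ms)` export format; it is not the study's actual pipeline, and the field names are hypothetical:

```python
from collections import defaultdict

def dwell_time_by_aoi(fixations):
    """Aggregate eye-tracking fixations per area of interest (AOI).

    `fixations` is an assumed list of (aoi_label, duration_ms) pairs in
    chronological order, e.g. exported from a headset eye tracker.
    Returns (dwell, runs): total dwell time per AOI, and run count per
    AOI, where a run is a maximal streak of consecutive fixations on
    the same AOI.
    """
    dwell = defaultdict(int)
    runs = defaultdict(int)
    prev = None
    for aoi, duration in fixations:
        dwell[aoi] += duration
        if aoi != prev:  # a new run starts whenever the AOI changes
            runs[aoi] += 1
        prev = aoi
    return dict(dwell), dict(runs)
```

A "balanced" allocation in the abstract's sense would then correspond to similar dwell/run totals for task-related and environment-related AOIs.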
4

Biniwale, S. S., R. Lokesh, M. Elfeel, and S. K. Khataniar. "Metaverse for Subsurface - Augmented & Virtual Reality Immersive Collaboration and Robust Decision Making in Oil & Gas." In SPE Europe Energy Conference and Exhibition. SPE, 2025. https://doi.org/10.2118/225552-ms.

Abstract:
Objective: This paper summarizes investigations into the transformative potential of immersive technologies, such as augmented, virtual, and mixed reality, to redefine subsurface understanding and field development planning in the oil and gas industry. Integrating these technologies creates a cohesive digital experience where the subsurface world blends seamlessly with operational workflows, facilitating real-time collaboration and decision-making. This project uses 3D model insights and collaborative visualization to empower multidisciplinary teams to make rapid, informed decisions while optimizing costs, supporting remote operations, and enhancing resource management through improved visualization and interaction capabilities.

Methodology: The development of the immersive AR/VR collaboration platform is anchored in the Digital Innovation Project framework, combining Design Thinking with Lean and Agile principles. Design Thinking is employed to identify subsurface collaboration challenges, while Lean-Agile methodology focuses on creating prototypes and deployable solutions with a Minimum Viable Product (MVP). The technical implementation leverages a robust software stack, including Petrel for 3D reservoir modeling, Unity for interactive VR environments, and immersive device interfaces like Meta Quest 2 and Microsoft HoloLens 2. The platform supports real-time collaboration, dynamic data visualization, and avatar-based engagement across disciplines and geographies.

Results: The immersive collaboration platform was evaluated using subsurface scenarios such as waterflood optimization and well placement. It enabled geoscientists and engineers to interact with high-resolution 3D reservoir models in real time, accelerating decision-making and improving alignment across disciplines. The platform demonstrated enhanced situational awareness, reduced project turnaround time, and improved engagement. Key technical features included time-based animation of streamline simulations, VR room collaboration, and dynamic property visualization. Performance optimization techniques ensured smooth rendering and interaction for models of up to 2 million cells.

Novel Contribution: This work introduces a novel approach to applying AR/VR/MR technologies to subsurface workflows, establishing a new paradigm for efficient and informed field development planning. The initiative bridges gaps in data understanding, team alignment, and resource planning in subsurface environments by integrating immersive technologies. The platform supports real-time, multi-user collaboration within a shared virtual 3D environment, fostering synchronous decision-making and reducing travel and meeting costs. As immersive technologies mature, their intersection with AI offers a pathway to the "subsurface metaverse," where data, people, and decisions seamlessly converge in a dynamic, virtual ecosystem.
APA, Harvard, Vancouver, ISO, and other styles
5

Deshpande, Sourabh, Manish Raj Aryal, and Sam Anand. "Deep Learning-Based Recognition of Manufacturing Components Using Augmented Reality for Worker Training of Assembly Tasks." In ASME 2024 19th International Manufacturing Science and Engineering Conference. American Society of Mechanical Engineers, 2024. http://dx.doi.org/10.1115/msec2024-125279.

Full text
Abstract:
Current smart manufacturing computational tools and optimization methods leverage advances from artificial intelligence (AI), augmented/mixed/virtual reality, and predictive modeling to meet factory production goals. With significant developments in computer graphics tools, deep learning methods, and immersive experiences (metaverse), there is a need to study their seamless adaptation to AR-based manufacturing-specific scenarios, especially for worker training. In this work, we develop an augmented reality-based worker training framework aimed at helping novice technicians with complex machine component identification in several smart factory tasks and shop-floor training environments. U-Net, a deep-learning (DL) semantic segmentation network prominently applied in several manufacturing use cases, is enhanced for detailed pixel-level component identification. We formulated an automatic synthetic image generation algorithm that uses computer-aided design (CAD) models generated in Siemens NX to create images and automatic labels for training the neural network. These images are custom-rendered with variable lighting and randomized orientations to mimic real-life occlusions and improve data diversity for neural network training. In this approach, arbitrary textured materials are assigned to the 3D objects through Blender, an open-source computer graphics application, using Python scripting in the input scene environment. The backgrounds for these images are drawn from multiple real-world environment scenes. A custom Python script integrated into Blender captures image snapshots through the built-in camera system. It automatically separates render and mask images, thus saving time on the tedious manual labeling process. We pass the virtual images (both render and mask) to the DL pipeline for training.
After training with PyTorch and CUDA parallel computing, the output model in Open Neural Network Exchange (ONNX) format is imported into the AR app. The application is developed on the Unity 3D game engine platform and deployed on Android. Specifically, the Unity Barracuda toolkit maps the input and output tensors as rendering textures and recognizes components as colored pixels. We present a prototype three-stage automotive engine assembly case study to evaluate the proposed framework's effectiveness. The engine components include tools (mallet, chisel, wrench), machine tooling equipment (nuts, bolts, Allen keys, screwdriver), and assembly equipment (engine piston, cylinder head, and camshaft). The 3D-printed components mimicking engine parts are identified in real time along their exact pixel-level contours in the AR app. This framework can extend AR-based methods to visualize components in complex wiring, cluttered environments, and intricate assemblies on the factory floor, reducing the training time required by conventional approaches and providing an interactive training experience through gamification.
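The automatic labeling idea in this abstract (randomized orientation and lighting per snapshot, with render and mask images saved under matching names so no manual annotation is needed) can be sketched without Blender itself. The following is a minimal stand-in, not the authors' actual script; the file-naming scheme, parameter names, and value ranges are illustrative assumptions.

```python
import random
from dataclasses import dataclass


@dataclass
class SceneParams:
    """Randomized parameters for one synthetic snapshot (hypothetical schema)."""
    component: str
    yaw_deg: float
    pitch_deg: float
    light_intensity: float


def sample_scene(component: str, rng: random.Random) -> SceneParams:
    # Randomize orientation and lighting before each snapshot to mimic
    # real-life occlusions and improve training-data diversity, as the
    # paper's Blender script does (ranges here are assumptions).
    return SceneParams(
        component=component,
        yaw_deg=rng.uniform(0.0, 360.0),
        pitch_deg=rng.uniform(-30.0, 30.0),
        light_intensity=rng.uniform(0.5, 1.5),
    )


def snapshot_pair(component: str, index: int) -> tuple[str, str]:
    # Save render and segmentation-mask images under matching names so the
    # DL pipeline can pair them automatically (naming is illustrative).
    return (f"{component}_{index:04d}_render.png",
            f"{component}_{index:04d}_mask.png")


if __name__ == "__main__":
    rng = random.Random(42)
    for i, comp in enumerate(["piston", "camshaft", "wrench"]):
        params = sample_scene(comp, rng)
        render_name, mask_name = snapshot_pair(comp, i)
        print(render_name, mask_name, round(params.yaw_deg, 1))
```

In the actual pipeline the snapshot call would invoke Blender's camera via its Python API and the paired files would feed U-Net training; here the pairing logic alone shows why no manual labeling step is required.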
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "AI-Enhanced Mixed Reality"

1

Pasupuleti, Murali Krishna. Next-Generation Extended Reality (XR): A Unified Framework for Integrating AR, VR, and AI-driven Immersive Technologies. National Education Services, 2025. https://doi.org/10.62311/nesx/rrv325.

Full text
Abstract:
Extended Reality (XR), encompassing Augmented Reality (AR), Virtual Reality (VR), and Mixed Reality (MR), is evolving into a transformative technology with applications in healthcare, education, industrial training, smart cities, and entertainment. This research presents a unified framework integrating AI-driven XR technologies with computer vision, deep learning, cloud computing, and 5G connectivity to enhance immersion, interactivity, and scalability. AI-powered neural rendering, real-time physics simulation, spatial computing, and gesture recognition enable more realistic and adaptive XR environments. Additionally, edge computing and federated learning enhance processing efficiency and privacy in decentralized XR applications, while blockchain and quantum-resistant cryptography secure transactions and digital assets in the metaverse. The study explores the role of AI-enhanced security, deepfake detection, and privacy-preserving AI techniques to mitigate risks associated with AI-driven XR. Case studies in healthcare, smart cities, industrial training, and gaming illustrate real-world applications and future research directions in neuromorphic computing, brain-computer interfaces (BCI), and ethical AI governance in immersive environments. This research lays the foundation for next-generation AI-integrated XR ecosystems, ensuring seamless, secure, and scalable digital experiences.
Keywords: Extended Reality (XR), Augmented Reality (AR), Virtual Reality (VR), Mixed Reality (MR), Artificial Intelligence (AI), Neural Rendering, Spatial Computing, Deep Learning, 5G Networks, Cloud Computing, Edge Computing, Federated Learning, Blockchain, Cybersecurity, Brain-Computer Interfaces (BCI), Quantum Computing, Privacy-Preserving AI, Human-Computer Interaction, Metaverse.
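The abstract's claim that federated learning preserves privacy in decentralized XR rests on a simple mechanism: clients train on local data and share only model updates, which a server averages. A toy federated-averaging (FedAvg) sketch, not drawn from the report and using plain float lists in place of real tensors, illustrates the idea:

```python
from typing import List


def local_update(weights: List[float], gradient: List[float],
                 lr: float = 0.1) -> List[float]:
    # One gradient step performed on-device: only the local (private)
    # XR sensor data contributes to the gradient, and that data never
    # leaves the device.
    return [w - lr * g for w, g in zip(weights, gradient)]


def fed_avg(client_weights: List[List[float]]) -> List[float]:
    # The server averages the clients' updated models coordinate-wise;
    # it sees parameters only, never raw user data.
    n = len(client_weights)
    return [sum(ws) / n for ws in zip(*client_weights)]


if __name__ == "__main__":
    global_w = [0.0, 0.0]
    # Two simulated XR clients compute different local gradients.
    clients = [local_update(global_w, g)
               for g in ([1.0, 2.0], [3.0, -2.0])]
    print(fed_avg(clients))  # averaged model returned to all clients
```

A production system would add secure aggregation and differential privacy on top of this averaging step, but the data-stays-local property shown here is the core of the privacy argument.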
APA, Harvard, Vancouver, ISO, and other styles