To see other types of publications on this topic, follow the link: Computer augmented environment.

Dissertations / Theses on the topic 'Computer augmented environment'

Consult the top 31 dissertations / theses for your research on the topic 'Computer augmented environment.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Li, Lijiang. "Human-computer collaboration in video-augmented environment for 3D input." Thesis, University of York, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.488737.

Full text
2

Sandström, David. "Dynamic Occlusion of Virtual Objects in an 'Augmented Reality' Environment." Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-71581.

Full text
Abstract:
This thesis explores a way of increasing the perception of reality within an Augmented Reality application by making real objects able to obstruct the view of virtual objects. This mimics how real opaque objects occlude each other, and making virtual objects behave the same way improves the experience for Augmented Reality users. The solution uses Unity as the engine with plugins for ARKit and OpenCV. ARKit provides the Augmented Reality experience and can detect real-world flat surfaces on which virtual objects can be placed. OpenCV is used for image processing to detect real-world objects, which can then be translated into virtual silhouettes within Unity that can interact with, and occlude, the virtual objects. The end result is a system that can handle the occlusion in real time, while allowing both the real and virtual objects to translate and rotate within the scene while still maintaining the occlusion. The big drawback of the solution is that it requires a well-defined environment, without visual clutter and with even lighting, to work as intended. This makes it unsuitable for outdoor usage.
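As a rough illustration of the image-processing step described above, here is a minimal Python/OpenCV sketch of extracting real-object silhouettes that could drive occlusion; the thesis itself runs OpenCV inside Unity via a plugin, and the fixed threshold here is an assumption tied to the even lighting the author requires.

```python
import cv2
import numpy as np

def silhouette_mask(frame_bgr, thresh=60):
    """Binary silhouettes of real foreground objects against a plain background.

    Sketch of the OpenCV detection step only; mapping the mask into Unity
    as an occluder (e.g., via a stencil/depth mask) is not shown.
    """
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    # Dark objects on a bright background; invert if the scene is the opposite.
    _, binary = cv2.threshold(blurred, thresh, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    mask = np.zeros_like(gray)
    cv2.drawContours(mask, contours, -1, 255, thickness=cv2.FILLED)
    return mask  # white = real-object pixels that should occlude virtual content
```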
3

Beilstein, Del L. "Visual simulation of night vision goggles in a chromakeyed, augmented, virtual environment." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2003. http://library.nps.navy.mil/uhtbin/hyperion-image/03Jun%5FBeilstein.pdf.

Full text
Abstract:
Thesis (M.S. in Modeling, Virtual Environments, and Simulation)--Naval Postgraduate School, June 2003. Thesis advisor(s): Rudolph P. Darken, Joseph A. Sullivan. Includes bibliographical references (p. 77). Also available online.
4

Holstensson, Erik, Ram Hamid, and Sarbast Jundi. "Evaluation of augmented reality in a manufacturing environment : A feasibility study." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-15766.

Full text
Abstract:
Augmented Reality (AR) is a fast-emerging technology that has been applied in many fields, e.g. education, health, entertainment, gaming and tracking systems in logistics. AR technology combines the virtual world with reality by superimposing digital information onto the physical world. This study evaluates the usability of AR in an industrial environment, focusing on effectiveness, efficiency and user acceptance in comparison to other instructional media, e.g. paper-based instructions or manuals. An AR prototype was developed to be used in the usability evaluation. To evaluate the AR application in an industrial setting, an experiment was conducted. To assess user experience and acceptance, questionnaires and interviews were used, involving real assembly workers who used the AR prototype. The results of the study show that when using AR as assistance in the assembly assurance process, the number of faults and the task completion time were reduced significantly compared to the traditional methods. The users also had a positive attitude and a high level of satisfaction when using AR.
5

Lennerton, Mark J. "Exploring a chromakeyed augmented virtual environment for viability as an embedded training system for military helicopters." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2004. http://library.nps.navy.mil/uhtbin/hyperion/04Jun%5FLennerton.pdf.

Full text
Abstract:
Thesis (M.S. in Computer Science)--Naval Postgraduate School, June 2004. Thesis advisor(s): Rudolph Darken, Joseph A. Sullivan. Includes bibliographical references (p. 103-104). Also available online.
6

Hahn, Mark E. "Implementation and analysis of the Chromakey Augmented Virtual Environment (ChrAVE) version 3.0 and Virtual Environment Helicopter (VEHELO) version 2.0 in simulated helicopter training." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2005. http://library.nps.navy.mil/uhtbin/hyperion/05Jun%5FHahn%5FMark.pdf.

Full text
Abstract:
Thesis (M.S. in Information Technology Management)--Naval Postgraduate School, June 2005. Thesis Advisor(s): Joseph A. Sullivan, Rudolph Darken. Includes bibliographical references (p. 113-115). Also available online.
7

Wilson, William. "Surgical training with an augmented digital environment (SurgADE) : an adaptable approach for teaching minimally invasive surgery techniques." [Gainesville, Fla.] : University of Florida, 2004. http://purl.fcla.edu/fcla/etd/UFE0006300.

Full text
8

Hogler, Marcus. "Comparing head- and eye direction and accuracy during smooth pursuit in an augmented reality environment." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254999.

Full text
Abstract:
Smooth pursuit is the movement that occurs when the eyes closely follow an object in motion. While smooth pursuit can be achieved with a stationary head, it generally relies on the head following the visual target as well. During smooth pursuit, a coordinating vestibular mechanism, shared by both the head and the eyes, is used. Therefore, smooth pursuit can reveal much about where a person is looking based only on the direction of the head. To investigate the interplay between the eyes and the head, an application was made for the augmented reality head-mounted display Magic Leap. The application gathered data on the respective movements of the head and eyes. The data was analyzed using visualizations to find relationships within the eye-head coordination. User studies were conducted; the eyes proved to be highly accurate, and the head direction was close to the target at all times. The results point towards the possibility of using head direction as a model for visual attention in the shape of a cone. The users' head direction was a good indicator of where they put their attention, making it a valuable tool for developing augmented reality applications for head-mounted displays and smart glasses. By using only head direction, a software developer can measure where most of the users' attention is put and hence optimize the application according to this information.
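The cone model of head-direction attention suggested by these results can be stated in a few lines. A minimal sketch, where the 15° half-angle is an assumed value rather than one reported in the thesis:

```python
import numpy as np

def within_attention_cone(head_pos, head_forward, target_pos, half_angle_deg=15.0):
    """True if a target lies inside a cone along the head's forward direction."""
    to_target = np.asarray(target_pos, float) - np.asarray(head_pos, float)
    to_target /= np.linalg.norm(to_target)
    forward = np.asarray(head_forward, float)
    forward /= np.linalg.norm(forward)
    angle = np.degrees(np.arccos(np.clip(forward @ to_target, -1.0, 1.0)))
    return angle <= half_angle_deg

# Example: head at origin looking down +Z, target slightly off-axis.
print(within_attention_cone([0, 0, 0], [0, 0, 1], [0.2, 0.0, 2.0]))  # True (~5.7 deg)
```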
9

Reynal, Maxime. "Non-visual interaction concepts : considering hearing, haptics and kinesthetics for an augmented remote tower environment." Thesis, Toulouse, ISAE, 2019. http://www.theses.fr/2019ESAE0034.

Full text
Abstract:
In an effort to simplify human resource management and reduce operational costs, control towers are now increasingly designed not to be implanted directly on the airport but remotely. This concept, known as remote tower, offers a 'digital' working context: the view of the runways is broadcast remotely using cameras located on site. Furthermore, this concept could be extended to the control of several airports simultaneously from one remote tower facility, by a single air traffic controller (multiple remote tower). These concepts offer designers the possibility to develop novel forms of interaction. However, most current augmentations rely on sight, which is heavily used and therefore sometimes overloaded. In this Ph.D. work, the design and evaluation of new interaction techniques that rely on non-visual human senses (hearing, touch and proprioception) have been considered. Two experimental campaigns were led to address specific use cases. These use cases were identified during the design process by involving experts from the field, and appear relevant to controllers due to the criticality of the situations they define: a) poor visibility (heavy fog conditions, loss of video signal in the remote context), b) unauthorized movements on the ground (when pilots move their aircraft without having been previously cleared), c) runway incursion (which occurs when an aircraft crosses the holding point to enter the runway while another one is about to land), and d) handling multiple calls associated with distinct radio frequencies coming from multiple airports. The first experimental campaign aimed at quantifying the contribution of a multimodal interaction technique based on spatial sound, kinaesthetic interaction and vibrotactile feedback to address the first use case of poor visibility conditions. The purpose was to enhance controllers' perception and increase the overall level of safety by providing them a novel way to locate aircraft when deprived of sight. 22 controllers were involved in a laboratory task within a simulated environment. Objective and subjective results showed significantly higher performance in poor visibility using interactive spatial sound coupled with vibrotactile feedback, which gave the participants notably higher accuracy in degraded visibility. Meanwhile, response times were significantly longer while remaining acceptably short considering the temporal aspect of the task. The goal of the second experimental campaign was to evaluate 3 other interaction modalities and feedback addressing 3 other critical situations, namely unauthorized movements on the ground, runway incursion and calls from a secondary airport. We considered interactive spatial sound, tactile stimulation and body movements to design 3 different interaction techniques and feedback. 16 controllers participated in an ecological experiment in which they were asked to control 1 or 2 airport(s) (Single vs. Multiple operations), with augmentations activated or not. While no clear results emerged regarding the interaction modalities in multiple remote tower operations, behavioural results showed a significant increase in overall participant performance when augmentation modalities were activated in single remote control tower operations. The first campaign was the initial step in the development of a novel interaction technique that uses sound as a precise means of location. These two campaigns constituted the first steps for considering non-visual multimodal augmentations in remote tower operations.
10

Gheisari, Masoud. "An ambient intelligent environment for accessing building information in facility management operations; A healthcare facility scenario." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/52967.

Full text
Abstract:
The Architecture, Engineering, Construction, and Operations (AECO) industry is constantly searching for new methods of increasing efficiency and productivity. Facility managers, as part of the owner/operator role, work in complex and dynamic environments where critical decisions are constantly made. This decision-making process and the resulting performance can be improved by enhancing the Situation Awareness (SA) of facility managers through new digital technologies. SA, as a user-centered approach for understanding facility managers' information requirements, was used together with Mobile Augmented Reality (MAR) to develop an Ambient Intelligent (AmI) environment for accessing building information in facilities. Augmented Reality has been considered a viable option for reducing the inefficiencies of data overload by providing facility managers with an SA-based tool for visualizing their "real-world" environment with added interactive data. Moreover, Building Information Modeling (BIM) was used as the data repository for the required building information. A pilot study examined the integration of SA, MAR, and BIM. InfoSPOT (Information Surveyed Point for Observation and Tracking) was developed as a low-cost solution that leverages current AR technology, showing that it is possible to take an idealized BIM model and integrate its data and 3D information in an MAR environment. A within-subjects user participation experiment and analysis was also conducted to evaluate the usability of InfoSPOT in facility-management practices. The outcome of the statistical analysis (a one-way repeated-measures ANOVA) revealed that, on average, the mobile AR-based environment was relatively seamless and efficient for all participants in the study. Building on the InfoSPOT pilot study, in-depth research was conducted in the area of healthcare facility management, integrating SA, MAR, and BIM to develop an AmI environment in which facility managers' information requirements are superimposed on their real-world view of the facility they maintain and are interactively accessible through current mobile handheld technology. This AmI environment was compared to the traditional approach of conducting preventive and corrective maintenance using paper-based forms. The purpose of this part of the research was to investigate the hypothesis that bringing 3D BIM models of building components into an AR environment and making them accessible through handheld mobile devices would help facility managers locate those components more easily and quickly than the paper-based approach. The results of this study show that this innovative application of AR, integrated with BIM to enhance SA, has the potential to improve construction practices and, in this case, facility management.
11

Granlund, Linnéa. "Did you notice that? A comparison between auditory and vibrotactile feedback in an AR environment." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254996.

Full text
Abstract:
There are different ways to interact with different hardware; therefore, it is important to understand which factors affect the experience when designing interactions and interfaces. This study focuses on exploring how auditory and vibrotactile feedback are perceived by users when they interact in a virtual AR environment. An application was developed for the AR glasses Magic Leap with different interactions, both passive and active. An experimental study was conducted with 28 participants who interacted in this virtual environment. The study included two parts. First, the participants interacted in the virtual environment while performing a think-aloud. Thereafter they were interviewed. There were a total of three test cases: one with only auditory feedback, one with vibrotactile feedback, and a third with both auditory and vibrotactile feedback. Seven of the 28 participants acted as a control group that did not receive any feedback on their interactions. The study shows that using only vibrotactile feedback creates different impressions depending on earlier experiences with the same AR environment. Using only auditory feedback created an atmosphere that was close to reality. Having both kinds of feedback active at the same time reduced how much of the feedback was noticed, and some interactions were not noticed at all. Passive interactions were noticed more than active interactions in all cases.
12

Martinsson, Anton. "Intrusiveness VS Awareness: Laying The Groundwork For Presenting Offers To Customers With AR In A Retail Environment." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254992.

Full text
Abstract:
The term Augmented Reality (AR) was first coined back in 1968. Research on the subject would then for decades remain largely focused on technical aspects of the phenomenon. At the time, little to no attention was paid to the potential user audience or what would later be known as Human-Computer Interaction theory. Some previous studies have touched upon user satisfaction with general AR interfaces, but most studies that cover the topic of indoor navigation with AR tend to focus on technical solutions. Few try to establish any kind of visual language or research which visual interfaces are most intuitive, effective and user friendly. Consequently, this thesis investigates how to visually seek the attention of the user to present offers in an AR application for smartphones meant to be used to navigate an indoor retail environment. It does so through a user study in a real retail store in Stockholm, Sweden, where participants completed three laps around a certain part of the store using an AR indoor navigation application. For every lap, each participant tried out one of three different versions of the application. These three versions varied in how intrusive the presentation of offers was to the customer's experience with the application. The participants filled in a Likert-scale questionnaire for each of the three versions and answered some more open-ended questions at the end of every test session. The conclusion is that a balanced approach to intrusiveness is the wisest in order to make customers aware of discounts around them while not considerably annoying them. The most positively received approach presented an offer promptly to the user, but did not take up too much screen space or force the user to take any action towards it. Future studies could investigate whether there is a higher tolerance for visual intrusion among customers if the discount is considered big or very personally relevant. Subsequent studies could also use high-end AR head-mounted displays that might be more prominently used by everyday consumers in the future.
13

Stafford-Fraser, James Quentin. "Video-augmented environments." Thesis, University of Cambridge, 1996. https://www.repository.cam.ac.uk/handle/1810/272415.

Full text
14

Durchon, Hugo. "Deep learning methods for object pose estimation and reconstruction in industrial environments." Electronic Thesis or Diss., Institut polytechnique de Paris, 2025. http://www.theses.fr/2025IPPAS003.

Full text
Abstract:
This thesis addresses the challenge of implementing markerless Augmented Reality (AR) in manufacturing environments, focusing on boiler assembly lines. While AR technology shows great promise for improving industrial efficiency through operator training and assistance, its adoption has been limited by difficulties in handling complex production environments. Traditional approaches using markers become impractical when scaling across entire production lines, especially with dynamic objects and assembly structures. Through systematic investigation of various computer vision approaches, from lightweight neural networks to advanced 3D reconstruction techniques, our work culminates in an innovative end-to-end pipeline that combines neural rendering and zero-shot pose estimation techniques. This solution enables markerless AR without requiring extensive manual annotations or pre-existing 3D models, addressing key barriers to industrial AR adoption. The proposed approach demonstrates robust performance across different boiler models and assembly stages, achieving remarkable results in real production conditions. Our contributions extend beyond technical implementation to include valuable insights for deploying computer vision solutions in industrial settings, paving the way for more widespread adoption of AR technology in Industry 4.0 manufacturing environments.
15

Antone, Matthew E. "Synthesis of navigable 3-D environments from human-augmented image data." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/10265.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1996. Includes bibliographical references (p. 95-96). By Matthew E. Antone (M.Eng.).
16

Šturcová, Zdenka. "The use of computer vision techniques to augment home based sensorised environments." Thesis, University of Ulster, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.558785.

Full text
Abstract:
Sensorised environments offer opportunities to support our everyday lives, in particular towards realising the concept of 'Ageing in place'. Such environments are capable of allowing occupants to live independently by providing remote monitoring services and by supporting the completion of activities of daily living. This research focuses on augmenting sensorised environments and promoting improved healthcare services with video-based solutions. The aim was to demonstrate that video-based solutions are feasible and have wide usability and potential in health care, elderly care and, generally, within sensorised environments. This aim was addressed by considering a number of research objectives, which have been investigated and presented as a series of studies within this thesis. Specifically, the first study targeted multiple occupancy within sensorised environments, where a solution based on tracking persons through the use of video was proposed. The results show that multiple occupancy can be handled using video and that users can be successfully tracked within an environment. The second study used video to investigate repetitive behaviour patterns in persons with dementia. The experiment showed that repetitive behaviour can be extracted and successfully analysed using a single camera. Thirdly, a target group of Parkinson's disease patients is considered, for whom video analysis is used to build an automated diary describing their changing status over the day. Results showed that changes in a patient's movement abilities can be revealed from video. The final study investigated a specific type of movement disorder known as tremor. A method involving frequency analysis of tremor from video data was validated in a clinical study involving 31 participants. Furthermore, this study resulted in the development of an open-source software application for routine tremor assessment. This thesis offers a contribution to knowledge by demonstrating that video can be used to further augment sensorised environments to support non-invasive remote monitoring and assessment.
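To make the tremor-frequency idea concrete, here is a minimal sketch, assuming a displacement signal has already been extracted from video by tracking a point on the hand; the 3-12 Hz band is a commonly quoted range for pathological tremor, not a value from the thesis.

```python
import numpy as np

def dominant_tremor_frequency(displacement, fps, band=(3.0, 12.0)):
    """Estimate the dominant tremor frequency (Hz) from a tracked point."""
    signal = np.asarray(displacement, float)
    signal -= signal.mean()                      # remove the DC offset
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return freqs[in_band][np.argmax(spectrum[in_band])]

# Example: a synthetic 6 Hz tremor sampled at 30 fps for 10 s.
t = np.arange(0, 10, 1 / 30)
print(dominant_tremor_frequency(np.sin(2 * np.pi * 6 * t), fps=30))  # ~6.0
```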
17

Sjöberg, Jesper. "Making use of the environmental space in augmented reality." Thesis, Umeå universitet, Institutionen för tillämpad fysik och elektronik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-156664.

Full text
Abstract:
Augmented reality (AR) is constantly moving forward and pushing its boundaries. New applications and frameworks for mobile devices are developing rapidly. Head-mounted displays are evolving and making an impact on industries and people. In this thesis, we evaluate the concept of making use of the environmental space in augmented reality. Within the environmental space, we focus on secondary elements: elements and objects that are not in the users' focus. Augmented reality both on smartphones and on head-mounted displays is considered. Through an evaluation conducted with four participants over a week, we identify use cases and scenarios where this type of concept could be used and where it can be applied. The results of this thesis show where and how a concept such as this can be used.
18

Robertson, Cindy Marie. "Using Graphical Context to Reduce the Effects of Registration Error in Augmented Reality." Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/19775.

Full text
Abstract:
An ongoing research focus in Augmented Reality (AR) is to improve tracking and display technology in order to minimize registration errors between the graphical display and the physical world. However, registration is not always necessary for users to understand the intent of an augmentation, especially in situations where the user and the system have shared semantic knowledge of the environment. I hypothesize that adding appropriate graphical context to an augmentation can ameliorate the effects of registration errors. I establish a theoretical basis supporting the use of context based on perceptual and cognitive psychology. I introduce the notion of Adaptive Intent-Based Augmented Reality (i.e. augmented reality systems that adapt their augmentations to convey the correct intent in a scene based on an estimate of the registration error in the system). I extend the idea of communicative intent, developed for desktop graphical explanation systems by Seligmann and Feiner (Seligmann & Feiner, 1991), to include graphical context cues, and use this as the basis for the design of a series of example augmentations demonstrating the concept. I show how semantic knowledge of a scene and the intent of an augmentation can be used to generate appropriate graphical context that counters the effects of registration error. I evaluate my hypothesis in two user studies based on a Lego block-placement task. In both studies, a virtual block rendered on a head-worn display shows where to place the next physical block. In the first study, I demonstrate that a user can perform the task effectively in the presence of registration error when graphical context is included. In the second, I demonstrate that a variety of approaches to displaying graphics outside the task space are possible when sufficient graphical context is added.
19

James, Raphaël. "Augmented and Physical reality environments used in collaborative visual analysis." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG011.

Full text
Abstract:
The amount of data produced every day is constantly growing. This mass of raw information requires finding solutions to analyze and visualize it. Wall displays are a good candidate to solve this issue, as they allow a large amount of data to be displayed. However, they are shared devices made to accommodate several people, which does not allow for the existence of a private workspace. Moreover, wall displays are still physical devices that cannot easily be extended when the content to display becomes too large. In this manuscript, I explore the combination of Augmented Reality (AR) and a wall display to: first, add a personalized information space onto a wall display; and second, extend the rendering space. We focus on using an augmented reality headset (AR-HMD) to display content complementary to the context offered by the wall. We study the synergy of these two devices through two research questions: RQ1 - How can AR assist exploration and navigation of a network on a wall display, through personal view and navigation? RQ2 - How does extending the screen space of a wall display using AR change users' space exploitation during a collaborative session? For RQ1, I study the case of using AR with a wall display for navigating networks. I present four techniques, proposing different visual aid mechanisms for navigation. I then evaluate these four techniques in two experiments, one for low-accuracy path following in the network (path selection) and one for high-accuracy path following (path tracing). We report that for the path selection task, a persistent connection between the cursor in AR and the network on the wall obtained better results. In the case of path tracing, we observe that a lighter connection offered better performance. Moreover, we show the feasibility of a system where the interaction is done privately in AR for network navigation on shared screens. For RQ2, I focus on the space available in front of the wall display and consider using AR to extend the wall display. I present a system extending a wall display with AR, taking advantage of the space in front of the wall. We compare our system, which combines a wall display with AR, with a wall display alone using two collaborative tasks. We observe that users extensively use the available virtual space with our system. Although this creates an additional cost of interaction, we observe no performance difference and a real benefit from this extra space. The complexity of setting up the previously studied systems led us to study a cheaper way to use a wall display: RQ3 - Can we emulate a wall display inside a VR headset? This question has many aspects, and I focus on the capacity of VR headsets to reach the resolution necessary to replicate the user experience of a high-resolution wall display. For this, we study the optic model of VR headsets and compare it to the human vision model. Our analysis indicates that current headsets lack the resolution needed to emulate a high-resolution wall display. We confirm our analysis by running a pilot study comparing a wall display, two VR headsets, and an emulation of a perfect VR headset. I conclude this manuscript by discussing the different ways for a wall display to be combined with AR headsets, the requirements for replacing wall displays by VR/AR headsets, and by elaborating on the future work this thesis opens.
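The resolution argument in RQ3 comes down to pixels per degree (ppd). A small sketch with illustrative numbers (the headset and wall figures below are assumptions for illustration, not values from the thesis):

```python
import math

def headset_ppd(h_pixels_per_eye, h_fov_deg):
    """Average angular resolution of a headset, in pixels per degree."""
    return h_pixels_per_eye / h_fov_deg

def wall_center_ppd(pixel_pitch_m, viewer_dist_m):
    """Angular resolution at the center of a flat wall, for a given viewer."""
    deg_per_pixel = math.degrees(math.atan(pixel_pitch_m / viewer_dist_m))
    return 1.0 / deg_per_pixel

# Human foveal acuity is commonly quoted around 60 ppd.
print(headset_ppd(1832, 97))          # a recent consumer headset: ~19 ppd
print(wall_center_ppd(0.0004, 2.0))   # a 0.4 mm pixel-pitch wall at 2 m: ~87 ppd
```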
20

Hamza-Lup, Felix George. "DYNAMIC SHARED STATE MAINTENANCE IN DISTRIBUTED VIRTUAL ENVIRONMENTS." Doctoral diss., University of Central Florida, 2004. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4407.

Full text
Abstract:
Advances in computer networks and rendering systems facilitate the creation of distributed collaborative environments in which the distribution of information at remote locations allows efficient communication. Particularly challenging are distributed interactive Virtual Environments (VE) that allow knowledge sharing through 3D information. In a distributed interactive VE the dynamic shared state represents the changing information that multiple machines must maintain about the shared virtual components. One of the challenges in such environments is maintaining a consistent view of the dynamic shared state in the presence of inevitable network latency and jitter. A consistent view of the shared scene will significantly increase the sense of presence among participants and facilitate their interactive collaboration. The purpose of this work is to address the problem of latency in distributed interactive VE and to develop a conceptual model for consistency maintenance in these environments based on the participant interaction model. A review of the literature illustrates that the techniques for consistency maintenance in distributed Virtual Reality (VR) environments can be roughly grouped into three categories: centralized information management, prediction through dead reckoning algorithms, and frequent state regeneration. Additional resource management methods can be applied across these techniques for shared state consistency improvement. Some of these techniques are related to the systems infrastructure, others are related to the human nature of the participants (e.g., human perceptual limitations, area of interest management, and visual and temporal perception). An area that needs to be explored is the relationship between the dynamic shared state and the interaction with the virtual entities present in the shared scene. Mixed Reality (MR) and VR environments must bring the human participant interaction into the loop through a wide range of electronic motion sensors, and haptic devices. Part of the work presented here defines a novel criterion for categorization of distributed interactive VE and introduces, as well as analyzes, an adaptive synchronization algorithm for consistency maintenance in such environments. As part of the work, a distributed interactive Augmented Reality (AR) testbed and the algorithm implementation details are presented. Currently the testbed is part of several research efforts at the Optical Diagnostics and Applications Laboratory including 3D visualization applications using custom built head-mounted displays (HMDs) with optical motion tracking and a medical training prototype for endotracheal intubation and medical prognostics. An objective method using quaternion calculus is applied for the algorithm assessment. In spite of significant network latency, results show that the dynamic shared state can be maintained consistent at multiple remotely located sites. In further consideration of the latency problems and in the light of the current trends in interactive distributed VE applications, we propose a hybrid distributed system architecture for sensor-based distributed VE that has the potential to improve the system real-time behavior and scalability. Ph.D., School of Computer Science, Engineering and Computer Science.
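Of the consistency-maintenance categories listed above, dead reckoning is the easiest to sketch. A minimal first-order version is shown below; the dissertation's own adaptive synchronization algorithm goes further, adapting to measured latency and jitter.

```python
import numpy as np

def dead_reckon(last_pos, last_vel, last_update_time, now):
    """Predict a remote entity's position between network state updates."""
    dt = now - last_update_time
    return np.asarray(last_pos, float) + np.asarray(last_vel, float) * dt

# Between updates, each site renders the prediction, then blends toward the
# authoritative state when the next (late) update finally arrives.
print(dead_reckon([1.0, 0.0, 2.0], [0.5, 0.0, 0.0], 10.00, 10.12))  # [1.06 0. 2.]
```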
21

Chadalavada, Ravi Teja. "Human Robot Interaction for Autonomous Systems in Industrial Environments." Thesis, Chalmers University of Technology, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-55277.

Full text
Abstract:
The upcoming new generation of autonomous vehicles for transporting materials in industrial environments will be more versatile, flexible and efficient than traditional Automatic Guided Vehicles (AGV), which simply follow pre-defined paths. However, freely navigating vehicles can appear unpredictable to human workers and thus cause stress and render joint use of the available space inefficient. This work addresses the problem of providing information regarding a service robot's intention to the humans co-populating the environment. The overall goal is to make humans feel safer and more comfortable, even when they are in close vicinity of the robot. A spatial Augmented Reality (AR) system for robot intention communication, which projects proxemic information onto the shared floor space, is developed on a robotic forklift by equipping it with an LED projector. This helps in visualizing internal state information and intents on the shared floor space. The robot's ability to communicate its intentions is evaluated in realistic situations where test subjects meet the robotic forklift. A Likert scale-based evaluation, which also includes comparisons to human-human intention communication, was performed. The results show that adding even simple information, such as the trajectory and the space to be occupied by the robot in the near future, can effectively improve human responses to the robot. This kind of synergistic human-robot interaction in a work environment is expected to increase the robot's acceptability in the industry.
22

Davis, Jr Larry Dennis. "CONFORMAL TRACKING FOR VIRTUAL ENVIRONMENTS." Doctoral diss., University of Central Florida, 2004. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4393.

Full text
Abstract:
A virtual environment is a set of surroundings that appears to exist to a user through sensory stimuli provided by a computer. By virtual environment, we mean to include environments supporting the full range from VR to pure reality. A necessity for virtual environments is knowledge of the location of objects in the environment. This is referred to as the tracking problem, which points to the need for accurate and precise tracking in virtual environments. Marker-based tracking is a technique which employs fiduciary marks to determine the pose of a tracked object. A collection of markers arranged in a rigid configuration is called a tracking probe. The performance of marker-based tracking systems depends upon the fidelity of the pose estimates provided by tracking probes. The realization that tracking performance is linked to probe performance necessitates investigation into the design of tracking probes for proponents of marker-based tracking. The challenges involved with probe design include prediction of the accuracy and precision of a tracking probe, the creation of arbitrarily-shaped tracking probes, and the assessment of the newly created probes. To address these issues, we present a pioneer framework for designing conformal tracking probes. Conformal in this work means to adapt to the shape of the tracked objects and to the environmental constraints. As part of the framework, the accuracy in position and orientation of a given probe may be predicted given the system noise. The framework is a methodology for designing tracking probes based upon performance goals and environmental constraints. After presenting the conformal tracking framework, the elements used for completing the steps of the framework are discussed. We start with the application of optimization methods for determining the probe geometry. Two overall methods for mapping markers on tracking probes are presented, the Intermediary Algorithm and the Viewpoints Algorithm. Next, we examine the method used for pose estimation and present a mathematical model of error propagation used for predicting probe performance in pose estimation. The model uses a first-order error propagation, perturbing the simulated marker locations with Gaussian noise. The marker locations with error are then traced through the pose estimation process and the effects of the noise are analyzed. Moreover, the effects of changing the probe size or the number of markers are discussed. Finally, the conformal tracking framework is validated experimentally. The assessment methods are divided into simulation and post-fabrication methods. Under simulation, we discuss testing of the performance of each probe design. Then, post-fabrication assessment is performed, including accuracy measurements in orientation and position. The framework is validated with four tracking probes. The first probe is a six-marker planar probe. The predicted accuracy of the probe was 0.06 deg and the measured accuracy was 0.083 ± 0.015 deg. The second probe was a pair of concentric, planar tracking probes mounted together. The smaller probe had a predicted accuracy of 0.206 deg and a measured accuracy of 0.282 ± 0.03 deg. The larger probe had a predicted accuracy of 0.039 deg and a measured accuracy of 0.017 ± 0.02 deg. The third tracking probe was a semi-spherical head tracking probe. The predicted accuracy in orientation and position was 0.54 ± 0.24 deg and 0.24 ± 0.1 mm, respectively. The experimental accuracy in orientation and position was 0.60 ± 0.03 deg and 0.225 ± 0.05 mm, respectively. The last probe was an integrated, head-mounted display probe, created using the conformal design process. The predicted accuracy of this probe was 0.032 ± 0.02 deg in orientation and 0.14 ± 0.08 mm in position. The measured accuracy of the probe was 0.028 ± 0.01 deg in orientation and 0.11 ± 0.01 mm in position. These results constitute an order of magnitude improvement over current marker-based tracking probes in orientation, indicating the benefits of a conformal tracking approach. Also, this result translates to a predicted positional overlay error of a virtual object presented at 1 m of less than 0.5 mm, which is well above reported overlay performance in virtual environments. Ph.D., Department of Electrical and Computer Engineering, Engineering and Computer Science, Electrical & Computer Engineering.
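A Monte Carlo version of the abstract's error-prediction idea can be sketched briefly: perturb the probe's marker positions with Gaussian system noise, re-estimate the pose, and report the mean angular error. The dissertation uses a first-order analytic propagation; the sampling approach, the hypothetical probe geometry, and the noise level here are assumptions.

```python
import numpy as np

def estimate_rotation(ref, obs):
    """Least-squares rotation mapping reference markers onto observed ones
    (Kabsch algorithm)."""
    a = ref - ref.mean(axis=0)
    b = obs - obs.mean(axis=0)
    u, _, vt = np.linalg.svd(a.T @ b)
    d = np.sign(np.linalg.det(vt.T @ u.T))
    return vt.T @ np.diag([1.0, 1.0, d]) @ u.T

def mean_orientation_error_deg(markers_mm, sigma_mm=0.1, trials=1000, seed=0):
    """Mean angular pose error (deg) for a probe under Gaussian marker noise."""
    rng = np.random.default_rng(seed)
    errors = []
    for _ in range(trials):
        noisy = markers_mm + rng.normal(0.0, sigma_mm, markers_mm.shape)
        r = estimate_rotation(markers_mm, noisy)
        cos_theta = np.clip((np.trace(r) - 1.0) / 2.0, -1.0, 1.0)
        errors.append(np.degrees(np.arccos(cos_theta)))
    return float(np.mean(errors))

# A hypothetical planar six-marker probe: larger marker spans give smaller errors.
probe = np.array([[0, 0, 0], [60, 0, 0], [0, 60, 0],
                  [60, 60, 0], [30, 0, 0], [0, 30, 0]], float)
print(mean_orientation_error_deg(probe))
```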
23

Lam, Benny, and Jakob Nilsson. "Creating Good User Experience in a Hand-Gesture-Based Augmented Reality Game." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-156878.

Full text
Abstract:
The dissemination of new innovative technology requires feasibility and simplicity. The problem with marker-based augmented reality is similar to that of glove-based hand gesture recognition: both require an additional component to function. This thesis investigates the possibility of combining markerless augmented reality with appearance-based hand gesture recognition by implementing a game with good user experience. The methods employed in this research consist of a game implementation and a pre-study meant for measuring interactive accuracy and precision, and for deciding which gestures should be utilized in the game. A test environment was realized in Unity using ARKit and the Manomotion SDK. The implementation of the game used the same development tools; however, Blender was used for creating the 3D models. The results from 15 testers showed that the pinching gesture was the most favorable one. The game was evaluated with the System Usability Scale (SUS) and received a score of 70.77 among 12 game testers, which indicates that an augmented reality game whose interaction method is based solely on bare hands can be quite enjoyable.
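For reference, a SUS figure such as the one cited above is computed with the standard scoring formula (Brooke, 1996). A minimal sketch for one participant's ten 1-5 answers (the example responses are invented):

```python
def sus_score(responses):
    """System Usability Scale score (0-100) from ten 1-5 Likert responses.

    Odd-numbered items (positive statements) contribute (answer - 1);
    even-numbered items (negative statements) contribute (5 - answer).
    """
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

# A study-level score such as 70.77 is the mean of per-participant scores.
print(sus_score([4, 2, 4, 2, 5, 1, 4, 2, 4, 3]))  # 77.5
```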
24

Júnior, João Luiz Bernardes. "Modelo abrangente e reconhecimento de gestos com as mãos livres para ambientes 3D." Universidade de São Paulo, 2010. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-19012011-114850/.

Full text
Abstract:
This work's main goal is to enable the recognition of free-hand gestures for interaction in 3D environments, allowing the gestures used in each interaction context to be selected from a large set of possible gestures. This large set should increase the probability of selecting gestures that already exist in the application's domain or that have clear logical associations with the actions they command, and thus facilitate the learning, memorization, and use of these gestures. These requirements are important for entertainment and education applications, this work's main targets. A gesture model is proposed that, based on a linguistic approach, divides gestures into three components: hand posture, hand movement, and the location where the gesture starts. By combining small numbers of each of these components, the model allows the definition of tens of thousands of gestures of different types. Recognition of gestures so modeled is implemented by a finite state machine with explicit rules that combines the recognition of each component. This machine relies only on the hypothesis that gestures are segmented in time by known postures, and on no other assumption about how each component is recognized, allowing its use with different algorithms and in different contexts. While this model and this finite state machine are the work's main contributions, it also includes simple but novel algorithms for recognizing twelve basic movements and a large variety of postures using highly accessible equipment and little setup, as well as a modular framework for hand-gesture recognition in general that can also be applied to other domains and with other algorithms. Tests with users raise several questions about this form of interaction and show that the system satisfies the stated requirements.
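To make the model concrete, here is a minimal, hypothetical sketch of a recognizer driven by the (start location, posture, movement) triple and segmented by known postures. The classifier outputs, gesture table, and labels are all invented for illustration; this is not the thesis's implementation.

```python
# Illustrative sketch of the (location, posture, movement) gesture model:
# a small finite state machine that segments gestures by known postures.
# All labels and the gesture table below are hypothetical.

GESTURES = {
    # (start_location, posture, movement) -> action
    ("upper_right", "open_hand", "swipe_left"): "next_page",
    ("center", "fist", "push_forward"): "select",
}

class GestureFSM:
    IDLE, TRACKING = "idle", "tracking"

    def __init__(self):
        self.state = self.IDLE
        self.start_location = None
        self.posture = None

    def update(self, location, posture, movement):
        """Feed one frame of classifier outputs; return an action or None.

        A known (non-'none') posture opens a gesture; returning to the
        'none' posture closes it, and the accumulated triple is matched
        against the gesture table by explicit lookup.
        """
        if self.state == self.IDLE and posture != "none":
            self.state = self.TRACKING
            self.start_location = location
            self.posture = posture
        elif self.state == self.TRACKING and posture == "none":
            self.state = self.IDLE
            return GESTURES.get((self.start_location, self.posture, movement))
        return None

fsm = GestureFSM()
fsm.update("upper_right", "open_hand", None)       # gesture starts
print(fsm.update("center", "none", "swipe_left"))  # -> 'next_page'
```

Because the state machine only sees component labels, the posture and movement classifiers behind it can be swapped without touching the recognition rules, which is the decoupling the abstract emphasizes.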
APA, Harvard, Vancouver, ISO, and other styles
25

Berger, Drew A. "The Acceptance and Use of Augmented Reality in a Manufacturing Environment." Thesis, 2019.

Find full text
Abstract:
In this study, the researchers demonstrated the advantages of incorporating augmented reality (AR) technology into the daily practices of service engineers working in an advanced manufacturing environment. AR technology improved the user's communication with colleagues and content experts through real-time video conferencing and brought valuable information directly to the user on a mobile platform. This effective communication had the potential to reduce the time needed to complete a work task, even when the user was in a remote location. However, it could not be assumed that people would be willing to use this new technology simply because it was available. To promote these advantages, more research was needed to assess users' perceived value of AR technology and their willingness to accept it into their daily tasks. The purpose of this research was to demonstrate the advantages of using augmented reality technology to improve communication and access to information, and to assess the acceptance and use of this technology based on the behavioral intentions of trained engineers. Using that information and the Unified Theory of Acceptance and Use of Technology, including its extensions (UTAUT and UTAUT2) (Venkatesh, Morris, Davis, & Davis, 2003; Venkatesh, 2012), this research determined whether AR technology is viable for larger-scale adoption.
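For readers unfamiliar with UTAUT-style instruments: acceptance is typically measured by averaging Likert-scale survey items into per-construct scores. The sketch below is purely hypothetical; the item-to-construct mapping and responses are invented and are not from this thesis.

```python
# Hypothetical UTAUT-style construct scoring from 1-7 Likert responses.
# Item names, mapping, and data are illustrative assumptions only.
from statistics import mean

ITEM_MAP = {
    "performance_expectancy": ["pe1", "pe2", "pe3"],
    "effort_expectancy": ["ee1", "ee2", "ee3"],
    "behavioral_intention": ["bi1", "bi2"],
}

def construct_scores(responses):
    """Average each respondent's items into one score per construct."""
    return {construct: mean(responses[item] for item in items)
            for construct, items in ITEM_MAP.items()}

print(construct_scores({"pe1": 6, "pe2": 7, "pe3": 6,
                        "ee1": 5, "ee2": 4, "ee3": 5,
                        "bi1": 6, "bi2": 7}))
```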
APA, Harvard, Vancouver, ISO, and other styles
26

Alhakamy, A'aeshah Abduallah. "Extraction and Integration of Physical Illumination in Dynamic Augmented Reality Environments." Thesis, 2020.

Find full text
Abstract:
Although current augmented, virtual, and mixed reality (AR/VR/MR) systems offer advanced and immersive experiences in the entertainment industry across countless media forms, these systems lack correct direct and indirect illumination modeling, in which virtual objects are rendered under the same lighting conditions as the real environment. Some systems use baked global illumination (GI), pre-recorded textures, and light probes, mostly computed offline, to approximate real-time GI. Instead, illumination information can be extracted from the physical scene and used to interactively render virtual objects into the real world, producing a more realistic final scene in real time. This work approaches the problem of visual coherence in AR by proposing a system that detects the real-world lighting conditions in dynamic scenes and then uses the extracted illumination information to render the objects added to the scene. The system comprises several major components that together achieve a more realistic augmented reality outcome. First, incident light (direct illumination) is detected from the physical scene using computer vision techniques based on topological structural analysis of 2D images, with a live-feed 360° camera mounted on the AR device capturing the entire radiance map; physics-based light polarization eliminates or reduces false-positive lights, such as white surfaces, reflections, or glare, which would otherwise degrade the light detection process. Second, reflected light (indirect illumination) that bounces between real-world surfaces is simulated so that its effect is rendered onto the virtual objects, reflecting their presence in the virtual world. Third, the shading properties of the virtual objects are defined so that they exhibit correct lighting with suitable shadow casting. Fourth, geometric properties of the real scene, including plane detection, 3D surface reconstruction, and simple meshing, are incorporated into the virtual scene for more realistic depth interactions between real and virtual objects. These components are developed as methods assumed to run simultaneously in real time for photorealistic AR. The system is tested under several lighting conditions, and accuracy is evaluated by the error between real and virtual objects' cast shadows and interactions. For efficiency, rendering time is compared with previous work, and human perception is further evaluated through a user study. The overall performance of the system is investigated to keep its cost to a minimum.
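As background on the first component: topological structural analysis of binarized images is the border-following method behind OpenCV's findContours. A minimal illustration of detecting bright sources in an equirectangular frame follows; it is a sketch under assumed thresholds, not the dissertation's pipeline.

```python
# Minimal sketch: find bright light sources in an equirectangular frame
# via thresholding + contour analysis, then map centroids to directions.
# Threshold and minimum-area values are assumptions for illustration.
import cv2
import numpy as np

def detect_light_directions(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 240, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    h, w = gray.shape
    directions = []
    for c in contours:
        if cv2.contourArea(c) < 25:        # skip tiny specular blips
            continue
        m = cv2.moments(c)
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
        # Map equirectangular pixel coordinates to spherical angles (rad).
        azimuth = (cx / w) * 2.0 * np.pi - np.pi
        elevation = (0.5 - cy / h) * np.pi
        directions.append((azimuth, elevation))
    return directions
```

A polarization stage, as the abstract describes, would further filter these candidates to reject bright non-emitters such as white surfaces and glare.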
APA, Harvard, Vancouver, ISO, and other styles
27

Rauh, Sebastian Felix. "Exploring the Potential of Head Worn Displays for Manual Work Tasks in Industrial Environments." Licentiate thesis, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-206119.

Full text
Abstract:
In this thesis I explore the potential of modern Head-Worn Displays for aiding manual work tasks in the manufacturing industries. In such settings, workers are already supported by mobile hand-held devices that show instructions and enable the worker to document work tasks. However, the most important disadvantage of hand-held devices is that users need to put them aside when performing tasks that require both hands. The current generation of Head-Worn Displays promises hands-free usage with little added complexity and also enables the augmentation of workers' vision, thereby supporting the work task more effectively and efficiently. To assess the potential of Head-Worn Displays on factory floors, a series of studies was conducted, carried out either directly on the production line of a German car manufacturer together with workers or in the lab, depending on the study goals. Together with workers and managers in these industrial settings, we identified two work tasks for which Head-Worn Display support showed good potential for increasing productivity, quality, and worker comfort. The Head-Worn Display support was improved iteratively within a Human-Centred Design approach. The thesis contributes experiences from introducing Head-Worn Displays in real-world settings and over long time periods. The recorded productivity increases attributed to the Head-Worn Displays are discussed, along with worker and manager feedback. For long-term use on a factory floor, extending battery operating time was found to be of central importance; the CPU and camera were identified as the most energy-consuming components, and an approach to address this is presented. A benchmark suite is introduced to enable designers, developers, and project managers to make informed decisions when selecting Head-Worn Displays. Finally, a theoretical discussion of Head-Worn Displays is presented by situating them in a sense-based Augmented Reality taxonomy that I propose.
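The general idea behind a device benchmark suite is to time identical workloads across candidate devices and compare the results. A generic harness sketch follows; the workload and names are placeholders and are not the thesis's benchmark suite.

```python
# Generic micro-benchmark harness sketch for comparing devices on fixed
# workloads. The workload below is a placeholder CPU-bound task, not a
# component of the thesis's actual benchmark suite.
import time

def time_once(workload):
    start = time.perf_counter()
    workload()
    return time.perf_counter() - start

def benchmark(name, workload, repeats=5):
    """Run `workload` several times and report the best wall-clock time."""
    best = min(time_once(workload) for _ in range(repeats))
    print(f"{name}: {best * 1000:.1f} ms")
    return best

# Placeholder workload simulating a CPU-bound image-processing step.
benchmark("cpu_image_filter", lambda: sum(i * i for i in range(200_000)))
```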
APA, Harvard, Vancouver, ISO, and other styles
28

"Doppler Lidar Vector Retrievals and Atmospheric Data Visualization in Mixed/Augmented Reality." Doctoral diss., 2017. http://hdl.handle.net/2286/R.I.43962.

Full text
Abstract:
Environmental remote sensing has seen rapid growth in recent years, and Doppler wind lidars have gained popularity primarily due to their non-intrusive measurements and high spatial and temporal resolution. While early lidar applications relied on radial velocity measurements alone, most practical applications in wind-farm control and short-term wind prediction require knowledge of the vector wind field. Over the past several years, work on lidars has explored three primary methods of retrieving wind vectors: the homogeneous wind-field assumption, computationally intensive variational methods, and the use of multiple Doppler lidars. Building on prior research, the current three-part study first demonstrates the capabilities of single- and dual-Doppler lidar retrievals in capturing downslope windstorm-type flows occurring at Arizona's Barringer Meteor Crater as part of the METCRAX II field experiment. Next, to address the need for a reliable and computationally efficient vector retrieval for adaptive wind-farm control applications, a novel 2D vector retrieval based on a variational formulation was developed, applied to lidar scans from an offshore wind farm, and validated against data from a cup-and-vane anemometer installed on a nearby research platform. Finally, a novel data visualization technique using Mixed Reality (MR)/Augmented Reality (AR) technology is presented for visualizing data from atmospheric sensors. MR is an environment in which the user's visual perception of the real world is enhanced with live, interactive, computer-generated sensory input (in this case, data from atmospheric sensors such as Doppler lidars). A methodology using modern game-development platforms is presented and demonstrated with lidar-retrieved wind fields, as well as with several earth science datasets for education and outreach activities.
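For context, the classical homogeneous wind-field method mentioned above reduces to a least-squares fit: at azimuth az and elevation el, each radial velocity obeys v_r = u*sin(az)*cos(el) + v*cos(az)*cos(el) + w*sin(el). The sketch below illustrates that standard retrieval only; it is not the dissertation's variational method.

```python
# Velocity-azimuth-display (VAD) style sketch: fit a uniform wind vector
# (u, v, w) to radial velocities from one conical lidar scan. Illustrates
# the homogeneous wind-field method, not the variational retrieval above.
import numpy as np

def vad_fit(azimuth_deg, elevation_deg, v_radial):
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    # Design matrix rows: [sin(az)cos(el), cos(az)cos(el), sin(el)]
    A = np.column_stack([np.sin(az) * np.cos(el),
                         np.cos(az) * np.cos(el),
                         np.sin(el)])
    (u, v, w), *_ = np.linalg.lstsq(A, v_radial, rcond=None)
    return u, v, w

# Synthetic check: a 5 m/s southerly u-component should be recovered.
az = np.arange(0.0, 360.0, 10.0)
el = np.full_like(az, 15.0)
vr = 5.0 * np.sin(np.radians(az)) * np.cos(np.radians(el))
print(vad_fit(az, el, vr))  # approximately (5.0, 0.0, 0.0)
```

The assumption that the wind is uniform across the scan is exactly what variational and multi-lidar methods relax, at higher computational or hardware cost.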
APA, Harvard, Vancouver, ISO, and other styles
29

Oda, Ohan. "Supporting Multi-User Interaction in Co-Located and Remote Augmented Reality by Improving Reference Performance and Decreasing Physical Interference." Thesis, 2016. https://doi.org/10.7916/D8BC3ZCK.

Full text
Abstract:
One of the most fundamental components of our daily lives is social interaction, ranging from simple activities, such as purchasing a donut in a bakery on the way to work, to complex ones, such as instructing a remote colleague how to repair a broken automobile. While we interact with others, various challenges may arise, such as miscommunication or physical interference. In a bakery, a clerk may misunderstand the donut at which a customer was pointing due to the uncertainty of their finger direction. In a repair task, a technician may remove the wrong bolt and accidentally hit another user while replacing broken parts due to unclear instructions and lack of attention while communicating with a remote advisor. This dissertation explores techniques for supporting multi-user 3D interaction in augmented reality in a way that addresses these challenges. Augmented Reality (AR) refers to interactively overlaying geometrically registered virtual media on the real world. In particular, we address how an AR system can use overlaid graphics to assist users in referencing local objects accurately and remote objects efficiently, and prevent co-located users from physically interfering with each other. My thesis is that our techniques can provide more accurate referencing for co-located users, more efficient referencing for remote users, and less interference among users. First, we present and evaluate an AR referencing technique for shared environments that is designed to improve the accuracy with which one user (the indicator) can point out a real physical object to another user (the recipient). Our technique is intended for use in otherwise unmodeled environments in which objects in the environment, and the hand of the indicator, are interactively observed by a depth camera, and both users wear tracked see-through displays. This technique allows the indicator to bring a copy of a portion of the physical environment closer and indicate a selection in the copy. At the same time, the recipient gets to see the indicator's live interaction represented virtually in another copy that is brought closer to the recipient, and is also shown the mapping between their copy and the actual portion of the physical environment. A formal user study confirms that our technique performs significantly more accurately than comparison techniques in situations in which the participating users have sufficiently different views of the scene. Second, we extend the idea of using a copy (a virtual replica) of a physical object to help a remote expert assist a local user in performing a task in the local user's environment. We develop an approach that uses Virtual Reality (VR) or AR for the remote expert, and AR for the local user. It allows the expert to create and manipulate virtual replicas of physical objects in the local environment to refer to parts of those physical objects and to indicate actions on them. The expert demonstrates actions in 3D by manipulating virtual replicas, supported by constraints and annotations. We performed a user study of a 6DOF alignment task, a key operation in many physical task domains. We compared our approach with another 3D approach that also uses virtual replicas, in which the remote expert identifies corresponding pairs of points to align on a pair of objects, and a 2D approach in which the expert uses a 2D tablet-based drawing system similar to sketching systems developed in prior work by others on remote assistance. The study shows the 3D demonstration approach to be faster than the others.
Third, we present an interference avoidance technique (Redirected Motion) intended to lessen the chance of physical interference among users with tracked hand-held displays, while minimizing their awareness that the technique is being applied. This interaction technique warps virtual space by shifting the virtual location of a user's hand-held display. We conducted a formal user study to evaluate Redirected Motion against other approaches that either modify what a user sees or hears, or restrict the interaction capabilities users have. Our study was performed using a game we developed, in which two players moved their hand-held displays rapidly in the space around a shared gameboard. Our analysis showed that Redirected Motion effectively and imperceptibly kept players further apart physically than the other techniques. These interaction techniques were implemented using an extensible programming framework we developed for supporting a broad range of multi-user immersive AR applications. This framework, Goblin XNA, integrates a 3D scene graph with support for 6DOF tracking, rigid body physics simulation, networking, shaders, particle systems, and 2D user interface primitives. In summary, we showed that our referencing approaches can enhance multi-user AR by improving accuracy for co-located users and increasing efficiency for remote users. In addition, we demonstrated that our interference-avoidance approach can lessen the chance of unwanted physical interference between co-located users, without their being aware of its use.
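To give a flavor of such an interference-avoidance warp, here is a hypothetical proximity-based offset applied to a device's virtual position. The gain and safety radius are invented, and this simplification is not the Redirected Motion implementation described in the dissertation.

```python
# Hypothetical sketch of a redirected-motion style warp: nudge the
# virtual position of a user's hand-held display away from another
# user's device when the two get close, so physical separation is
# preserved. Gain and radius values are invented for illustration.
import numpy as np

def redirect(own_pos, other_pos, safe_radius=0.6, gain=0.3):
    """Return a warped virtual position for `own_pos` (meters, 3D)."""
    offset = own_pos - other_pos
    dist = np.linalg.norm(offset)
    if dist >= safe_radius or dist == 0.0:
        return own_pos                    # far enough apart: no warp
    # Push the virtual position outward, growing as devices get closer.
    push = gain * (safe_radius - dist) / safe_radius
    return own_pos + push * offset / dist

print(redirect(np.array([0.0, 1.0, 0.2]), np.array([0.0, 1.0, 0.5])))
```

The key property of the real technique is imperceptibility: the warp must stay below the user's detection threshold while still steering physical motion apart, which the study above evaluated empirically.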
APA, Harvard, Vancouver, ISO, and other styles
30

Chen, L., W. Tang, N. W. John, Tao Ruan Wan, and J. J. Zhang. "Context-aware mixed reality: A learning-based framework for semantic-level interaction." 2019. http://hdl.handle.net/10454/17543.

Full text
Abstract:
Mixed reality (MR) is a powerful interactive technology for new types of user experience. We present a semantic-based interactive MR framework that goes beyond current geometry-based approaches, offering a step change in generating high-level context-aware interactions. Our key insight is that by building semantic understanding into MR, we can develop a system that not only greatly enhances the user experience through object-specific behaviours, but also paves the way for solving complex interaction design challenges. In this paper, our proposed framework generates semantic properties of the real-world environment through a dense scene reconstruction and deep image understanding scheme. We demonstrate our approach by developing a material-aware prototype system for context-aware physical interactions between real and virtual objects. Quantitative and qualitative evaluation results show that the framework delivers accurate and consistent semantic information in an interactive MR environment, providing effective real-time semantic-level interactions.
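Material-aware interaction of this kind can be pictured as mapping per-surface semantic labels (e.g., from a segmentation network) to physics parameters before simulation. The mapping below is a hypothetical sketch; the labels and coefficient values are assumptions, not the paper's values.

```python
# Hypothetical sketch of material-aware physics assignment: semantic
# labels for real surfaces drive the parameters used when virtual
# objects interact with them. Labels and coefficients are illustrative.
MATERIAL_PHYSICS = {
    "carpet": {"restitution": 0.1, "friction": 0.9},
    "wood":   {"restitution": 0.4, "friction": 0.5},
    "metal":  {"restitution": 0.6, "friction": 0.3},
}
DEFAULT = {"restitution": 0.3, "friction": 0.5}

def physics_for_surface(semantic_label):
    """Look up bounce/friction parameters for a recognized material."""
    return MATERIAL_PHYSICS.get(semantic_label, DEFAULT)

# A virtual ball would bounce higher off recognized metal than carpet:
print(physics_for_surface("metal"), physics_for_surface("carpet"))
```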
APA, Harvard, Vancouver, ISO, and other styles
31

McGraw, Jordan M. "Implementation and Analysis of Co-Located Virtual Reality for Scientific Data Visualization." Thesis, 2020.

Find full text
Abstract:
Advancements in virtual reality (VR) technologies have led to overwhelming critique and acclaim in recent years. Academic researchers have already begun to take advantage of these immersive technologies across all manner of settings. Using immersive technologies, educators are able to more easily interpret complex information with students and colleagues. Despite the advantages these technologies bring, some drawbacks still remain. One particular drawback is the difficulty of engaging in immersive environments with others in a shared physical space (i.e., with a shared virtual environment). A common strategy for improving collaborative data exploration has been to use technological substitutions to make distant users feel they are collaborating in the same space. This research, however, is focused on how virtual reality can be used to build upon real-world interactions that take place in the same physical space (i.e., collaborative, co-located, multi-user virtual reality).

In this study we address two primary dimensions of collaborative data visualization and analysis as follows: [1] we detail the implementation of a novel co-located VR hardware and software system, and [2] we conduct a formal user-experience study of the novel system using the NASA Task Load Index (Hart, 1986) and introduce the Modified User Experience Inventory, a new user-study inventory based upon the Unified User Experience Inventory (Tcha-Tokey, Christmann, Loup-Escande, & Richir, 2016), to empirically observe the dependent measures of Workload, Presence, Engagement, Consequence, and Immersion. A total of 77 participants volunteered to join a demonstration of this technology at Purdue University. In groups ranging from two to four, participants shared a co-located virtual environment built to visualize point-cloud measurements of exploded supernovae. This study is observational rather than experimental. We found moderately high levels of user experience and moderate levels of workload demand in our results. We describe the implementation of the software platform and present user reactions to the technology in detail within this manuscript.
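As background on the workload measure: the NASA Task Load Index combines six subscale ratings into one score. In the weighted variant, each subscale is weighted by how many of the 15 pairwise comparisons it wins, and the weighted sum is divided by 15. A generic scoring sketch follows; the rating and weight values are invented for illustration.

```python
# Generic NASA-TLX weighted scoring sketch. Ratings are 0-100 per
# subscale; weights count how many of the 15 pairwise comparisons each
# subscale won. The values below are invented for illustration.
RATINGS = {"mental": 70, "physical": 30, "temporal": 55,
           "performance": 40, "effort": 60, "frustration": 25}
WEIGHTS = {"mental": 4, "physical": 1, "temporal": 3,
           "performance": 2, "effort": 4, "frustration": 1}

def tlx_score(ratings, weights):
    """Weighted TLX: sum(rating * weight) / 15 (weights sum to 15)."""
    assert sum(weights.values()) == 15
    return sum(ratings[k] * weights[k] for k in ratings) / 15.0

print(f"Overall workload: {tlx_score(RATINGS, WEIGHTS):.1f}")
```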
APA, Harvard, Vancouver, ISO, and other styles