Academic literature on the topic "Imagerie augmentée"

Browse the thematic lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Imagerie augmentée". Where the metadata make it available, the full text of each publication can be downloaded in PDF and its abstract read online.

Journal articles on the topic "Imagerie augmentée":

1. Meier, Walter N., Michael L. Van Woert, and Cheryl Bertoia. "Evaluation of operational SSM/I ice-concentration algorithms". Annals of Glaciology 33 (2001): 102–8. http://dx.doi.org/10.3189/172756401781818509.

The United States National Ice Center (NIC) provides weekly ice analyses of the Arctic and Antarctic using information from ice reconnaissance, ship reports and high-resolution satellite imagery. In cloud-covered areas and regions lacking imagery, the higher-resolution sources are augmented by ice concentrations derived from Defense Meteorological Satellite Program (DMSP) Special Sensor Microwave/Imager (SSM/I) passive-microwave imagery. However, the SSM/I-derived ice concentrations are limited by low resolution and uncertainties in thin-ice regions. Ongoing research at NIC is attempting to improve the utility of these SSM/I products for operational sea-ice analyses. The refinements of operational algorithms may also aid future scientific studies. Here we discuss an evaluation of the standard operational ice-concentration algorithm, Cal/Val, with a possible alternative, a modified NASA Team algorithm. The modified algorithm compares favorably with Cal/Val and is a substantial improvement over the standard NASA Team algorithm in thin-ice regions that are of particular interest to operational activities.
2. Lin, Xin, and Arthur Y. Hou. "Evaluation of Coincident Passive Microwave Rainfall Estimates Using TRMM PR and Ground Measurements as References". Journal of Applied Meteorology and Climatology 47, no. 12 (December 1, 2008): 3170–87. http://dx.doi.org/10.1175/2008jamc1893.1.

Abstract This study compares instantaneous rainfall estimates provided by the current generation of retrieval algorithms for passive microwave sensors using retrievals from the Tropical Rainfall Measuring Mission (TRMM) precipitation radar (PR) and merged surface radar and gauge measurements over the continental United States as references. The goal is to quantitatively assess surface rain retrievals from cross-track scanning microwave humidity sounders relative to those from conically scanning microwave imagers. The passive microwave sensors included in the study are three operational sounders—the Advanced Microwave Sounding Unit-B (AMSU-B) instruments on the NOAA-15, -16, and -17 satellites—and five imagers: the TRMM Microwave Imager (TMI), the Advanced Microwave Scanning Radiometer for the Earth Observing System (AMSR-E) instrument on the Aqua satellite, and the Special Sensor Microwave Imager (SSM/I) instruments on the Defense Meteorological Satellite Program (DMSP) F-13, -14, and -15 satellites. The comparisons with PR data are based on “coincident” observations, defined as instantaneous retrievals (spatially averaged to 0.25° latitude and 0.25° longitude) within a 10-min interval collected over a 20-month period from January 2005 to August 2006. Statistics of departures of these coincident retrievals from reference measurements as given by the TRMM PR or ground radar and gauges are computed as a function of rain intensity over land and oceans. Results show that over land AMSU-B sounder rain retrievals are comparable in quality to those from conically scanning radiometers for instantaneous rain rates between 1.0 and 10.0 mm h−1. This result holds true for comparisons using either TRMM PR estimates over tropical land areas or merged ground radar/gauge measurements over the continental United States as the reference. Over tropical oceans, the standard deviation errors are comparable between imager and sounder retrievals for rain intensities above 5 mm h−1, below which the imagers are noticeably better than the sounders; systematic biases are small for both imagers and sounders. The results of this study suggest that in planning future satellite missions for global precipitation measurement, cross-track scanning microwave humidity sounders on operational satellites may be used to augment conically scanning microwave radiometers to provide improved temporal sampling over land without degradation in the quality of precipitation estimates.
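The departure statistics described above (bias and standard deviation of coincident retrievals relative to a reference, computed as a function of rain intensity) can be illustrated with a minimal sketch. The rain rates and bin edges below are synthetic placeholders, not the study's data or code.

import numpy as np

# Hypothetical coincident rain-rate pairs (mm/h): a satellite retrieval and a
# reference (e.g., spaceborne radar or gauge-adjusted ground radar).
rng = np.random.default_rng(0)
reference = rng.gamma(shape=1.5, scale=2.0, size=5000)       # "true" rain rates
retrieval = reference * rng.lognormal(0.0, 0.3, size=5000)   # retrieval with multiplicative error

# Bin by reference intensity and report bias and standard deviation of the departures.
edges = [0.5, 1.0, 2.0, 5.0, 10.0, 20.0]
for lo, hi in zip(edges[:-1], edges[1:]):
    sel = (reference >= lo) & (reference < hi)
    dep = retrieval[sel] - reference[sel]
    print(f"{lo:4.1f}-{hi:4.1f} mm/h  n={sel.sum():4d}  bias={dep.mean():+.2f}  std={dep.std():.2f}")
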
3. Chancia, Robert, Jan van Aardt, Sarah Pethybridge, Daniel Cross, and John Henderson. "Predicting Table Beet Root Yield with Multispectral UAS Imagery". Remote Sensing 13, no. 11 (June 2, 2021): 2180. http://dx.doi.org/10.3390/rs13112180.

Timely and accurate monitoring has the potential to streamline crop management, harvest planning, and processing in the growing table beet industry of New York state. We used unmanned aerial system (UAS) combined with a multispectral imager to monitor table beet (Beta vulgaris ssp. vulgaris) canopies in New York during the 2018 and 2019 growing seasons. We assessed the optimal pairing of a reflectance band or vegetation index with canopy area to predict table beet yield components of small sample plots using leave-one-out cross-validation. The most promising models were for table beet root count and mass using imagery taken during emergence and canopy closure, respectively. We created augmented plots, composed of random combinations of the study plots, to further exploit the importance of early canopy growth area. We achieved a R2 = 0.70 and root mean squared error (RMSE) of 84 roots (~24%) for root count, using 2018 emergence imagery. The same model resulted in a RMSE of 127 roots (~35%) when tested on the unseen 2019 data. Harvested root mass was best modeled with canopy closing imagery, with a R2 = 0.89 and RMSE = 6700 kg/ha using 2018 data. We applied the model to the 2019 full-field imagery and found an average yield of 41,000 kg/ha (~40,000 kg/ha average for upstate New York). This study demonstrates the potential for table beet yield models using a combination of radiometric and canopy structure data obtained at early growth stages. Additional imagery of these early growth stages is vital to develop a robust and generalized model of table beet root yield that can handle imagery captured at slightly different growth stages between seasons.
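The modelling approach described above (a reflectance feature paired with canopy area, evaluated with leave-one-out cross-validation and reported as R² and RMSE) can be sketched generically as follows. The plot values, feature choice, and model are illustrative assumptions and do not reproduce the study's data or results.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# Hypothetical per-plot features: a vegetation index and canopy area, plus observed root counts.
rng = np.random.default_rng(1)
ndvi = rng.uniform(0.3, 0.9, size=30)
canopy_area = rng.uniform(1.0, 6.0, size=30)                      # m^2
root_count = 150 + 200 * ndvi + 40 * canopy_area + rng.normal(0, 20, size=30)

X = np.column_stack([ndvi, canopy_area])

# Leave-one-out cross-validation: each plot is predicted by a model fitted on all other plots.
pred = cross_val_predict(LinearRegression(), X, root_count, cv=LeaveOneOut())
rmse = np.sqrt(np.mean((pred - root_count) ** 2))
r2 = np.corrcoef(pred, root_count)[0, 1] ** 2
print(f"LOOCV RMSE = {rmse:.1f} roots, R^2 = {r2:.2f}")
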
4. Logaldo, Mara. "Augmented Bodies: Functional and Rhetorical Uses of Augmented Reality in Fashion". Pólemos 10, no. 1 (April 1, 2016): 125–41. http://dx.doi.org/10.1515/pol-2016-0007.

Abstract Augmented Reality (AR) is increasingly changing our perception of the world. The spreading of Quick Response (QR), Radio Frequency (RFID) and AR tags has provided ways to enrich physical items with digital information. By a process of alignment the codes can be read by the cameras contained in handheld devices or special equipment and add computer-generated contents – including 3-D imagery – to real objects in real time. As a result, we feel we belong to a multi-layered dimension, to a mixed environment where the real and the virtual partly overlap. Fashion has been among the most responsive domains to this new technology. Applications of AR in the field have already been numerous and diverse: from Magic Mirrors in department stores to 3-D features in fashion magazines; from augmented fashion shows, where models are covered with tags or transformed into walking holograms, to advertisements consisting exclusively of more or less magnified QR codes. Bodies are thus at the same time augmented and encrypted, offered to the eye of the digital camera to be transfigured and turned into a secret language which, among other functions, can also have that of becoming a powerful tool to bypass censorship.
5. Mihara, Masahito, Hiroaki Fujimoto, Noriaki Hattori, Hironori Otomune, Yuta Kajiyama, Kuni Konaka, Yoshiyuki Watanabe, et al. "Effect of Neurofeedback Facilitation on Poststroke Gait and Balance Recovery". Neurology 96, no. 21 (April 20, 2021): e2587-e2598. http://dx.doi.org/10.1212/wnl.0000000000011989.

Objective: To test the hypothesis that supplementary motor area (SMA) facilitation with functional near-infrared spectroscopy–mediated neurofeedback (fNIRS-NFB) augments poststroke gait and balance recovery, we conducted a 2-center, double-blind, randomized controlled trial involving 54 Japanese patients using the 3-meter Timed Up and Go (TUG) test. Methods: Patients with subcortical stroke-induced mild to moderate gait disturbance more than 12 weeks from onset underwent 6 sessions of SMA neurofeedback facilitation during gait- and balance-related motor imagery using fNIRS-NFB. Participants were randomly allocated to intervention (28 patients) or placebo (sham: 26 patients). In the intervention group, the fNIRS signal contained participants' cortical activation information. The primary outcome was TUG improvement 4 weeks postintervention. Results: The intervention group showed greater improvement in the TUG test (12.84 ± 15.07 seconds, 95% confidence interval 7.00–18.68) than the sham group (5.51 ± 7.64 seconds, 95% confidence interval 2.43–8.60; group difference 7.33 seconds, 95% CI 0.83–13.83; p = 0.028), even after adjusting for covariates (group × time interaction; F(1.23, 61.69) = 4.50, p = 0.030, partial η² = 0.083). Only the intervention group showed significantly increased imagery-related SMA activation and enhancement of resting-state connectivity between SMA and ventrolateral premotor area. Adverse effects associated with fNIRS-mediated neurofeedback intervention were absent. Conclusion: SMA facilitation during motor imagery using fNIRS neurofeedback may augment poststroke gait and balance recovery by modulating the SMA and its related network. Classification of Evidence: This study provides Class III evidence that for patients with gait disturbance from subcortical stroke, SMA neurofeedback facilitation improves TUG time (UMIN000010723 at UMIN-CTR; umin.ac.jp/english/).
6. Neumann, Ulrich, Suya You, Jinhui Hu, Bolan Jiang, and Ismail Oner Sebe. "Visualizing Reality in an Augmented Virtual Environment". Presence: Teleoperators and Virtual Environments 13, no. 2 (April 2004): 222–33. http://dx.doi.org/10.1162/1054746041382366.

An Augmented Virtual Environment (AVE) fuses dynamic imagery with 3D models. An AVE provides a unique approach to visualizing spatial relationships and temporal events that occur in real-world environments. A geometric scene model provides a 3D substrate for the visualization of multiple image sequences gathered by fixed or moving image sensors. The resulting visualization is that of a world-in-miniature that depicts the corresponding real-world scene and dynamic activities. This paper describes the core elements of an AVE system, including static and dynamic model construction, sensor tracking, and image projection for 3D visualization.
7. Gomes, José Duarte Cardoso, Mauro Jorge Guerreiro Figueiredo, Lúcia da Graça Cruz Domingues Amante, and Cristina Maria Cardoso Gomes. "Augmented Reality in Informal Learning Environments". International Journal of Creative Interfaces and Computer Graphics 7, no. 2 (July 2016): 39–55. http://dx.doi.org/10.4018/ijcicg.2016070104.

Augmented Reality (AR) allows computer-generated imagery information to be overlaid onto a live real world environment in real-time. Technological advances in mobile computing devices (MCD) such as smartphones and tablets (internet access, built-in cameras and GPS) made a greater number of AR applications available. This paper presents the Augmented Reality Musical Gallery (ARMG) exhibition, enhanced by AR. ARMG focuses the twentieth century music history and it is aimed to students from the 2nd Cycle of basic education in Portuguese public schools. In this paper, we will introduce the AR technology and address topics as constructivism, art education, student motivation, and informal learning environments. We conclude by presenting the first part of the ongoing research conducted among a sample group of students contemplating the experiment in educational context.
8. Gawehn, Matthijs, Rafael Almar, Erwin W. J. Bergsma, Sierd de Vries, and Stefan Aarninkhof. "Depth Inversion from Wave Frequencies in Temporally Augmented Satellite Video". Remote Sensing 14, no. 8 (April 12, 2022): 1847. http://dx.doi.org/10.3390/rs14081847.

Optical satellite images of the nearshore water surface offer the possibility to invert water depths and thereby constitute the underlying bathymetry. Depth inversion techniques based on surface wave patterns can handle clear and turbid waters in a variety of global coastal environments. Common depth inversion algorithms require video from shore-based camera stations, UAVs or Xband-radars with a typical duration of minutes and at framerates of 1–2 fps to find relevant wave frequencies. These requirements are often not met by satellite imagery. In this paper, satellite imagery is augmented from a sequence of 12 images of Capbreton, France, collected over a period of ∼1.5 min at a framerate of 1/8 fps by the Pleiades satellite, to a pseudo-video with a framerate of 1 fps. For this purpose, a recently developed method is used, which considers spatial pathways of propagating waves for temporal video reconstruction. The augmented video is subsequently processed with a frequency-based depth inversion algorithm that works largely unsupervised and is openly available. The resulting depth estimates approximate ground truth with an overall depth bias of −0.9 m and an interquartile range of depth errors of 5.1 m. The acquired accuracy is sufficiently high to correctly predict wave heights over the shoreface with a numerical wave model and to find hotspots where wave refraction leads to focusing of wave energy that has potential implications for coastal hazard assessments. A more detailed depth inversion analysis of the nearshore region furthermore demonstrates the possibility to detect sandbars. The combination of image augmentation with a frequency-based depth inversion method shows potential for broad application to temporally sparse satellite imagery and thereby aids in the effort towards globally available coastal bathymetry data.
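Frequency-based depth inversion of the kind mentioned above generally rests on the linear dispersion relation for surface gravity waves, omega^2 = g * k * tanh(k * h), solved for the depth h given an observed wave frequency and wavenumber. The sketch below is a generic illustration of that inversion with made-up observations; it is not the open-source algorithm used in the paper.

import numpy as np
from scipy.optimize import brentq

G = 9.81  # gravitational acceleration (m/s^2)

def invert_depth(frequency_hz, wavenumber, h_max=100.0):
    """Solve omega^2 = g * k * tanh(k * h) for the water depth h (in metres)."""
    omega = 2.0 * np.pi * frequency_hz
    residual = lambda h: omega**2 - G * wavenumber * np.tanh(wavenumber * h)
    if residual(h_max) > 0:          # no root below h_max: wave is effectively in deep water
        return np.nan
    return brentq(residual, 1e-3, h_max)

# Hypothetical (frequency, wavenumber) pairs extracted from an augmented image sequence.
observations = [(0.08, 0.035), (0.10, 0.050), (0.12, 0.080)]
for f, k in observations:
    print(f"f = {f:.2f} Hz, k = {k:.3f} rad/m  ->  depth ~ {invert_depth(f, k):.1f} m")
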
9. Kuny, S., H. Hammer, and A. Thiele. "CNN Based Vehicle Track Detection in Coherent SAR Imagery: An Analysis of Data Augmentation". International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B1-2022 (May 30, 2022): 93–98. http://dx.doi.org/10.5194/isprs-archives-xliii-b1-2022-93-2022.

Abstract. The coherence image as a product of a coherent SAR image pair can expose even subtle changes in the surface of a scene, such as vehicle tracks. For machine learning models, the large amount of required training data often is a crucial issue. A general solution for this is data augmentation. Standard techniques, however, were predominantly developed for optical imagery, thus do not account for SAR specific characteristics and thus are only partially applicable to SAR imagery. In this paper several data augmentation techniques are investigated for their performance impact regarding a CNN based vehicle track detection with the aim of generating an optimized data set. Quantitative results are shown on the performance comparison. Furthermore, the performance of the fully-augmented data set is put into relation to the training with a large non-augmented data set.
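As a generic illustration of the kind of data augmentation discussed above, the sketch below produces variants of an image chip with 90-degree rotations, flips, and a multiplicative speckle-like perturbation, which is often preferred over additive optical-style noise for SAR-derived imagery. The parameters and noise model are illustrative assumptions, not the augmentation set evaluated in the paper.

import numpy as np

rng = np.random.default_rng(42)

def augment_chip(chip, n_variants=4):
    """Return augmented variants of a 2D coherence/SAR image chip with values in [0, 1]."""
    variants = []
    for _ in range(n_variants):
        out = np.rot90(chip, k=rng.integers(0, 4))        # random 90-degree rotation
        if rng.random() < 0.5:
            out = np.flipud(out)
        if rng.random() < 0.5:
            out = np.fliplr(out)
        speckle = rng.gamma(shape=4.0, scale=0.25, size=out.shape)   # unit-mean multiplicative noise
        variants.append(np.clip(out * speckle, 0.0, 1.0))
    return variants

chip = rng.random((64, 64))            # hypothetical coherence chip
augmented = augment_chip(chip)
print(len(augmented), augmented[0].shape)
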
10. Bernardes, Sergio, Margueritte Madden, Ashurst Walker, Andrew Knight, Nicholas Neel, Akshay Mendki, Dhaval Bhanderi, Andrew Guest, Shannon Healy, and Thomas Jordan. "Emerging Geospatial Technologies in Environmental Research, Education, and Outreach". Geosfera Indonesia 5, no. 3 (December 30, 2020): 352. http://dx.doi.org/10.19184/geosi.v5i3.20719.

Drawing on the historical importance of visual interpretation for image understanding and knowledge discovery, emerging technologies in geovisualization are incorporated into research, education and outreach at the Center for Geospatial Research (CGR) in the Department of Geography at the University of Georgia (UGA), USA. This study aimed to develop the 3D Immersion and Geovisualization (3DIG) system consisting of uncrewed aerial systems (UAS) for data acquisition, augmented and virtual reality headsets and mobile devices, an augmented reality digital sandbox, and a video wall. We integrated data products from the UAS imagery, including digital image mosaics and 3D models, with readily available gaming engine software to create augmented and virtual reality immersive visualizations. The use of 3DIG in research is demonstrated in a case study documenting the seasonal growth of vegetables in small gardens with a time series of 3D crop models generated from UAS imagery and Structure from Motion photogrammetry. Demonstrations of 3DIG in geography and geology courses, as well as public events, also indicate the benefits of emerging geospatial technologies for creating active learning environments and fostering participatory community engagement. Keywords: Environmental Education; Geovisualization; Augmented Reality; Virtual Reality; UAS; Photogrammetry.

Theses on the topic "Imagerie augmentée":

1. Poirier, Stéphane. "Estimation de pose omnidirectionnelle dans un contexte de réalité augmentée". Thesis, Université Laval, 2012. http://www.theses.ulaval.ca/2012/28703/28703.pdf.

Camera pose estimation is a fundamental problem of augmented reality, and enables registration of a model to the reality. An accurate estimate of the pose is often critical in infrastructure engineering. Omnidirectional images cover a larger field of view than planar images commonly used in AR. This property can be beneficial to pose estimation. However, no existing work present results clearly showing accuracy gains. Our objective is therefore to quantify the accuracy of omnidirectional pose estimation and test it in practice. We propose a pose estimation method for omnidirectional images and have measured its accuracy using automated simulations. Our results show that the large field of view of omnidirectional images increases pose accuracy, compared to poses from planar images. We also tested our method in practice, using data from real environments and discuss challenges and limitations to its use in practice.
2. Maman, Didier. "Recalage de modèles tridimensionnels sur des images réelles : application à la modélisation interactive d'environnement par des techniques de réalité augmentée". Paris, ENMP, 1998. http://www.theses.fr/1998ENMP0820.

During a teleoperated intervention, knowledge of the geometric model of the remote site allows the computer system to provide valuable assistance to the operator. It can, for example, prevent collisions between the robot and objects at the site, or constrain the motion of a tool relative to the geometry of the part to be machined. These models also allow the system to carry out certain tasks autonomously, or to overlay visual indicators on the video image. In order to obtain these models quickly and reliably, we consider a collaboration between the operator and a vision system. The operator, who controls the action through a stereoscopic visualization interface, immediately and selectively recognizes the objects of interest for the mission. He is also able to indicate their approximate location through simple visual feedback. The vision system, strongly aided by this pre-positioning, can in turn determine the location of the objects with greater accuracy. An automatic registration algorithm based on the visible contours of the objects is employed. Because of time constraints related to the application, the result of the automatic registration must be provided quickly. To meet this need, the proposed method combines two types of representations for the object models. The first is a polyhedral approximation of the model, used to establish a correspondence between the model contours and the contours extracted from the images. The registration itself (error minimization) can also be performed on this approximate representation; in that case it provides a fast but imprecise answer for objects that are curved by nature. The second representation describes the curved contours exactly and makes it possible to determine the configuration of the models with greater precision.
3. Mouktadiri, Ghizlane. "Angiovision - Pose d'endoprothèse aortique par angionavigation augmentée". PhD thesis, INSA de Lyon, 2013. http://tel.archives-ouvertes.fr/tel-00943465.

We developed a numerical model of the treatment of abdominal aortic aneurysms (AAA) using finite element analysis (FEA). The objective is to simulate the different stages of the endovascular procedure in the preoperative phase. To this end, we first carried out macro- and nano-scale experimental studies of rupture, indentation, traction, and friction of the metallic tools and biomaterials. We then developed a new approach for modelling highly angulated and heterogeneous anatomical structures, and finally created a numerical mock-up of the interaction between metallic tools and biomaterials. In this model, we took into account the real geometry reconstructed from the scans, a local characterization of the mechanical properties of the guidewire/catheter, a mapping of the composite material properties as a function of wall quality, and a projection of the arterial environment into the simulation tool. Our results were validated by registration between clinical data and our simulation for a given group of patients with highly tortuous and calcified arteries. This surgical planning tool makes it possible to precisely control endovascular navigation intraoperatively, to reliably predict the feasibility of the surgery, and to optimize the number of metallic tools for each patient, taking into account the risk of rupture in areas with strong tortuosity and heterogeneity.
4. Crespel, Thomas. "Optical and software tools for the design of a new transparent 3D display". Thesis, Bordeaux, 2019. http://www.theses.fr/2019BORD0366.

We live exciting times where new types of displays are made possible, and current challenges focus on enhancing user experience. As examples, we witness the emergence of curved, volumetric, head-mounted, autostereoscopic, or transparent displays, among others, with more complex sensors and algorithms that enable sophisticated interactions.This thesis aims at contributing to the creation of such novel displays. In three concrete projects, we combine both optical and software tools to address specific applications with the ultimate goal of designing a three-dimensional display. Each of these projects led to the development of a working prototype based on the use of picoprojectors, cameras, optical elements, and custom software.In a first project, we investigated spherical displays: they are more suitable for visualizing spherical data than regular flat 2D displays, however, existing solutions are costly and difficult to build due to the requirement of tailored optics. We propose a low-cost multitouch spherical display that uses only off-the-shelf, low-cost, and 3D-printed elements to make it more accessible and reproducible. Our solution uses a focus-free projector and an optical system to cover a sphere from the inside, infrared finger tracking for multitouch interaction, and custom software to link both. We leverage the use of low-cost material by software calibrations and corrections.We then extensively studied wedge-shaped light guides, in which we see great potential and that became the center component of the rest of our work. Such light guides were initially devised for flat and compact projection-based displays but in this project we exploit them in a context of acquisition. We seek to image constrained locations that are not easily accessible with regular cameras due to the lack of space in front of the object of interest. Our idea is to fold the imaging distance into a wedge guide thanks to prismatic elements. With our prototype, we validated various applications in the archaeological field.The skills and expertise that we acquired during both projects allowed us to design a new transparent autostereoscopic display. Our solution overcomes some limitations of augmented reality displays allowing a user to see both a direct view of the real world as well as a stereoscopic and view-dependent augmentation without any wearable or tracking. The principle idea is to use a wedge light guide, a holographic optical element, and several projectors, each of them generating a different viewpoint. Our current prototype has five viewpoints, and more can be added. This new display has a wide range of potential applications in the augmented reality field
5. Meshkat Alsadat, Shabnam. "Analysis of camera pose estimation using 2D scene features for augmented reality applications". Master's thesis, Université Laval, 2018. http://hdl.handle.net/20.500.11794/30281.

Augmented reality (AR) has recently made a huge impact on field engineers and workers in the construction industry, as well as the way they interact with architectural plans. AR brings in a superimposition of the 3D model of a building onto the 2D image not only as the big picture, but also as an intricate representation of what is going to be built. In order to insert a 3D model, the camera has to be localized with respect to its surroundings. Camera localization consists of finding the exterior parameters (i.e. its position and orientation) of the camera with respect to the viewed scene and its characteristics. In this thesis, camera pose estimation methods using circle-ellipse and straight-line correspondences have been investigated. Circles and lines are two of the geometrical features most commonly present in structures and buildings. Based on the relationship between the 3D features and their corresponding 2D data detected in the image, the position and orientation of the camera are estimated.
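The underlying task (recovering the camera position and orientation from correspondences between known 3D scene features and their detected 2D projections) can be illustrated with a generic point-based Perspective-n-Point example using OpenCV. This is a stand-in for, not an implementation of, the circle-ellipse and line-based methods investigated in the thesis; all coordinates and intrinsics below are invented.

import numpy as np
import cv2

# Hypothetical 3D scene points (metres), e.g., corners and details of a building facade.
object_points = np.array([[0.0, 0.0, 0.0], [4.0, 0.0, 0.0], [4.0, 3.0, 0.0],
                          [0.0, 3.0, 0.0], [2.0, 1.5, 1.0], [1.0, 2.5, 0.5]])

# Pinhole intrinsics assumed known from calibration; no lens distortion.
K = np.array([[800.0, 0.0, 480.0], [0.0, 800.0, 320.0], [0.0, 0.0, 1.0]])
dist = np.zeros(5)

# Synthesize 2D detections by projecting the points with a known ground-truth pose,
# then recover that pose with solvePnP (in practice the 2D points come from a detector).
rvec_true = np.array([0.1, -0.2, 0.05])
tvec_true = np.array([-1.5, -1.0, 8.0])
image_points, _ = cv2.projectPoints(object_points, rvec_true, tvec_true, K, dist)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
R, _ = cv2.Rodrigues(rvec)               # rotation vector -> 3x3 rotation matrix
camera_position = -R.T @ tvec            # camera centre expressed in the scene frame
print("recovered translation:", tvec.ravel())       # should match tvec_true
print("camera position (m):", camera_position.ravel())
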
6. Ferretti, Gilbert. "Endoscopie virtuelle des bronches : études pré-cliniques et cliniques". Université Joseph Fourier (Grenoble), 1999. http://www.theses.fr/1999GRE19001.

7. Fabre, Diandra. "Retour articulatoire visuel par échographie linguale augmentée : développements et application clinique". Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAT076/document.

In the framework of speech therapy for articulatory troubles associated with tongue misplacement, providing a visual feedback might be very useful for both the therapist and the patient, as the tongue is not a naturally visible articulator. In the last years, ultrasound imaging has been successfully applied to speech therapy in English speaking countries, as reported in several case studies. The assumption that visual articulatory biofeedback may facilitate the rehabilitation of the patient is supported by studies on the links between speech production and perception. During speech therapy sessions, the patient seems to better understand his/her tongue movements, despite the poor quality of the image due to inherent noise and the lack of information about other speech articulators. We develop in this thesis the concept of augmented lingual ultrasound. We propose two approaches to improve the raw ultrasound image, and describe a first clinical application of this device.The first approach focuses on tongue tracking in ultrasound images. We propose a method based on supervised machine learning, where we model the relationship between the intensity of all the pixels of the image and the contour coordinates. The size of the images and of the contours is reduced using a principal component analysis, and a neural network models their relationship. We developed speaker-dependent and speaker-independent implementations and evaluated the performances as a function of the amount of manually annotated contours used as training data. We obtained an error of 1.29 mm for the speaker-dependent model with only 80 annotated images, which is better than the performance of the EdgeTrak reference method based on active contours.The second approach intends to automatically animate an articulatory talking head from the ultrasound images. This talking head is the avatar of a reference speaker that reveals the external and internal structures of the vocal tract (palate, pharynx, teeth, etc.). First, we build a mapping model between ultrasound images and tongue control parameters acquired on the reference speaker. We then adapt this model to new speakers referred to as source speakers. This adaptation is performed by the Cascaded Gaussian Mixture Regression (C-GMR) technique based on a joint model of the ultrasound data of the reference speaker, control parameters of the talking head, and adaptation ultrasound data of the source speaker. This approach is compared to a direct GMR regression between the source speaker data and the control parameters of the talking head. We show that C-GMR approach achieves the best compromise between amount of adaptation data and prediction quality. We also evaluate the generalization capability of the C-GMR approach and show that prior information of the reference speaker helps the model generalize to articulatory configurations of the source speaker unseen during the adaptation phase.Finally, we present preliminary results of a clinical application of augmented ultrasound imaging to a population of patients after partial glossectomy. We evaluate the use of visual feedback of the patient’s tongue in real time and the use of sequences recorded with a speech therapist to illustrate the targeted articulation. Classical speech therapy probes are led after each series of sessions. The first results show an improvement of the patients’ performance, especially for tongue placement
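The image-to-contour mapping described above (principal component analysis applied to both the image pixels and the contour coordinates, with a neural network linking the two reduced spaces) can be sketched generically as follows. The synthetic data, component counts, and network size are illustrative assumptions, not the configuration used in the thesis.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic stand-ins: 200 "ultrasound frames" (32x32 pixels, flattened) and matching
# tongue contours (20 points, x/y flattened to 40 values). Real data would be annotated frames.
images = rng.random((200, 32 * 32))
contours = rng.random((200, 40))

# Reduce both spaces with PCA (eigen-images and eigen-contours).
pca_img = PCA(n_components=30).fit(images)
pca_ctr = PCA(n_components=8).fit(contours)
X = pca_img.transform(images)
Y = pca_ctr.transform(contours)

# A small neural network learns the mapping between the two reduced spaces.
net = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
net.fit(X[:160], Y[:160])                                   # train on the first 160 frames

# Predict contours for held-out frames and map them back to point coordinates.
pred = pca_ctr.inverse_transform(net.predict(pca_img.transform(images[160:])))
print(pred.shape)                                           # (40, 40): 40 frames x 40 contour values
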
8. Agustinos, Anthony. "Navigation augmentée d'informations de fluorescence pour la chirurgie laparoscopique robot-assistée". Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAS033/document.

Laparoscopic surgery faithfully reproduces the principles of conventional surgery with minimal physical aggression. While this surgery appears to be very beneficial for the patient, it is a difficult surgery where the complexity of the surgical act is increased compared with conventional surgery. This complexity is partly due to the manipulation of surgical instruments and to viewing the surgical scene (including the restricted field of view of a conventional endoscope). The decisions of the surgeon could be improved by identifying critical or not directly visible areas of interest in the surgical scene. My research aimed to combine robotics, computer vision and fluorescence to provide an answer to these problems: fluorescence imaging provides additional visual information to assist the surgeon in determining areas to operate on or to be avoided (for example, visualization of the cystic duct during cholecystectomy). Robotics will provide the accuracy and efficiency of the surgeon's gesture as well as a visualization and a "more intuitive" tracking of the surgical scene. The combination of these two technologies will help guide and secure the surgical gesture. A first part of this work consisted in extracting visual information in both imaging modalities (laparoscopy/fluorescence). Methods for real-time 2D/3D localization of laparoscopic surgical instruments in the laparoscopic image and of anatomical targets in the fluorescence image were designed and developed. A second part consisted in exploiting the bimodal visual information to develop control laws for robotic endoscope-holder and instrument-holder arms. Visual servoing controls of a robotic endoscope holder to track one or more instruments in the laparoscopic image or a target of interest in the fluorescence image were implemented. In order to control a robotic instrument holder with the visual information provided by the imaging system, a calibration method based on the 3D localization of surgical instruments was also developed. This multimodal environment was evaluated quantitatively on a test bench and on anatomical specimens. Ultimately this work could be integrated within lightweight, non-rigidly linked robotic architectures using comanipulation robots with more sophisticated controls such as force feedback. Such an "increase" in the surgeon's viewing and action capabilities could help to optimize the management of the patient.
9. Thomas, Vincent. "Modélisation 3D pour la réalité augmentée : une première expérimentation avec un téléphone intelligent". Thesis, Université Laval, 2011. http://www.theses.ulaval.ca/2011/27904/27904.pdf.

Recently, a new genre of software applications has emerged allowing the general public to browse their immediate environment using their smartphone: Mobile Augmented Reality (MAR) applications. The growing popularity of this type of application is triggered by the fast evolution of smartphones. These ergonomic mobile platforms embed several pieces of equipment useful to deploy MAR (i.e. digital camera, GPS receiver, accelerometers, digital compass and now gyroscope). In order to achieve a strong augmentation of the reality in terms of user’s immersion and interactions, a 3D model of the real environment is generally required. The 3D model can be used for three different purposes in these MAR applications: 1) to manage the occlusions between real and virtual objects; 2) to provide accurate camera pose (position/orientation) calculation; 3) to support the augmentation and interactions. However, the availability of such 3D models is limited and therefore preventing MAR application to be used anywhere at anytime. In order to overcome such constraints, this proposed research thesis is aimed at devising a new approach adapted to the specific context of MAR applications and dedicated to the simple and fast production of 3D models. This approach was implemented on the iPhone 3G platform and evaluated according to precision, rapidity, simplicity and efficiency criteria. Results of the evaluation underlined the capacity of the proposed approach to provide, in about 3 minutes, a simple 3D model of a building using smartphone while achieving accuracy of 5 meters and higher.
10. Barberio, Manuel. "Real-time intraoperative quantitative assessment of gastrointestinal tract perfusion using hyperspectral imaging (HSI)". Thesis, Strasbourg, 2019. http://www.theses.fr/2019STRAJ120.

Anastomotic leak (AL) is a severe complication in surgery. Adequate local perfusion is fundamental to promote anastomotic healing, reducing the risk of AL. However, clinical criteria are unreliable to evaluate bowel perfusion. Consequently, a tool allowing to objectively detect intestinal viability intraoperatively is desirable. In this regard, fluorescence angiography (FA) has been explored. In spite of promising results in clinical trials, FA assessment is subjective, hence the efficacy of FA is unclear. Quantitative FA has been previously introduced. However, it is limited by the need of injecting a fluorophore. Hyperspectral imaging (HSI) is a promising optical imaging technique coupling a spectroscope with a photo camera, allowing for a contrast-free, real-time, and quantitative tissue analysis. The intraoperative usability of HSI is limited by the presence of static images. We developed hyperspectral-based enhanced reality (HYPER), to allow for precise intraoperative perfusion assessment. This thesis describes the steps of the development and validation of HYPER

Books on the topic "Imagerie augmentée":

1. Briscoe, Robert Eamon. Superimposed Mental Imagery. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198717881.003.0008.

Human beings have the capacity to ‘augment’ reality by superimposing mental imagery on the visually perceived scene, a capacity that is here referred to as make-perceive. In the first part of this chapter, the author shows that make-perceive enables us to solve certain problems and pursue certain projects more effectively than bottom-up perceiving or top-down visualization alone. The second part addresses the question of whether make-perceive may help to account for the phenomenal presence of occluded or otherwise hidden features of perceived objects. The author argues that phenomenal presence isn’t well explained by the hypothesis that hidden features are represented using projected mental images. In defending this position, he points to important phenomenological and functional differences between the way hidden object features are represented respectively in mental imagery and amodal completion.

Book chapters on the topic "Imagerie augmentée":

1. Sales Barros, Ellton, and Nelson Neto. "Classification Procedure for Motor Imagery EEG Data". In Augmented Cognition: Intelligent Technologies, 201–11. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-91470-1_17.

2. Santhaseelan, Varun, and Vijayan K. Asari. "Moving Object Detection and Tracking in Wide Area Motion Imagery". In Augmented Vision and Reality, 49–70. Berlin, Heidelberg: Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/8612_2012_9.

3. Alam, Mohammad S., and Adel Sakla. "Automatic Target Recognition in Multispectral and Hyperspectral Imagery Via Joint Transform Correlation". In Augmented Vision and Reality, 179–206. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/8612_2012_5.

4. Li, Xiaofei, Lele Xu, Li Yao, and Xiaojie Zhao. "A Novel HCI System Based on Real-Time fMRI Using Motor Imagery Interaction". In Foundations of Augmented Cognition, 703–8. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-39454-6_75.

5. Pérez-Zapata, A. F., A. F. Cardona-Escobar, J. A. Jaramillo-Garzón, and Gloria M. Díaz. "Deep Convolutional Neural Networks and Power Spectral Density Features for Motor Imagery Classification of EEG Signals". In Augmented Cognition: Intelligent Technologies, 158–69. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-91470-1_14.

6. Loureiro, Sandra Maria Correia, Carolina Correia, and João Guerreiro. "The Role of Mental Imagery as Driver to Purchase Intentions in a Virtual Supermarket". In Augmented Reality and Virtual Reality, 17–28. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-68086-2_2.

7. Chen, Mei Lin, Lin Yao, and Ning Jiang. "Music Imagery for Brain-Computer Interface Control". In Augmented Cognition. Enhancing Cognition and Behavior in Complex Human Environments, 293–300. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-58625-0_21.

8. Dhindsa, Kiret, Dean Carcone, and Suzanna Becker. "A Brain-Computer Interface Based on Abstract Visual and Auditory Imagery: Evidence for an Effect of Artistic Training". In Augmented Cognition. Enhancing Cognition and Behavior in Complex Human Environments, 313–32. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-58625-0_23.

9

Qiu, Zhaoyang, Shugeng Chen, Brendan Z. Allison, Jie Jia, Xingyu Wang and Jing Jin. "Differences in Motor Imagery Activity Between the Paretic and Non-paretic Hands in Stroke Patients Using an EEG BCI". In Augmented Cognition. Enhancing Cognition and Behavior in Complex Human Environments, 378–88. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-58625-0_28.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Hubschman, J. P. "Réalité augmentée pour le segment postérieur". In Imagerie en ophtalmologie, 477–85. Elsevier, 2014. http://dx.doi.org/10.1016/b978-2-294-73702-2.00026-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Imagerie augmentée":

1

Ventura, Jonathan and Tobias Hollerer. "Outdoor mobile localization from panoramic imagery". In 2011 IEEE International Symposium on Mixed and Augmented Reality. IEEE, 2011. http://dx.doi.org/10.1109/ismar.2011.6092399.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ventura, Jonathan and Tobias Hollerer. "Outdoor mobile localization from panoramic imagery". In 2011 IEEE International Symposium on Mixed and Augmented Reality. IEEE, 2011. http://dx.doi.org/10.1109/ismar.2011.6162900.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Gao, Zhenzhen, Luciano Nocera and Ulrich Neumann. "Fusing oblique imagery with augmented aerial LiDAR". In the 20th International Conference. New York, New York, USA: ACM Press, 2012. http://dx.doi.org/10.1145/2424321.2424381.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Nguyen, Lam, Francois Koenig and Kelly Sherbondy. "Augmented reality using ultra-wideband radar imagery". In SPIE Defense, Security, and Sensing, edited by Kenneth I. Ranney and Armin W. Doerry. SPIE, 2011. http://dx.doi.org/10.1117/12.883285.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Conover, Damon M., Brittany Beidleman, Ryan McAlinden and Christoph C. Borel-Donohue. "Visualizing UAS-collected imagery using augmented reality". In SPIE Defense + Security, edited by Timothy P. Hanratty and James Llinas. SPIE, 2017. http://dx.doi.org/10.1117/12.2262864.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Meixner, Philipp and Franz Leberl. "Augmented internet maps with property information from aerial imagery". In the 18th SIGSPATIAL International Conference. New York, New York, USA: ACM Press, 2010. http://dx.doi.org/10.1145/1869790.1869848.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Korah, Thommen and Yun-Ta Tsai. "Urban canvas: Unfreezing street-view imagery with semantically compressed LIDAR pointclouds". In 2011 IEEE International Symposium on Mixed and Augmented Reality. IEEE, 2011. http://dx.doi.org/10.1109/ismar.2011.6143897.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Korah, Thommen and Yun-Ta Tsai. "Urban canvas: Unfreezing street-view imagery with semantically compressed LIDAR pointclouds". In 2011 IEEE International Symposium on Mixed and Augmented Reality. IEEE, 2011. http://dx.doi.org/10.1109/ismar.2011.6162912.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Nijholt, Anton. "Augmented Reality: Beyond Interaction". In 13th International Conference on Applied Human Factors and Ergonomics (AHFE 2022). AHFE International, 2022. http://dx.doi.org/10.54941/ahfe1002058.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract
In 1997 Ronald T. Azuma introduced the following definition of Augmented Reality (AR): “Some researchers define AR in a way that requires the use of Head-Mounted Displays (HMDs). To avoid limiting AR to specific technologies, this survey defines AR as systems that have the following three characteristics: 1) Combines real and virtual, 2) Interactive in real-time, 3) Registered in 3-D.” Azuma also mentions that “AR might apply to all senses, not just sight.” [1] This definition has guided AR research ever since. AR researchers have focused on the various ways technology, in particular digital technology (computer-generated imagery, computer vision and world modelling, interaction technology, and AR display technology), could be developed to realize this view of AR. The emphasis has been on addressing the sense of sight, our most dominant sense, when generating and aligning virtual content, although we cannot survive without the others. Azuma and others mention the other senses and assume that this definition also covers content other than computer-generated imagery, perhaps even content other than computer-generated and (spatio-temporally) controlled virtual content. Nevertheless, the definition has some constituents that can be given various interpretations. This makes it workable, but it is useful to discuss how we should distinguish between real and virtual content, what it is that distinguishes real from virtual, and how virtual content can trigger changes in the real world (and the other way around), taking into account that AR is becoming part of ubiquitous computing. That is, rather than looking at AR from the point of view of particular professional, educational, or entertainment applications, we should look at AR as ever-present, embedded in ubiquitous computing (Ubicomp), with its AR devices’ sensors and actuators communicating with the smart environments in which they are embedded.
The focus in this paper is on ‘optical see-through’ (OST) AR and ever-present AR. Ever-present AR will become possible with non-obtrusive AR glasses [2] or contact lenses [3,4]. Usually, interaction is looked at from the point of view of what we see and hear. But we are certainly aware of touch experiences and of exploring objects with active touch. We can also experience scents and flavors, passively but also actively, that is, consciously explore scents or tastes, become aware of them, and ask the environment, not necessarily explicitly since our preferences are known and our intentions can be predicted, to respond in an appropriate way to evoke or continue an interaction.
Interaction in AR and with AR technology requires a new look at interaction. Are we interacting with the AR device, with the environment, or with the environment through the AR device? Part of what we perceive is real, part of what we perceive is superimposed on reality, and part of what we perceive is the interaction between real and virtual reality. How do we interact with this mix of realities? Additionally, our HMD AR provides us with view changes driven by position, head orientation, or gaze changes. We interact with the device with, for example, speech and hand gestures; we interact with the environment with, for example, pose changes; and we interact with the virtual content with interaction modalities that are appropriate for that content: push a virtual block, open a virtual door, or have a conversation with a virtual human that inhabits the AR world.
In addition, we can think of interactions that become possible because technology allows us to access and act upon sensor information that cannot be perceived with our natural perceptual receptors. In a ubiquitous computing environment, our AR device can provide us with a 360-degree view of our surroundings, drones can feed us information from above, infrared sensors know about people and events in the dark, our car receives visual information about not-yet-visible vehicles approaching an intersection [5], sound frequencies beyond the human ear can be made accessible, smell sensors can enhance the human sense of smell, et cetera.
In this paper, we investigate the characteristics of interactions in AR and relate them to regular human-computer interaction characteristics (interacting with tools) [6], interaction with multimedia [7], interaction through behavior [8], implicit interaction [9], embodied interaction [10], fake interaction [11], and interaction based on Gibson’s visual perception theory [12]. This will be done from the point of view of ever-present AR [13] with optical see-through wearable devices. References could not be included because of space limitations.
10

Chabot, Samuel, Jaimie Drozdal, Matthew Peveler, Yalun Zhou, Hui Su and Jonas Braasch. "A Collaborative, Immersive Language Learning Environment Using Augmented Panoramic Imagery". In 2020 6th International Conference of the Immersive Learning Research Network (iLRN). IEEE, 2020. http://dx.doi.org/10.23919/ilrn47897.2020.9155140.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Imagerie augmentée":

1

Mapping the Spatial Distribution of Poverty Using Satellite Imagery in the Philippines. Asian Development Bank, March 2021. http://dx.doi.org/10.22617/spr210076-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract
The “leave no one behind” principle of the 2030 Agenda for Sustainable Development requires appropriate indicators for different segments of a country’s population. This entails detailed, granular data on population groups that extend beyond national trends and averages. The Asian Development Bank, in collaboration with the Philippine Statistics Authority and the World Data Lab, conducted a feasibility study to enhance the granularity, cost-effectiveness, and compilation of high-quality poverty statistics in the Philippines. This report documents the results of the study, which capitalized on satellite imagery, geospatial data, and powerful machine-learning algorithms to augment conventional data collection and sample survey techniques.
2

Mapping the Spatial Distribution of Poverty Using Satellite Imagery in the Philippines. Asian Development Bank, March 2021. http://dx.doi.org/10.22617/tcs210076-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract
The “leave no one behind” principle of the 2030 Agenda for Sustainable Development requires appropriate indicators for different segments of a country’s population. This entails detailed, granular data on population groups that extend beyond national trends and averages. The Asian Development Bank, in collaboration with the Philippine Statistics Authority and the World Data Lab, conducted a feasibility study to enhance the granularity, cost-effectiveness, and compilation of high-quality poverty statistics in the Philippines. This report documents the results of the study, which capitalized on satellite imagery, geospatial data, and powerful machine-learning algorithms to augment conventional data collection and sample survey techniques.
3

A Guidebook on Mapping Poverty through Data Integration and Artificial Intelligence. Asian Development Bank, May 2021. http://dx.doi.org/10.22617/spr210131-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract
The “leave no one behind” principle of the 2030 Agenda for Sustainable Development requires appropriate indicators to be estimated for different segments of a country’s population. The Asian Development Bank, in collaboration with the Philippine Statistics Authority, the National Statistical Office of Thailand, and the World Data Lab, conducted a feasibility study that aimed to enhance the granularity, cost-effectiveness, and compilation of high-quality poverty statistics in the Philippines and Thailand. This accompanying guide to the Key Indicators for Asia and the Pacific 2020 special supplement is based on the study, capitalizing on satellite imagery, geospatial data, and powerful machine-learning algorithms to augment conventional data collection and sample survey techniques.
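As a purely illustrative aside, not taken from any of the reports listed above, the modeling idea these abstracts describe, using satellite-derived covariates to estimate small-area poverty where survey coverage is sparse, can be sketched in a few lines of Python. Everything below is hypothetical: the feature names, the synthetic data, and the choice of a gradient-boosted regressor are placeholders for illustration, not the Asian Development Bank studies' actual pipeline.

# Minimal, hypothetical sketch: predict small-area poverty rates from
# satellite-derived features. Synthetic data stands in for real imagery products.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_areas = 500  # e.g., municipalities with survey-based poverty estimates

# Hypothetical per-area covariates derived from satellite/geospatial sources.
night_lights = rng.gamma(shape=2.0, scale=5.0, size=n_areas)   # mean radiance
built_up_share = rng.beta(2.0, 5.0, size=n_areas)              # built-up fraction
road_density = rng.gamma(shape=1.5, scale=2.0, size=n_areas)   # km of road per km^2
X = np.column_stack([night_lights, built_up_share, road_density])

# Synthetic "survey" poverty rates: lower where development proxies are higher.
y = np.clip(
    0.6 - 0.02 * night_lights - 0.3 * built_up_share - 0.01 * road_density
    + rng.normal(0.0, 0.05, n_areas),
    0.0, 1.0,
)

# Hold out some areas to check how well imagery-based predictions generalize.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
print(f"MAE on held-out areas: {mean_absolute_error(y_test, model.predict(X_test)):.3f}")

In practice, a model of this kind would be trained against household-survey benchmarks and validated area by area before any estimate is published, which is precisely the feasibility question the reports examine.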

Go to the bibliography