Journal articles on the topic "Imagerie augmentée"

Consult the top 50 journal articles for your research on the topic "Imagerie augmentée".

1

Meier, Walter N., Michael L. Van Woert, and Cheryl Bertoia. "Evaluation of operational SSM/I ice-concentration algorithms". Annals of Glaciology 33 (2001): 102–8. http://dx.doi.org/10.3189/172756401781818509.

Abstract: The United States National Ice Center (NIC) provides weekly ice analyses of the Arctic and Antarctic using information from ice reconnaissance, ship reports and high-resolution satellite imagery. In cloud-covered areas and regions lacking imagery, the higher-resolution sources are augmented by ice concentrations derived from Defense Meteorological Satellite Program (DMSP) Special Sensor Microwave/Imager (SSM/I) passive-microwave imagery. However, the SSM/I-derived ice concentrations are limited by low resolution and uncertainties in thin-ice regions. Ongoing research at NIC is attempting to improve the utility of these SSM/I products for operational sea-ice analyses. The refinements of operational algorithms may also aid future scientific studies. Here we discuss an evaluation of the standard operational ice-concentration algorithm, Cal/Val, with a possible alternative, a modified NASA Team algorithm. The modified algorithm compares favorably with Cal/Val and is a substantial improvement over the standard NASA Team algorithm in thin-ice regions that are of particular interest to operational activities.
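As context for the comparison above, the NASA Team family of algorithms derives ice concentration from ratios of SSM/I brightness temperatures. The sketch below computes the two standard ratios (polarization ratio and spectral gradient ratio) from the 19- and 37-GHz channels; it is a minimal illustration only, not the modified algorithm evaluated in the paper, and the tie-point coefficients needed to convert the ratios into concentrations are deliberately omitted.

```python
import numpy as np

def nasa_team_ratios(tb19v, tb19h, tb37v):
    """Polarization ratio PR(19) and spectral gradient ratio GR(37V/19V)
    used by NASA Team-style sea-ice algorithms (brightness temperatures in K)."""
    pr19 = (tb19v - tb19h) / (tb19v + tb19h)
    gr3719 = (tb37v - tb19v) / (tb37v + tb19v)
    return pr19, gr3719

# Illustrative values only: an open-water-like and a consolidated-ice-like signature.
print(nasa_team_ratios(np.array([185.0, 250.0]),
                       np.array([110.0, 230.0]),
                       np.array([200.0, 225.0])))
```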
2

Lin, Xin, and Arthur Y. Hou. "Evaluation of Coincident Passive Microwave Rainfall Estimates Using TRMM PR and Ground Measurements as References". Journal of Applied Meteorology and Climatology 47, no. 12 (December 1, 2008): 3170–87. http://dx.doi.org/10.1175/2008jamc1893.1.

Abstract This study compares instantaneous rainfall estimates provided by the current generation of retrieval algorithms for passive microwave sensors using retrievals from the Tropical Rainfall Measuring Mission (TRMM) precipitation radar (PR) and merged surface radar and gauge measurements over the continental United States as references. The goal is to quantitatively assess surface rain retrievals from cross-track scanning microwave humidity sounders relative to those from conically scanning microwave imagers. The passive microwave sensors included in the study are three operational sounders—the Advanced Microwave Sounding Unit-B (AMSU-B) instruments on the NOAA-15, -16, and -17 satellites—and five imagers: the TRMM Microwave Imager (TMI), the Advanced Microwave Scanning Radiometer for the Earth Observing System (AMSR-E) instrument on the Aqua satellite, and the Special Sensor Microwave Imager (SSM/I) instruments on the Defense Meteorological Satellite Program (DMSP) F-13, -14, and -15 satellites. The comparisons with PR data are based on “coincident” observations, defined as instantaneous retrievals (spatially averaged to 0.25° latitude and 0.25° longitude) within a 10-min interval collected over a 20-month period from January 2005 to August 2006. Statistics of departures of these coincident retrievals from reference measurements as given by the TRMM PR or ground radar and gauges are computed as a function of rain intensity over land and oceans. Results show that over land AMSU-B sounder rain retrievals are comparable in quality to those from conically scanning radiometers for instantaneous rain rates between 1.0 and 10.0 mm h−1. This result holds true for comparisons using either TRMM PR estimates over tropical land areas or merged ground radar/gauge measurements over the continental United States as the reference. Over tropical oceans, the standard deviation errors are comparable between imager and sounder retrievals for rain intensities above 5 mm h−1, below which the imagers are noticeably better than the sounders; systematic biases are small for both imagers and sounders. The results of this study suggest that in planning future satellite missions for global precipitation measurement, cross-track scanning microwave humidity sounders on operational satellites may be used to augment conically scanning microwave radiometers to provide improved temporal sampling over land without degradation in the quality of precipitation estimates.
3

Chancia, Robert, Jan van Aardt, Sarah Pethybridge, Daniel Cross, and John Henderson. "Predicting Table Beet Root Yield with Multispectral UAS Imagery". Remote Sensing 13, no. 11 (June 2, 2021): 2180. http://dx.doi.org/10.3390/rs13112180.

Timely and accurate monitoring has the potential to streamline crop management, harvest planning, and processing in the growing table beet industry of New York state. We used an unmanned aerial system (UAS) combined with a multispectral imager to monitor table beet (Beta vulgaris ssp. vulgaris) canopies in New York during the 2018 and 2019 growing seasons. We assessed the optimal pairing of a reflectance band or vegetation index with canopy area to predict table beet yield components of small sample plots using leave-one-out cross-validation. The most promising models were for table beet root count and mass using imagery taken during emergence and canopy closure, respectively. We created augmented plots, composed of random combinations of the study plots, to further exploit the importance of early canopy growth area. We achieved an R² = 0.70 and root mean squared error (RMSE) of 84 roots (~24%) for root count, using 2018 emergence imagery. The same model resulted in an RMSE of 127 roots (~35%) when tested on the unseen 2019 data. Harvested root mass was best modeled with canopy closing imagery, with an R² = 0.89 and RMSE = 6700 kg/ha using 2018 data. We applied the model to the 2019 full-field imagery and found an average yield of 41,000 kg/ha (~40,000 kg/ha average for upstate New York). This study demonstrates the potential for table beet yield models using a combination of radiometric and canopy structure data obtained at early growth stages. Additional imagery of these early growth stages is vital to develop a robust and generalized model of table beet root yield that can handle imagery captured at slightly different growth stages between seasons.
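The abstract pairs a spectral predictor with canopy area and evaluates small-plot yield models with leave-one-out cross-validation. A minimal sketch of that evaluation scheme, assuming a simple two-predictor linear model and entirely synthetic plot data (not the authors' features or measurements), could look like this:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
n_plots = 30
X = np.column_stack([
    rng.uniform(0.2, 0.9, n_plots),  # hypothetical vegetation index per plot
    rng.uniform(1.0, 6.0, n_plots),  # hypothetical canopy area per plot (m^2)
])
y = 150 * X[:, 0] + 40 * X[:, 1] + rng.normal(0, 15, n_plots)  # synthetic root count

# Leave-one-out cross-validation: each plot is predicted by a model fit on all others.
pred = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
print(f"R2 = {r2_score(y, pred):.2f}, RMSE = {np.sqrt(mean_squared_error(y, pred)):.1f}")
```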
4

Logaldo, Mara. "Augmented Bodies: Functional and Rhetorical Uses of Augmented Reality in Fashion". Pólemos 10, no. 1 (April 1, 2016): 125–41. http://dx.doi.org/10.1515/pol-2016-0007.

Abstract Augmented Reality (AR) is increasingly changing our perception of the world. The spreading of Quick Response (QR), Radio Frequency (RFID) and AR tags has provided ways to enrich physical items with digital information. By a process of alignment the codes can be read by the cameras contained in handheld devices or special equipment and add computer-generated contents – including 3-D imagery – to real objects in real time. As a result, we feel we belong to a multi-layered dimension, to a mixed environment where the real and the virtual partly overlap. Fashion has been among the most responsive domains to this new technology. Applications of AR in the field have already been numerous and diverse: from Magic Mirrors in department stores to 3-D features in fashion magazines; from augmented fashion shows, where models are covered with tags or transformed into walking holograms, to advertisements consisting exclusively of more or less magnified QR codes. Bodies are thus at the same time augmented and encrypted, offered to the eye of the digital camera to be transfigured and turned into a secret language which, among other functions, can also have that of becoming a powerful tool to bypass censorship.
5

Mihara, Masahito, Hiroaki Fujimoto, Noriaki Hattori, Hironori Otomune, Yuta Kajiyama, Kuni Konaka, Yoshiyuki Watanabe, et al. "Effect of Neurofeedback Facilitation on Poststroke Gait and Balance Recovery". Neurology 96, no. 21 (April 20, 2021): e2587–e2598. http://dx.doi.org/10.1212/wnl.0000000000011989.

Objective: To test the hypothesis that supplementary motor area (SMA) facilitation with functional near-infrared spectroscopy–mediated neurofeedback (fNIRS-NFB) augments poststroke gait and balance recovery, we conducted a 2-center, double-blind, randomized controlled trial involving 54 Japanese patients using the 3-meter Timed Up and Go (TUG) test. Methods: Patients with subcortical stroke-induced mild to moderate gait disturbance more than 12 weeks from onset underwent 6 sessions of SMA neurofeedback facilitation during gait- and balance-related motor imagery using fNIRS-NFB. Participants were randomly allocated to intervention (28 patients) or placebo (sham: 26 patients). In the intervention group, the fNIRS signal contained participants' cortical activation information. The primary outcome was TUG improvement 4 weeks postintervention. Results: The intervention group showed greater improvement in the TUG test (12.84 ± 15.07 seconds, 95% confidence interval 7.00–18.68) than the sham group (5.51 ± 7.64 seconds, 95% confidence interval 2.43–8.60; group difference 7.33 seconds, 95% CI 0.83–13.83; p = 0.028), even after adjusting for covariates (group × time interaction; F(1.23, 61.69) = 4.50, p = 0.030, partial η² = 0.083). Only the intervention group showed significantly increased imagery-related SMA activation and enhancement of resting-state connectivity between SMA and ventrolateral premotor area. Adverse effects associated with fNIRS-mediated neurofeedback intervention were absent. Conclusion: SMA facilitation during motor imagery using fNIRS neurofeedback may augment poststroke gait and balance recovery by modulating the SMA and its related network. Classification of Evidence: This study provides Class III evidence that for patients with gait disturbance from subcortical stroke, SMA neurofeedback facilitation improves TUG time (UMIN000010723 at UMIN-CTR; umin.ac.jp/english/).
6

Neumann, Ulrich, Suya You, Jinhui Hu, Bolan Jiang, and Ismail Oner Sebe. "Visualizing Reality in an Augmented Virtual Environment". Presence: Teleoperators and Virtual Environments 13, no. 2 (April 2004): 222–33. http://dx.doi.org/10.1162/1054746041382366.

An Augmented Virtual Environment (AVE) fuses dynamic imagery with 3D models. An AVE provides a unique approach to visualizing spatial relationships and temporal events that occur in real-world environments. A geometric scene model provides a 3D substrate for the visualization of multiple image sequences gathered by fixed or moving image sensors. The resulting visualization is that of a world-in-miniature that depicts the corresponding real-world scene and dynamic activities. This paper describes the core elements of an AVE system, including static and dynamic model construction, sensor tracking, and image projection for 3D visualization.
7

Gomes, José Duarte Cardoso, Mauro Jorge Guerreiro Figueiredo, Lúcia da Graça Cruz Domingues Amante, and Cristina Maria Cardoso Gomes. "Augmented Reality in Informal Learning Environments". International Journal of Creative Interfaces and Computer Graphics 7, no. 2 (July 2016): 39–55. http://dx.doi.org/10.4018/ijcicg.2016070104.

Augmented Reality (AR) allows computer-generated imagery information to be overlaid onto a live real-world environment in real time. Technological advances in mobile computing devices (MCD) such as smartphones and tablets (internet access, built-in cameras and GPS) have made a greater number of AR applications available. This paper presents the Augmented Reality Musical Gallery (ARMG) exhibition, enhanced by AR. ARMG focuses on twentieth-century music history and is aimed at students from the 2nd Cycle of basic education in Portuguese public schools. In this paper, we introduce AR technology and address topics such as constructivism, art education, student motivation, and informal learning environments. We conclude by presenting the first part of the ongoing research conducted among a sample group of students taking part in the experiment in an educational context.
8

Gawehn, Matthijs, Rafael Almar, Erwin W. J. Bergsma, Sierd de Vries, and Stefan Aarninkhof. "Depth Inversion from Wave Frequencies in Temporally Augmented Satellite Video". Remote Sensing 14, no. 8 (April 12, 2022): 1847. http://dx.doi.org/10.3390/rs14081847.

Optical satellite images of the nearshore water surface offer the possibility to invert water depths and thereby constitute the underlying bathymetry. Depth inversion techniques based on surface wave patterns can handle clear and turbid waters in a variety of global coastal environments. Common depth inversion algorithms require video from shore-based camera stations, UAVs or Xband-radars with a typical duration of minutes and at framerates of 1–2 fps to find relevant wave frequencies. These requirements are often not met by satellite imagery. In this paper, satellite imagery is augmented from a sequence of 12 images of Capbreton, France, collected over a period of ∼1.5 min at a framerate of 1/8 fps by the Pleiades satellite, to a pseudo-video with a framerate of 1 fps. For this purpose, a recently developed method is used, which considers spatial pathways of propagating waves for temporal video reconstruction. The augmented video is subsequently processed with a frequency-based depth inversion algorithm that works largely unsupervised and is openly available. The resulting depth estimates approximate ground truth with an overall depth bias of −0.9 m and an interquartile range of depth errors of 5.1 m. The acquired accuracy is sufficiently high to correctly predict wave heights over the shoreface with a numerical wave model and to find hotspots where wave refraction leads to focusing of wave energy that has potential implications for coastal hazard assessments. A more detailed depth inversion analysis of the nearshore region furthermore demonstrates the possibility to detect sandbars. The combination of image augmentation with a frequency-based depth inversion method shows potential for broad application to temporally sparse satellite imagery and thereby aids in the effort towards globally available coastal bathymetry data.
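The frequency-based depth inversion referred to above rests on the linear dispersion relation for surface gravity waves, ω² = g·k·tanh(k·h). A minimal sketch of the depth solve is given below, assuming the wave angular frequency ω and wavenumber k have already been extracted from the (pseudo-)video; it is illustrative only and not the openly available algorithm used in the paper.

```python
import numpy as np

G = 9.81  # gravitational acceleration (m/s^2)

def depth_from_dispersion(omega, k):
    """Invert the linear dispersion relation omega^2 = g*k*tanh(k*h) for depth h.
    Only meaningful where omega^2 < g*k (i.e., the wave still feels the bottom)."""
    ratio = omega**2 / (G * k)
    if ratio >= 1.0:
        return np.inf  # deep-water limit: depth no longer constrains the wave
    return np.arctanh(ratio) / k

# Example: a 10 s wave (omega = 2*pi/10) observed with a 90 m wavelength (k = 2*pi/90).
print(f"{depth_from_dispersion(2 * np.pi / 10, 2 * np.pi / 90):.1f} m")
```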
9

Kuny, S., H. Hammer, and A. Thiele. "CNN BASED VEHICLE TRACK DETECTION IN COHERENT SAR IMAGERY: AN ANALYSIS OF DATA AUGMENTATION". International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B1-2022 (May 30, 2022): 93–98. http://dx.doi.org/10.5194/isprs-archives-xliii-b1-2022-93-2022.

Abstract. The coherence image as a product of a coherent SAR image pair can expose even subtle changes in the surface of a scene, such as vehicle tracks. For machine learning models, the large amount of required training data is often a crucial issue. A general solution for this is data augmentation. Standard techniques, however, were predominantly developed for optical imagery and thus do not account for SAR-specific characteristics, making them only partially applicable to SAR imagery. In this paper, several data augmentation techniques are investigated for their performance impact on a CNN-based vehicle track detection, with the aim of generating an optimized data set. Quantitative results of the performance comparison are shown. Furthermore, the performance of the fully-augmented data set is put into relation to training with a large non-augmented data set.
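As a concrete illustration of the issue raised above, the sketch below contrasts a geometric augmentation (flipping), which is meaningful for both optical and SAR data, with additive Gaussian noise, which ignores the multiplicative speckle statistics of coherent SAR; a multiplicative gamma-distributed perturbation is closer to the SAR imaging model. The augmentation set actually tested in the paper is not reproduced here, so treat this as an assumed example only.

```python
import numpy as np

rng = np.random.default_rng(42)
patch = rng.gamma(shape=4.0, scale=0.25, size=(64, 64))  # stand-in for a coherence/SAR patch

def augment_flip(img):
    """Geometric augmentation: horizontal flip (label-preserving for track patches)."""
    return img[:, ::-1]

def augment_additive(img, sigma=0.05):
    """Optical-style augmentation: additive Gaussian noise."""
    return img + rng.normal(0.0, sigma, img.shape)

def augment_speckle(img, looks=4):
    """SAR-style augmentation: multiplicative speckle drawn from a gamma distribution."""
    return img * rng.gamma(shape=looks, scale=1.0 / looks, size=img.shape)

augmented = [augment_flip(patch), augment_additive(patch), augment_speckle(patch)]
print([a.shape for a in augmented])
```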
10

Bernardes, Sergio, Margueritte Madden, Ashurst Walker, Andrew Knight, Nicholas Neel, Akshay Mendki, Dhaval Bhanderi, Andrew Guest, Shannon Healy, and Thomas Jordan. "Emerging Geospatial Technologies in Environmental Research, Education, and Outreach". Geosfera Indonesia 5, no. 3 (December 30, 2020): 352. http://dx.doi.org/10.19184/geosi.v5i3.20719.

Drawing on the historical importance of visual interpretation for image understanding and knowledge discovery, emerging technologies in geovisualization are incorporated into research, education and outreach at the Center for Geospatial Research (CGR) in the Department of Geography at the University of Georgia (UGA), USA. This study aimed to develop the 3D Immersion and Geovisualization (3DIG) system consisting of uncrewed aerial systems (UAS) for data acquisition, augmented and virtual reality headsets and mobile devices, an augmented reality digital sandbox, and a video wall. We integrated data products from the UAS imagery, including digital image mosaics and 3D models, with readily available gaming-engine software to create immersive augmented and virtual reality visualizations. The use of 3DIG in research is demonstrated in a case study documenting the seasonal growth of vegetables in small gardens with a time series of 3D crop models generated from UAS imagery and Structure from Motion photogrammetry. Demonstrations of 3DIG in geography and geology courses, as well as public events, also indicate the benefits of emerging geospatial technologies for creating active learning environments and fostering participatory community engagement. Keywords: Environmental Education; Geovisualization; Augmented Reality; Virtual Reality; UAS; Photogrammetry.
11

Shih, Naai-Jung, and Yu-Chen Wu. "AR-Based 3D Virtual Reconstruction of Brick Details". Remote Sensing 14, no. 3 (February 5, 2022): 748. http://dx.doi.org/10.3390/rs14030748.

Building heritage contributes to the historical context and industrial history of a city. Brick warehouses, which comprise a systematic interface between components, demand an interactive manipulation of inspected parts to interpret their construction complexity. The documentation of brick details in augmented reality (AR) can be challenging when the relative location needs to be defined in 3D. This study aimed to compare brick details in AR, and to reconstruct the interacted result in the correct relative location. We applied photogrammetry modeling and smartphone AR for the first and secondary 3D reconstruction of brick warehouse details and compared the results. In total, 146 3D AR database models were created. The AR-based virtual reconstruction process applied multiple imagery resources from video conferencing and broadcast of models on the Augment® platform through a smartphone. Tests verified the virtual reconstruction in AR, and concluded the deviation between the final secondary reconstructed 3D model and the first reconstructed model had a standard deviation of less than 1 cm. AR enabled the study and documentation of cross-referenced results in comparison with the simplified reconstruction process, with structural detail and visual detail suitable for 3D color prints.
12

Fang, Tao, Zuoting Song, Gege Zhan, Xueze Zhang, Wei Mu, Pengchao Wang, Lihua Zhang, and Xiaoyang Kang. "Decoding motor imagery tasks using ESI and hybrid feature CNN". Journal of Neural Engineering 19, no. 1 (February 1, 2022): 016022. http://dx.doi.org/10.1088/1741-2552/ac4ed0.

Abstract Objective. Brain–computer interface (BCI) based on motor imagery electroencephalogram (MI-EEG) can be useful in a natural interaction system. In this paper, a new framework is proposed to solve the MI-EEG binary classification problem. Approach. Electrophysiological source imaging (ESI) technology is used to mitigate the influence of the volume conduction effect and improve spatial resolution. Continuous wavelet transform and the best time of interest (TOI) are combined to extract the optimal discriminant spatial-frequency features. Finally, a convolutional neural network with seven convolution layers is used to classify the features. In addition, we also validated several new data augmentation methods to address the problem of small data sets and reduce network over-fitting. Main results. The model achieved an average classification accuracy of 93.2% and 95.4% on the BCI Competition III IVa and high-gamma data sets, which is better than most of the published advanced algorithms. By selecting the best TOI for each subject, the classification accuracy rate increased by about 2%. The effects of four data augmentation methods on the classification results were also verified. Among them, the noise addition and overlap methods are better than the other two, and the classification accuracy is improved by at least 4%. On the contrary, the rotation and flip data augmentation methods reduced the classification accuracy. Significance. Decoding MI tasks can benefit from combining ESI technology and data augmentation, which are used to address the low spatial resolution and small sample sizes of EEG signals, respectively. Based on the results, the proposed model has higher accuracy and application potential in the task of MI-EEG binary classification.
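Two of the augmentation strategies named above, noise addition and overlapping windows, are easy to sketch for epoched EEG. The snippet below applies both to a synthetic (trials × channels × samples) array; it is an assumed minimal illustration, not the authors' implementation or parameter choices.

```python
import numpy as np

rng = np.random.default_rng(0)
epochs = rng.standard_normal((10, 16, 500))  # 10 trials, 16 channels, 500 samples

def augment_noise(x, snr_scale=0.1):
    """Add Gaussian noise scaled to each trial's standard deviation."""
    noise = rng.standard_normal(x.shape) * x.std(axis=(1, 2), keepdims=True) * snr_scale
    return x + noise

def augment_overlap(x, win=250, step=125):
    """Cut each trial into overlapping windows; every window keeps the trial's label."""
    starts = range(0, x.shape[-1] - win + 1, step)
    return np.concatenate([x[..., s:s + win] for s in starts], axis=0)

print(augment_noise(epochs).shape)    # (10, 16, 500)
print(augment_overlap(epochs).shape)  # (30, 16, 250)
```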
13

Bergsma, Erwin W. J., Rafael Almar, and Philippe Maisongrande. "Radon-Augmented Sentinel-2 Satellite Imagery to Derive Wave-Patterns and Regional Bathymetry". Remote Sensing 11, no. 16 (August 16, 2019): 1918. http://dx.doi.org/10.3390/rs11161918.

Climatological changes occur globally but have local impacts. Increased storminess, sea level rise and more powerful waves are expected to batter the coastal zone more often and more intensely. To understand climate change impacts, regional bathymetry information is paramount. A major issue is that the bathymetries are often non-existent or, if they do exist, outdated. This sparsity can be overcome by space-borne satellite techniques to derive bathymetry. Sentinel-2 optical imagery is collected continuously and has a revisit time of around a few days depending on the orbital position around the world. In this work, Sentinel-2 imagery derived wave patterns are extracted using a localized Radon transform. A discrete fast-Fourier transform (DFT) procedure per direction in Radon space (sinogram) is then applied to derive wave spectra. The Sentinel-2 time lag between detector bands is employed to compute the spectral wave-phase shift and depth using the linear dispersion relation for gravity waves. With this novel technique, regional bathymetries are derived at the test site of Capbreton, France, with a root mean square (RMS) error of 2.58 m and a correlation coefficient of 0.82 when compared to the survey for depths up to 30 m. With the proposed method, the 10 m Sentinel-2 resolution is sufficient to adequately estimate bathymetries for a wave period of 6.5 s or greater. For shorter periods, the pixel resolution does not allow a stable celerity to be detected. In addition to the wave-signature enhancement, the capability of the Radon transform to augment Sentinel-2 20 m resolution imagery to 10 m is demonstrated, increasing the number of suitable bands for the depth inversion.
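The "Radon transform plus per-direction DFT" step described above can be sketched with off-the-shelf tools: each column of the sinogram is the image projected along one direction, and its Fourier transform gives a wavenumber spectrum for that direction. The snippet below uses a synthetic monochromatic wave field rather than Sentinel-2 data and stops at the spectral step; it is only an illustration of the idea, not the paper's full band-lag depth-inversion chain.

```python
import numpy as np
from skimage.transform import radon

# Synthetic wave field: 128 x 128 pixels at 10 m resolution, 100 m wavelength,
# propagating 30 degrees from the x-axis.
pix = 10.0
x, y = np.meshgrid(np.arange(128) * pix, np.arange(128) * pix)
k = 2 * np.pi / 100.0
img = np.sin(k * (x * np.cos(np.radians(30)) + y * np.sin(np.radians(30))))

angles = np.arange(0, 180, 2)                    # projection directions in degrees
sinogram = radon(img, theta=angles, circle=False)  # one column per direction

spectra = np.abs(np.fft.rfft(sinogram, axis=0)) ** 2   # wavenumber spectrum per direction
freqs = np.fft.rfftfreq(sinogram.shape[0], d=pix)      # cycles per metre along the projection

i_freq, i_dir = np.unravel_index(np.argmax(spectra[1:, :]), spectra[1:, :].shape)
print(f"dominant wavelength ~{1 / freqs[i_freq + 1]:.0f} m in the {angles[i_dir]} deg projection")
```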
14

Giannopulu, I., G. Brotto, T. J. Lee, A. Frangos, and D. To. "Synchronised neural signature of creative mental imagery in reality and augmented reality". Heliyon 8, no. 3 (March 2022): e09017. http://dx.doi.org/10.1016/j.heliyon.2022.e09017.

15

Gandolfi, Enrico, Richard E. Ferdig, David Carlyn, Annette Kratcoski, Jason Dunfee, David Hassler, James Blank, Chris Lenart, and Robert Clements. "GLARE". International Journal of Virtual and Augmented Reality 5, no. 1 (January 2021): 1–19. http://dx.doi.org/10.4018/ijvar.290043.

Augmented reality (AR) shows potential for positively impacting learning about cultural heritage. However, current AR tools do not allow users (e.g., teachers, educators) to easily create their own experiences and lessons; there is a significant skill barrier for producing augmented content. In order to address this problem, we created GLARE (GeoLocated Augmented Reality Editor), an open-source and extensible AR platform hosted on GitHub that overlays imagery on a live video feed using a hotspot and walking-path design. The platform is designed to allow users to create tours by simply adding a media list and associated GPS coordinates. The underlying software architecture uses basic HTML, scripting, and the ThreeJS and Google application programming interfaces. This article describes the framework and then presents a case study of the system being used to create an augmented reality tour based on the events of May 4th, 1970, at Kent State University.
16

Conner, Thomas. "Pepper's Ghost and the augmented reality of modernity". Journal of Science & Popular Culture 3, no. 1 (March 1, 2020): 57–79. http://dx.doi.org/10.1386/jspc_00012_1.

Abstract The emergence of augmented-reality (AR) displays has inspired scholarship examining the social effects and communicative impacts of these visual technologies. But the broader concept of reality augmentation ‐ of overlaying information onto everyday experience ‐ has been likened to the imposition of social discourses and ideologies. This article examines the nineteenth-century stage illusion Pepper's Ghost as an early AR media system in service to the particular ideological mission of the Royal Polytechnic Institution in London. Despite its spectral imagery and historical parallels to spiritualism, Pepper's Ghost was instrumental in servicing the Polytechnic's goals of promoting rational modernity and scientific progress, which it accomplished by mediating the epistemic divide between superstition and science and blending Polytechnic ideology with the phenomenological experience of the screened image. In this article, I make visible the ideological function of two apparatuses: the Pepper's Ghost illusion as a media system, and the social institution of the Polytechnic itself. In the end, I situate the current revival of Pepper's Ghost stagings as a twenty-first-century phenomenon amid new restagings of Pepper's Ghost as performing 'holograms'.
17

Bikos, Dan, John F. Weaver, and Jeff Braun. "The Role of GOES Satellite Imagery in Tracking Low-Level Moisture". Weather and Forecasting 21, no. 2 (April 1, 2006): 232–41. http://dx.doi.org/10.1175/waf911.1.

Abstract This note provides examples of how geostationary satellite data can be applied to augment other data sources in tracking warm, moist air masses as they move northward from the Gulf of Mexico. These so-called returning air masses are often a key ingredient in bringing about severe weather outbreaks in the central and southeastern United States. The newer NOAA–GOES imagery provides high spatial and temporal resolution. Together, surface observations, upper-air soundings, and high-resolution satellite imagery provide a comprehensive picture of the returning moist air mass.
18

Yoo, Jungmin. "Effects of Imagery-evoking Augmented Reality on Experiential Value in Mobile Shopping Apps". E-Business Studies 19, no. 6 (December 31, 2018): 159–73. http://dx.doi.org/10.20462/tebs.2018.12.19.6.159.

19

Chakravarthula, Praneeth, David Dunn, Kaan Aksit, and Henry Fuchs. "FocusAR: Auto-focus Augmented Reality Eyeglasses for both Real World and Virtual Imagery". IEEE Transactions on Visualization and Computer Graphics 24, no. 11 (November 2018): 2906–16. http://dx.doi.org/10.1109/tvcg.2018.2868532.

20

Tsai, Fuan, and Ming‐Jhong Chou. "Texture augmented analysis of high resolution satellite imagery in detecting invasive plant species". Journal of the Chinese Institute of Engineers 29, no. 4 (June 2006): 581–92. http://dx.doi.org/10.1080/02533839.2006.9671155.

21

D'Angiulli, Amedeo, and Adam Reeves. "Probing Vividness Increment through Imagery Interruption". Imagination, Cognition and Personality 23, no. 1 (September 2003): 63–78. http://dx.doi.org/10.2190/fd86-rwdh-flx2-ulrg.

The purpose of the present study was to trace how visual experience changes over time during mental image generation, by using an interruption paradigm. In one experiment, participants were asked to read the verbal descriptions of eight common objects and imagine these objects. Changes in the quality of the images evoked by the eight stimuli were probed by interrupting the visual mental image generation process at various times from 0 to 1.7 s, and asking participants to rate the vividness of their image at the time of interruption. We found that vividness increased as the time allowed for image generation was augmented. This relationship was consistently detected in half of the participants and for all stimuli. The present findings support the implicit assumption of some current imagery models positing that mental images “improve” over time, and reject the alternative that images are generated in full detail before becoming accessible to consciousness. However, the “incremental” view is unsatisfactory for imagery models which make no (or not enough) room for individual differences.
22

Hamzah, Muhammad Luthfi, Ambiyar Ambiyar, Fahmi Rizal, Wakhinudin Simatupang, Dedy Irfan, and Refdinal Refdinal. "Development of Augmented Reality Application for Learning Computer Network Device". International Journal of Interactive Mobile Technologies (iJIM) 15, no. 12 (June 18, 2021): 47. http://dx.doi.org/10.3991/ijim.v15i12.21993.

Applied augmented reality works by detecting images, normally called markers, using a smartphone camera that recognizes these preplaced markers. Augmented reality has seen wide application in various fields, one of them being education, where it is utilized to make the learning process more engaging and attractive. This research starts from the problem that learning about computer network devices is currently still delivered conventionally. It therefore proposes a solution by developing learning media using augmented reality (AR) technology, which combines two-dimensional or three-dimensional virtual objects with a real environment and then projects these virtual objects in real time. The purpose of this research is to build AR-based applications for learning computer network devices in order to increase understanding and to generate motivation and student interest. The methodology used in this research consists of an Envisioning Phase (problem identification), Planning Phase (planning), Developing Phase (design), Stabilizing Phase (testing), and Deploying Phase (implementation). The study uses 31 students as a sample, and the data were analyzed using the System Usability Scale (SUS). The usability evaluation of the 31 respondents yielded a SUS score of 78.5, from which it can be concluded that the students accepted this AR application.
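For readers unfamiliar with the System Usability Scale used above, the score is computed from ten 1–5 Likert items: odd-numbered (positively worded) items contribute (response − 1), even-numbered items contribute (5 − response), and the sum is multiplied by 2.5 to give a 0–100 score. A short worked sketch with a made-up response set (not the study's data):

```python
def sus_score(responses):
    """System Usability Scale score from ten 1-5 responses (item 1 first)."""
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    contrib = [(r - 1) if i % 2 == 0 else (5 - r)  # items 1,3,5,7,9 vs 2,4,6,8,10
               for i, r in enumerate(responses)]
    return 2.5 * sum(contrib)

# One hypothetical respondent; averaging such scores over 31 respondents
# would give a study-level figure like the 78.5 reported in the paper.
print(sus_score([4, 2, 4, 1, 5, 2, 4, 2, 4, 2]))  # 80.0
```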
23

Ferguson, M. C., R. P. Angliss, A. Kennedy, B. Lynch, A. Willoughby, V. Helker, A. A. Brower, and J. T. Clarke. "Performance of manned and unmanned aerial surveys to collect visual data and imagery for estimating arctic cetacean density and associated uncertainty". Journal of Unmanned Vehicle Systems 6, no. 3 (September 1, 2018): 128–54. http://dx.doi.org/10.1139/juvs-2018-0002.

Manned aerial surveys have been used successfully for decades to collect data to infer cetacean distribution, density (number of whales/km2), and abundance. Unmanned aircraft systems (UAS) have potential to augment or replace some manned aerial surveys for cetaceans. We conducted a three-way comparison among visual observations made by marine mammal observers aboard a Turbo Commander aircraft; imagery autonomously collected by a Nikon D810 camera system mounted to a belly port on the Turbo Commander; and imagery collected by a similar camera system on a remotely controlled ScanEagle® UAS operated by the US Navy. Bowhead whale density estimates derived from the marine mammal observer data were higher than those from the Turbo Commander imagery; comparisons to the UAS imagery depended on survey sector and analytical method. Beluga density estimates derived from either dataset collected aboard the Turbo Commander were higher than estimates derived from the UAS imagery. Uncertainties in density estimates derived from the marine mammal observer data were lower than estimates derived from either imagery dataset due to the small sample sizes in the imagery. The visual line-transect aerial survey conducted by marine mammal observers aboard the Turbo Commander was 68.5% of the cost of the photo strip-transect survey aboard the same aircraft and 9.4% of the cost of the UAS survey.
24

Wahbeh, W., M. Ammann, S. Nebiker, M. van Eggermond, and A. Erath. "IMAGE-BASED REALITY-CAPTURING AND 3D MODELLING FOR THE CREATION OF VR CYCLING SIMULATIONS". ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences V-4-2021 (June 17, 2021): 225–32. http://dx.doi.org/10.5194/isprs-annals-v-4-2021-225-2021.

Abstract. With this paper, we present a novel approach for efficiently creating reality-based, high-fidelity urban 3D models for interactive VR cycling simulations. The foundation of these 3D models is accurately georeferenced street-level imagery, which can be captured using vehicle-based or portable mapping platforms. Depending on the desired type of urban model, the street-level imagery is either used for semi-automatically texturing an existing city model or for automatically creating textured 3D meshes from multi-view reconstructions using commercial off-the-shelf software. The resulting textured urban 3D model is then integrated with a real-time traffic simulation solution to create a VR framework based on the Unity game engine. Subsequently, the resulting urban scenes and different planning scenarios can be explored on a physical cycling simulator using a VR helmet or viewed as a 360-degree or conventional video. In addition, the VR environment can be used for augmented reality applications, e.g., mobile augmented reality maps. We apply this framework to a case study in the city of Berne to illustrate design variants of new cycling infrastructure at a major traffic junction to collect feedback from practitioners about the potential for practical applications in planning processes.
25

Katare, Prateek, Navchetan Awasthi, Aravind Venukumar, and Sai Siva Gorthi. "Low-Cost, Continuous Motion Imaging, Computationally Augmented Whole Slide Imager for Digital Pathology". IEEE Journal of Selected Topics in Quantum Electronics 27, no. 4 (July 2021): 1–7. http://dx.doi.org/10.1109/jstqe.2021.3067389.

26

Park, Minjung, and Jungmin Yoo. "Effects of perceived interactivity of augmented reality on consumer responses: A mental imagery perspective". Journal of Retailing and Consumer Services 52 (January 2020): 101912. http://dx.doi.org/10.1016/j.jretconser.2019.101912.

27

Trojan, Jörg, Martin Diers, Xaver Fuchs, Felix Bach, Robin Bekrater-Bodmann, Jens Foell, Sandra Kamping, Mariela Rance, Heiko Maaß, and Herta Flor. "An augmented reality home-training system based on the mirror training and imagery approach". Behavior Research Methods 46, no. 3 (December 13, 2013): 634–40. http://dx.doi.org/10.3758/s13428-013-0412-4.

28

Park, Cheolsoo, Clive Cheong Took, and Danilo P. Mandic. "Augmented Complex Common Spatial Patterns for Classification of Noncircular EEG From Motor Imagery Tasks". IEEE Transactions on Neural Systems and Rehabilitation Engineering 22, no. 1 (January 2014): 1–10. http://dx.doi.org/10.1109/tnsre.2013.2294903.

29

Ruffner, John W., and Jim E. Fulbrook. "Usability Considerations for a Tower Controller Near-Eye Augmented Reality Display". Proceedings of the Human Factors and Ergonomics Society Annual Meeting 51, no. 2 (October 2007): 117–21. http://dx.doi.org/10.1177/154193120705100214.

Although the primary means by which air traffic controllers in airport towers obtain information is by direct head-up, out-the-window (OTW) viewing, they spend a considerable amount of head-down time looking at flight strips, panel-mounted displays, and other information sources in the tower. The U.S. Air Force sponsored the development of a prototype Near-Eye Augmented Reality (NE/AR) display to enhance tower controller performance and situation awareness. The display overlays situation-relevant text (e.g., aircraft call sign) and graphic images (e.g., runway outline), on real-time, head-tracked video imagery. Throughout the development process, we performed usability engineering and assessments using 1) user/task observation, 2) physical mockups, 3) interactive reviews, and 4) early prototype evaluation. In this paper we describe our usability efforts, and discuss usability considerations and human performance issues affecting the functionality and acceptance of a tower controller NE/AR display.
30

Wei, Chih-Chiang, Gene Jiing-Yun You, Li Chen, Chien-Chang Chou, and Jinsheng Roan. "Diagnosing Rain Occurrences Using Passive Microwave Imagery: A Comparative Study on Probabilistic Graphical Models and “Black Box” Models". Journal of Atmospheric and Oceanic Technology 32, no. 10 (October 2015): 1729–44. http://dx.doi.org/10.1175/jtech-d-14-00164.1.

Abstract: Rainfall is a fundamental process in the hydrologic cycle. This study investigated the cause–effect relationship in which precipitation at lower frequencies affects the amount of emitted radiation and at higher frequencies affects the amount of backscattered terrestrial radiation. Because the advantage of a probabilistic graphical model is its graphical representation, which allows easy causality interpretation using the arc directions, two Bayesian networks (BNs) were used, namely, a naïve Bayes classifier and a tree-augmented naïve Bayes model. To empirically evaluate and compare BN-based models, “black box”–based models, including nearest-neighbor searches and artificial neural network (ANN)-based multilayer perceptron and logistic regression, were used as benchmarks. For the two study regions—namely, the Tanshui River basin in northern Taiwan and Chianan Plain in southern Taiwan—rain occurrences during typhoon seasons were examined using passive microwave imagery recorded using the Special Sensor Microwave Imager/Sounder. The results show that although black box models exhibit excellent prediction ability, interpretation of their behavior is unsatisfactory. By contrast, probabilistic graphical models can explicitly reveal the causal relationship between brightness temperatures and nonrain/rain discrimination. For the Tanshui River basin, 19.35-, 22.23-, 37.0-, and 85.5-GHz vertically polarized brightness temperatures were found to diagnose rain occurrences. For the Chianan Plain, a more sensitive indicator of rain-scattering signals was obtained using 85-GHz measurements. The results demonstrate the potential use of BNs in identifying rain occurrences in regions with land features comprising various absorbing and scattering materials.
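A naïve Bayes classifier of the kind compared above treats each brightness-temperature channel as conditionally independent given the rain/no-rain class. The sketch below fits a Gaussian naïve Bayes model to synthetic four-channel data standing in for the 19.35-, 22.23-, 37.0- and 85.5-GHz vertically polarized brightness temperatures; it is an assumed illustration, not the study's Bayesian-network models or data.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(1)
n = 400
# Synthetic stand-ins for Tb at 19.35V, 22.23V, 37.0V and 85.5V GHz (kelvin).
no_rain = rng.normal([265, 270, 268, 275], 4, size=(n, 4))
rain    = rng.normal([272, 276, 270, 255], 6, size=(n, 4))  # 85.5 GHz depressed by scattering
X = np.vstack([no_rain, rain])
y = np.repeat([0, 1], n)  # 0 = no rain, 1 = rain

model = GaussianNB().fit(X, y)
print(model.predict([[266, 271, 268, 274], [273, 277, 271, 250]]))  # expected: [0 1]
```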
31

Väljamäe, Aleksander, Pontus Larsson, Daniel Västfjäll, and Mendel Kleiner. "Sound Representing Self-Motion in Virtual Environments Enhances Linear Vection". Presence: Teleoperators and Virtual Environments 17, no. 1 (February 1, 2008): 43–56. http://dx.doi.org/10.1162/pres.17.1.43.

Sound is an important, but often neglected, component for creating a self-motion illusion (vection) in Virtual Reality applications, for example, motion simulators. Apart from auditory motion cues, sound can provide contextual information representing self-motion in a virtual environment. In two experiments we investigated the benefits of hearing an engine sound when presenting auditory (Experiment 1) or auditory-vibrotactile (Experiment 2) virtual environments inducing linear vection. The addition of the engine sound to the auditory scene significantly enhanced subjective ratings of vection intensity in Experiment 1 and vection onset times but not subjective ratings in Experiment 2. Further analysis using individual imagery vividness scores showed that this disparity between vection measures was created by participants with higher kinesthetic imagery. On the other hand, for participants with lower kinesthetic imagery scores, the engine sound enhanced vection sensation in both experiments. A high correlation with participants' kinesthetic imagery vividness scores suggests the influence of a first person perspective in the perception of the engine sound. We hypothesize that self-motion sounds (e.g., the sound of footsteps, engine sound) represent a specific type of acoustic body-centered feedback in virtual environments. Therefore, the results may contribute to a better understanding of the role of self-representation sounds (sonic self-avatars), in virtual and augmented environments.
32

Marques, Bruno, Jacqueline McIntosh, and Hannah Carson. "Whispering tales: using augmented reality to enhance cultural landscapes and Indigenous values". AlterNative: An International Journal of Indigenous Peoples 15, no. 3 (June 30, 2019): 193–204. http://dx.doi.org/10.1177/1177180119860266.

Increasingly, our built and natural environments are becoming hybrids of real and digital entities where objects, buildings and landscapes are linked online in websites, blogs and texts. In the case of Aotearoa New Zealand, modern lifestyles have put Māori Indigenous oral narratives at risk of being lost in a world dominated by text and digital elements. Intangible values, transmitted orally from generation to generation, provide a sense of identity and community to Indigenous Māori as they relate and experience the land based on cultural, spiritual, emotion, physical and social values. Retaining the storytelling environment through the use of augmented reality, this article extends the biophysical attributes of landscape through embedded imagery and auditory information. By engaging with Ngāti Kahungunu ki Wairarapa, a design approach has been developed to illustrate narratives through different media, in a way that encourages a deeper and broader bicultural engagement with landscape.
33

Wang, Zhuozheng, Zhuo Ma, Xiuwen Du, Yingjie Dong, and Wei Liu. "Research on the Key Technologies of Motor Imagery EEG Signal Based on Deep Learning". Journal of Autonomous Intelligence 2, no. 2 (February 24, 2020): 73. http://dx.doi.org/10.32629/jai.v2i2.60.

Brain-computer interface (BCI) is an emerging area of research that establishes a connection between the brain and external devices in a completely new way. It provides new ideas for the rehabilitation of brain diseases, human-computer interaction and augmented reality. One of the main problems in implementing BCI is recognizing and classifying motor imagery electroencephalography (EEG) signals effectively. This paper takes motor imagery EEG data as the research object for a multi-class classification method. In this study, we use the Emotiv helmet with 16 biomedical sensors to obtain EEG signals, adopt fast independent component analysis and the fast Fourier transform for signal preprocessing, and select the common spatial pattern algorithm to extract the features of the motor imagery EEG signal. To improve the recognition accuracy of the EEG signal, a new deep-learning network, named min-VGG-LSTMnet in this paper, is designed for the multi-channel self-acquired EEG data set. This network combines a Long Short-Term Memory network with the convolutional neural network VGG and achieves four-class classification of left-hand, right-hand, left-foot and right-foot lifting movements based on motor imagery. The results show that the accuracy of the proposed classification method is at least 8.18% higher than that of other mainstream deep-learning methods.
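The common spatial pattern (CSP) step mentioned above finds spatial filters that maximize the variance ratio between two motor-imagery classes via a generalized eigendecomposition of the class covariance matrices. A minimal two-class sketch on synthetic data is given below; extending it to the paper's four classes (e.g., one-vs-rest) and the VGG-LSTM stage is not shown, and the data are purely illustrative.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(x1, x2, n_pairs=2):
    """x1, x2: (trials, channels, samples) epochs for two classes.
    Returns 2*n_pairs spatial filters (rows) from the generalized eigenproblem."""
    def mean_cov(x):
        covs = [np.cov(trial) for trial in x]          # channel-by-channel covariance
        return np.mean([c / np.trace(c) for c in covs], axis=0)
    c1, c2 = mean_cov(x1), mean_cov(x2)
    vals, vecs = eigh(c1, c1 + c2)                      # generalized eigendecomposition
    order = np.argsort(vals)
    pick = np.r_[order[:n_pairs], order[-n_pairs:]]     # most discriminative filters
    return vecs[:, pick].T

rng = np.random.default_rng(0)
x1 = rng.standard_normal((20, 16, 256))
x2 = rng.standard_normal((20, 16, 256)) * np.linspace(0.5, 1.5, 16)[None, :, None]
W = csp_filters(x1, x2)
features = np.log(np.var(W @ x1[0], axis=1))            # log-variance features per filter
print(W.shape, features.shape)                           # (4, 16) (4,)
```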
34

Khan, Dawar, Inam ur Rehman, Sehat Ullah, Waheed Ahmad, Zhanglin Cheng, Gul Jabeen, and Hirokazu Kato. "A Low-Cost Interactive Writing Board for Primary Education using Distinct Augmented Reality Markers". Sustainability 11, no. 20 (October 16, 2019): 5720. http://dx.doi.org/10.3390/su11205720.

Educational institutions demand cost-effective and simple-to-use augmented reality systems. ARToolKit, an open-source computer tracking library for the creation of augmented reality applications that overlay virtual imagery on the real world, is such a system. It uses a simple camera and black-and-white markers printed on paper. However, due to inter-marker confusion, the markers are often misrecognized if the marker distinctions are not ensured. This paper presents an ARToolKit-based Interactive Writing Board (IWB) with a simple mechanism for designing confusion-free marker libraries. The board is used for teaching single characters of Arabic/Urdu to primary level students. It uses a simple ARToolKit marker for the recognition of each character. After marker recognition, the IWB displays the corresponding image, helping students with character understanding and pronunciation. Experimental results reveal that the system improves students’ motivation and learning skills.
35

Hamson-Utley, J. Jordan, Scott Martin, and Jason Walters. "Athletic Trainers' and Physical Therapists' Perceptions of the Effectiveness of Psychological Skills Within Sport Injury Rehabilitation Programs". Journal of Athletic Training 43, no. 3 (May 1, 2008): 258–64. http://dx.doi.org/10.4085/1062-6050-43.3.258.

Abstract Context: Psychological skills are alleged to augment sport-injury rehabilitation; however, implementation of mental imagery within rehabilitation programs is limited. Objective: To examine attitudes of athletic trainers (ATs) and physical therapists (PTs) on the effectiveness of mental imagery, goal setting, and positive self-talk to improve rehabilitation adherence and recovery speed of injured athletes. Design: The ATs and PTs were contacted via electronic or physical mailings to complete a single administration survey that measured their beliefs about the effectiveness of psychological skills for increasing adherence and recovery speed of injured athletes undergoing rehabilitation. Setting: Professional member databases of the National Athletic Trainers' Association and the American Physical Therapy Association. Patients or Other Participants: Of the 1000 ATs and 1000 PTs who were selected randomly, 309 ATs (age = 34.18 ± 8.32 years, years in profession = 10.67 ± 7.34) and 356 PTs (age = 38.58 ± 7.51 years, years in profession = 13.18 ± 6.17) responded. Main Outcome Measure(s): The Attitudes About Imagery (AAI) survey measures attitudes about psychological skills for enhancing adherence and recovery speed of injured athletes. The AAI includes demographic questions and 15 items on a 7-point Likert scale measuring attitudes about the effectiveness of mental imagery, self-talk, goal setting, and pain control on rehabilitation adherence and recovery speed of injured athletes. Test-retest reliability ranged from .60 to .84 and Cronbach αs ranged from .65 to .90. We calculated 1-way analyses of variance to determine whether differences existed in attitudes as a result of the professionals' education, training experience, and interest. Results: Mean differences were found on attitudes about effectiveness of psychological skills for those who reported formal training and those who reported interest in receiving formal training (P < .05). In addition, ATs held more positive attitudes than PTs on 9 of 15 AAI items (P < .05). Conclusions: Overall, ATs and PTs held positive attitudes on the effectiveness of psychological skills to augment the rehabilitation process. Clinical implications regarding the use of mental skills are discussed.
36

Tutzauer, P., and N. Haala. "PROCESSING OF CRAWLED URBAN IMAGERY FOR BUILDING USE CLASSIFICATION". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-1/W1 (May 31, 2017): 143–49. http://dx.doi.org/10.5194/isprs-archives-xlii-1-w1-143-2017.

Recent years have shown a shift from purely geometric 3D city models to data with semantics. This is induced by new applications (e.g. Virtual/Augmented Reality) and also a requirement for concepts like Smart Cities. However, essential urban semantic data like building use categories are often not available. We present a first step in bridging this gap by proposing a pipeline that uses crawled urban imagery and links it with ground truth cadastral data as an input for automatic building use classification. We aim to extract this city-relevant semantic information automatically from Street View (SV) imagery. Convolutional Neural Networks (CNNs) have proved to be extremely successful for image interpretation; however, they require a huge amount of training data. The main contribution of the paper is the automatic provision of such training datasets by linking semantic information, as already available from databases provided by national mapping agencies or city administrations, to the corresponding façade images extracted from SV. Finally, we present first investigations with a CNN and an alternative classifier as a proof of concept.
37

Falkowski, Michael J., Michael A. Wulder, Joanne C. White, and Mark D. Gillis. "Supporting large-area, sample-based forest inventories with very high spatial resolution satellite imagery". Progress in Physical Geography: Earth and Environment 33, no. 3 (June 2009): 403–23. http://dx.doi.org/10.1177/0309133309342643.

Information needs associated with forest management and reporting require data with a steadily increasing level of detail and temporal frequency. Remote sensing satellites commonly used for forest monitoring (eg, Landsat, SPOT) typically collect imagery with sufficient temporal frequency, but lack the requisite spatial and categorical detail for some forest inventory information needs. Aerial photography remains a principal data source for forest inventory; however, information extraction is primarily accomplished through manual processes. The spatial, categorical, and temporal information requirements of large-area forest inventories can be met through sample-based data collection. Opportunities exist for very high spatial resolution (VHSR; ie, <1 m) remotely sensed imagery to augment traditional data sources for large-area, sample-based forest inventories, especially for inventory update. In this paper, we synthesize the state-of-the-art in the use of VHSR remotely sensed imagery for forest inventory and monitoring. Based upon this review, we develop a framework for updating a sample-based, large-area forest inventory that incorporates VHSR imagery. Using the information needs of the Canadian National Forest Inventory (NFI) for context, we demonstrate the potential capabilities of VHSR imagery in four phases of the forest inventory update process: stand delineation, automated attribution, manual interpretation, and indirect attribute modelling. Although designed to support the information needs of the Canadian NFI, the framework presented herein could be adapted to support other sample-based, large-area forest monitoring initiatives.
38

Hasler, O., S. Blaser, and S. Nebiker. "PERFORMANCE EVALUATION OF A MOBILE MAPPING APPLICATION USING SMARTPHONES AND AUGMENTED REALITY FRAMEWORKS". ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences V-2-2020 (August 3, 2020): 741–47. http://dx.doi.org/10.5194/isprs-annals-v-2-2020-741-2020.

Abstract. In this paper, we present a performance evaluation of our smartphone-based mobile mapping application based on an augmented reality (AR) framework in demanding outdoor environments. The implementation runs on Android and iOS devices and demonstrates the great potential of smartphone-based 3D mobile mapping. The application includes several functionalities such as device tracking, coordinate, and distance measuring as well as capturing georeferenced imagery. We evaluated our prototype system by comparing measured points from the tracked device with ground control points in an outdoor environment with four different campaigns. The campaigns consisted of open and closed-loop trajectories and different ground surfaces such as grass, concrete and gravel. Two campaigns passed a stairway in either direction. Our results show that the absolute 3D accuracy of device tracking with state-of-the-art AR framework on a standard smartphone is around 1% of the travelled distance and that the local 3D accuracy reaches sub-decimetre level.
39

Dash, Jonathan, Grant Pearse, and Michael Watt. "UAV Multispectral Imagery Can Complement Satellite Data for Monitoring Forest Health". Remote Sensing 10, no. 8 (August 3, 2018): 1216. http://dx.doi.org/10.3390/rs10081216.

The development of methods that can accurately detect physiological stress in forest trees caused by biotic or abiotic factors is vital for ensuring productive forest systems that can meet the demands of the Earth’s population. The emergence of new sensors and platforms presents opportunities to augment traditional practices by combining remotely-sensed data products to provide enhanced information on forest condition. We tested the sensitivity of multispectral imagery collected from time-series unmanned aerial vehicle (UAV) and satellite imagery to detect herbicide-induced stress in a carefully controlled experiment carried out in a mature Pinus radiata D. Don plantation. The results revealed that both data sources were sensitive to physiological stress in the study trees. The UAV data were more sensitive to changes at a finer spatial resolution and could detect stress down to the level of individual trees. The satellite data tested could only detect physiological stress in clusters of four or more trees. Resampling the UAV imagery to the same spatial resolution as the satellite imagery revealed that the differences in sensitivity were not solely the result of spatial resolution. Instead, vegetation indices suited to the sensor characteristics of each platform were required to optimise the detection of physiological stress from each data source. Our results define both the spatial detection threshold and the optimum vegetation indices required to implement monitoring of this forest type. A comparison between time-series datasets of different spectral indices showed that the two sensors are compatible and can be used to deliver an enhanced method for monitoring physiological stress in forest trees at various scales. We found that the higher resolution UAV imagery was more sensitive to fine-scale instances of herbicide induced physiological stress than the RapidEye imagery. Although less sensitive to smaller phenomena the satellite imagery was found to be very useful for observing trends in physiological stress over larger areas.
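The platform-specific vegetation indices discussed above are simple band arithmetic; the normalized difference vegetation index (NDVI) is the canonical example. A minimal sketch is shown below, assuming red and near-infrared reflectance arrays are already available; the paper itself selects indices matched to each sensor, which are not necessarily NDVI.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index from NIR and red reflectance rasters."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)

# Tiny illustrative rasters: stressed canopies show lower NIR and higher red reflectance.
nir = np.array([[0.45, 0.40], [0.30, 0.25]])
red = np.array([[0.05, 0.06], [0.10, 0.12]])
print(ndvi(nir, red).round(2))
```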
40

Mahmoud, Lama Saad El-Din, Nawal Abd EL-Raouf Abu Shady and Ehab Shaker Hafez. "Motor imagery training with augmented cues of motor learning on cognitive functions in patients with Parkinsonism". International Journal of Therapy and Rehabilitation 25, no. 1 (January 2, 2018): 13–19. http://dx.doi.org/10.12968/ijtr.2018.25.1.13.

41

Sundt, Håkon, Knut Alfredsen and Atle Harby. "Regionalized Linear Models for River Depth Retrieval Using 3-Band Multispectral Imagery and Green LIDAR Data". Remote Sensing 13, no. 19 (September 29, 2021): 3897. http://dx.doi.org/10.3390/rs13193897.

Abstract
Bathymetry is of vital importance in river studies but obtaining full-scale riverbed maps often requires considerable resources. Remote sensing imagery can be used for efficient depth mapping in both space and time. Multispectral image depth retrieval requires imagery with a certain level of quality and local in-situ depth observations for the calculation and verification of models. To assess the potential of providing extensive depth maps in rivers lacking local bathymetry, we tested the application of three platform-specific, regionalized linear models for depth retrieval across four Norwegian rivers. We used imagery from satellite platforms Worldview-2 and Sentinel-2, along with local aerial images to calculate the intercept and slope vectors. Bathymetric input was provided using green Light Detection and Ranging (LIDAR) data augmented by sonar measurements. By averaging platform-specific intercept and slope values, we calculated regionalized linear models and tested model performance in each of the four rivers. While the performance of the basic regional models was comparable to local river-specific models, regional models were improved by including the estimated average depth and a brightness variable. Our results show that regionalized linear models for depth retrieval can potentially be applied for extensive spatial and temporal mapping of bathymetry in water bodies where local in-situ depth measurements are lacking.
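The regionalized approach described above can be sketched as follows (an illustrative simplification under assumed inputs, not the authors' implementation): fit an intercept and slope per river, average them into a regional model, and predict depth from an image-derived predictor such as a log band ratio.

```python
import numpy as np

def fit_linear_depth_model(predictor, depth):
    """Fit depth = intercept + slope * predictor by least squares for one river."""
    A = np.column_stack([np.ones_like(predictor), predictor])
    (intercept, slope), *_ = np.linalg.lstsq(A, depth, rcond=None)
    return intercept, slope

def regionalized_model(per_river_fits):
    """Average per-river (intercept, slope) pairs into a single regional model."""
    arr = np.asarray(per_river_fits)
    return arr[:, 0].mean(), arr[:, 1].mean()

def predict_depth(intercept, slope, predictor):
    """Apply the (regionalized) linear model to a predictor image or vector."""
    return intercept + slope * predictor
```

With these pieces, something like `predict_depth(*regionalized_model(fits), np.log(green / red))` would produce a depth map under a Lyzenga-style log band-ratio predictor; the choice of predictor here is an assumption for illustration.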
42

Mei, S. and R. Paulen. "Using multi-beam RADARSAT-1 imagery to augment mapping surficial geology in northwest Alberta, Canada". Canadian Journal of Remote Sensing 35, no. 1 (January 2009): 1–22. http://dx.doi.org/10.5589/m08-077.

43

Liu, Karen P. Y., Chetwyn C. H. Chan, Rebecca S. M. Wong, Ivan W. L. Kwan, Christina S. F. Yau, Leonard S. W. Li and Tatia M. C. Lee. "A Randomized Controlled Trial of Mental Imagery Augment Generalization of Learning in Acute Poststroke Patients". Stroke 40, no. 6 (June 2009): 2222–25. http://dx.doi.org/10.1161/strokeaha.108.540997.

44

Hu, Yuxin, Yini Li and Zongxu Pan. "A Dual-Polarimetric SAR Ship Detection Dataset and a Memory-Augmented Autoencoder-Based Detection Method". Sensors 21, no. 24 (December 19, 2021): 8478. http://dx.doi.org/10.3390/s21248478.

Abstract
With the development of imaging and spaceborne satellite technology, a growing volume of multipolarized SAR imagery has been used for object detection. However, most of the existing public SAR ship datasets are grayscale images acquired in a single polarization mode. To make full use of the polarization characteristics of multipolarized SAR, a dual-polarimetric SAR dataset specifically intended for ship detection (DSSDD) is presented in this paper. For its construction, 50 dual-polarimetric Sentinel-1 SAR images were cropped into 1236 image slices of 256 × 256 pixels. The variances and covariance of the VV and VH polarizations were fused into the R, G, and B channels of a pseudo-color image. Each ship was labeled with both a rotatable bounding box (RBox) and a horizontal bounding box (BBox). Apart from 8-bit pseudo-color images, DSSDD also provides 16-bit complex data. Two prevalent object detectors, R3Det and Yolo-v4, were implemented on DSSDD to establish detector baselines with the RBox and BBox, respectively. Furthermore, we propose a weakly supervised ship detection method based on anomaly detection via an advanced memory-augmented autoencoder (MemAE), which can significantly reduce the false alarms generated by the two-parameter CFAR algorithm applied to our dual-polarimetric dataset. The proposed advanced MemAE method offers a lower annotation workload, high efficiency, and good performance even compared with supervised methods, making it a promising direction for ship detection in dual-polarimetric SAR images. The dataset is available on GitHub.
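The pseudo-color fusion described in this abstract can be sketched roughly as below (a minimal illustration; the exact scaling and local-window statistics used for DSSDD are not reproduced, the per-pixel quantities and percentile stretch are assumptions): the VV and VH intensities and the magnitude of their cross-channel product are mapped to the R, G, and B channels.

```python
import numpy as np

def dual_pol_pseudocolor(vv, vh):
    """Fuse dual-pol complex SAR channels into an 8-bit pseudo-colour image.

    vv, vh : (H, W) complex arrays (single-look complex patches).
    R <- |VV|^2, G <- |VH|^2, B <- |VV * conj(VH)| (covariance magnitude).
    """
    r = np.abs(vv) ** 2
    g = np.abs(vh) ** 2
    b = np.abs(vv * np.conj(vh))

    def to_uint8(x):
        lo, hi = np.percentile(x, (2, 98))                 # simple contrast stretch (assumed)
        return np.clip((x - lo) / (hi - lo + 1e-12) * 255, 0, 255).astype(np.uint8)

    return np.dstack([to_uint8(r), to_uint8(g), to_uint8(b)])
```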
45

Dijaya, Rohman, Noor Mayaminiy Maulidah and Dahlan Abdullah. "Flashcard computer generated imagery medicinal plant for orthopedagogic education". MATEC Web of Conferences 197 (2018): 15005. http://dx.doi.org/10.1051/matecconf/201819715005.

Abstract
Indonesia's tropical forests hold a natural wealth of plants, including ornamental plants, fruits, vegetables, spices, and medicinal plants. Medicinal plants are plants recognized as sources of medicine, yet community knowledge about them is limited, so an application for learning about the benefits of medicinal plants is needed, especially for children. The subjects of orthopedagogic education are exceptional children whose impairments require the services of special educators. Learning media that engage sensory and motor functions can overcome the limitations of children who are deaf or unable to speak and can improve their motor skills, since such children have difficulty understanding spoken and written language. Advances in computer science are making the educational process more engaging and applicable, improving the quality of educational media and learners' interest in learning. Augmented Reality (AR) learning media are a technique for displaying objects directly by pointing the camera at a real object (marker). The aim of the developed application is to present interactive 3-dimensional learning media, using flashcards as markers, for 20 types of medicinal plants. This is intended to help users, particularly in orthopedagogic education, recognize the types of plants that have medicinal value.
46

Szantoi, Zoltan, Scot E. Smith, Giovanni Strona, Lian Pin Koh and Serge A. Wich. "Mapping orangutan habitat and agricultural areas using Landsat OLI imagery augmented with unmanned aircraft system aerial photography". International Journal of Remote Sensing 38, no. 8-10 (January 23, 2017): 2231–45. http://dx.doi.org/10.1080/01431161.2017.1280638.

47

Schack, L., U. Soergel and C. Heipke. "GRAPH MATCHING FOR THE REGISTRATION OF PERSISTENT SCATTERERS TO OPTICAL OBLIQUE IMAGERY". ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences III-7 (June 7, 2016): 195–202. http://dx.doi.org/10.5194/isprsannals-iii-7-195-2016.

Abstract
Matching Persistent Scatterers (PS) to airborne optical imagery is one possibility to augment applications and deepen the understanding of SAR processing and products. While this data registration task has recently been carried out with PS and optical nadir images, the alternatively available optical oblique imagery is mostly neglected. Yet, the sensing geometry of oblique images is very similar to that of SAR in terms of viewing direction. We exploit the additional information coming with these optical sensors to assign individual PS to single parts of buildings. The key idea is to incorporate topology information, which is derived by grouping regularly aligned PS at facades, and to use it together with a geometry-based measure in order to establish a consistent and meaningful matching result. We formulate this task as an optimization problem and derive a graph-matching-based algorithm with guaranteed convergence to solve it. Two exemplary case studies show the plausibility of the presented approach.
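The full graph-matching formulation with topology terms is beyond a short example, but a strongly simplified, geometry-only assignment between projected PS positions and facade features can be sketched as follows (a hypothetical illustration, not the authors' algorithm; coordinates, units, and the distance threshold are assumptions).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_ps_to_facade_features(ps_xy, feat_xy, max_dist=5.0):
    """Assign projected persistent scatterers to facade features in image space.

    ps_xy, feat_xy : (N, 2) and (M, 2) pixel coordinates.
    Returns (ps_index, feature_index) pairs whose distance is below max_dist.
    """
    cost = np.linalg.norm(ps_xy[:, None, :] - feat_xy[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)          # one-to-one geometric assignment
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] <= max_dist]
```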
48

Schack, L., U. Soergel and C. Heipke. "GRAPH MATCHING FOR THE REGISTRATION OF PERSISTENT SCATTERERS TO OPTICAL OBLIQUE IMAGERY". ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences III-7 (June 7, 2016): 195–202. http://dx.doi.org/10.5194/isprs-annals-iii-7-195-2016.

Abstract
Matching Persistent Scatterers (PS) to airborne optical imagery is one possibility to augment applications and deepen the understanding of SAR processing and products. While this data registration task has recently been carried out with PS and optical nadir images, the alternatively available optical oblique imagery is mostly neglected. Yet, the sensing geometry of oblique images is very similar to that of SAR in terms of viewing direction. We exploit the additional information coming with these optical sensors to assign individual PS to single parts of buildings. The key idea is to incorporate topology information, which is derived by grouping regularly aligned PS at facades, and to use it together with a geometry-based measure in order to establish a consistent and meaningful matching result. We formulate this task as an optimization problem and derive a graph-matching-based algorithm with guaranteed convergence to solve it. Two exemplary case studies show the plausibility of the presented approach.
49

Fetanat, Gholamreza, Abdollah Homaifar and Kenneth R. Knapp. "Objective Tropical Cyclone Intensity Estimation Using Analogs of Spatial Features in Satellite Data". Weather and Forecasting 28, no. 6 (December 1, 2013): 1446–59. http://dx.doi.org/10.1175/waf-d-13-00006.1.

Abstract
Abstract. An objective method for estimating tropical cyclone (TC) intensity using historical hurricane satellite data (HURSAT) is developed and tested. This new method, referred to as feature analogs in satellite imagery (FASI), requires a TC's center location to extract azimuthal brightness temperature (BT) profiles from current imagery as well as BT profiles from imagery 6, 12, and 24 h prior. Instead of using regression techniques, the method determines the estimated TC intensity from the 10 closest analogs to the current TC, based on the BT profiles, using a k-nearest-neighbor algorithm. The FASI technique was trained and validated using intensity data from aircraft reconnaissance in the North Atlantic Ocean, where the data were restricted to storms that are over water and south of 45°N. This subset comprised 2016 observations from 165 storms during 1988–2006. Several tests were implemented to statistically justify the FASI algorithm using n-fold cross-validation. The resulting mean absolute intensity error was 10.9 kt (50% of estimates are within 10 kt; 1 kt = 0.51 m s⁻¹) or 8.4 mb (50% of estimates are within 8 mb); its accuracy is on par with other objective techniques. This approach has the potential to provide global TC intensity estimates that could augment intensity estimates made by other objective techniques.
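The analog search at the core of FASI is essentially a k-nearest-neighbor lookup; a minimal sketch is given below (the construction of feature vectors from the azimuthal BT profiles is not reproduced, the plain average over analogs and all names are assumptions).

```python
import numpy as np

def knn_intensity_estimate(query_profile, archive_profiles, archive_intensity, k=10):
    """Estimate TC intensity from the k nearest analogs in brightness-temperature space.

    query_profile     : (F,) feature vector built from azimuthal BT profiles
    archive_profiles  : (N, F) feature vectors of historical cases
    archive_intensity : (N,) best-track intensities (e.g. kt) for those cases
    """
    d = np.linalg.norm(archive_profiles - query_profile, axis=1)   # distance to every analog
    nearest = np.argsort(d)[:k]                                    # k closest historical cases
    return archive_intensity[nearest].mean()
```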
50

Al Rahhal, Mohamad Mahmoud, Yakoub Bazi, Rami M. Jomaa, Ahmad AlShibli, Naif Alajlan, Mohamed Lamine Mekhalfi and Farid Melgani. "COVID-19 Detection in CT/X-ray Imagery Using Vision Transformers". Journal of Personalized Medicine 12, no. 2 (February 18, 2022): 310. http://dx.doi.org/10.3390/jpm12020310.

Abstract
The steady spread of the 2019 Coronavirus disease has caused human and economic losses and imposed a new lifestyle across the world. In this context, medical imaging tests such as computed tomography (CT) and X-ray have demonstrated sound screening potential. Deep learning methods have shown superior image analysis capabilities compared with earlier handcrafted approaches. In this paper, we propose a novel deep learning framework for Coronavirus detection using CT and X-ray images. In particular, a Vision Transformer architecture is adopted as the backbone of the proposed network, in which a Siamese encoder is utilized. The latter is composed of two branches: one for processing the original image and another for processing an augmented view of the original image. The input images are divided into patches and fed through the encoder. The proposed framework is evaluated on public CT and X-ray datasets. The proposed system confirms its superiority over state-of-the-art methods on CT and X-ray data in terms of accuracy, precision, recall, specificity, and F1 score. Furthermore, the proposed system also exhibits good robustness when only a small portion of the training data is used.
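A heavily simplified sketch of a Siamese transformer encoder over image patches is given below (an assumption-laden PyTorch illustration, not the authors' architecture or hyperparameters; the patch embedding, pooling, and fusion head are all assumptions made only to show the two-branch, shared-weight idea).

```python
import torch
import torch.nn as nn

class SiameseViTEncoder(nn.Module):
    """Shared-weight transformer encoder applied to an image and an augmented view."""

    def __init__(self, img_size=224, patch=16, dim=256, depth=6, heads=8, n_classes=2):
        super().__init__()
        n_patches = (img_size // patch) ** 2
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)   # patchify
        self.pos = nn.Parameter(torch.zeros(1, n_patches, dim))                 # learned positions
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(2 * dim, n_classes)        # fuse both branches for classification

    def encode(self, x):
        tokens = self.patch_embed(x).flatten(2).transpose(1, 2) + self.pos
        return self.encoder(tokens).mean(dim=1)          # mean-pooled patch representation

    def forward(self, img, img_aug):
        z1, z2 = self.encode(img), self.encode(img_aug)  # same weights on both views
        return self.head(torch.cat([z1, z2], dim=1))
```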
