Academic literature on the topic 'Virtual retinal display'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Virtual retinal display.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Virtual retinal display"

1

Pryor, Homer L., Thomas A. Furness, and Erik Viirre. "Demonstration of the Virtual Retinal Display: A New Display Technology Using Scanned Laser Light." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 42, no. 16 (October 1998): 1149. http://dx.doi.org/10.1177/154193129804201609.

Abstract:
The Virtual Retinal Display (VRD) is a new display technology that scans modulated low-energy laser light directly onto the viewer's retina to create the perception of a virtual image. This approach provides an unprecedented way to stream photons to the receptors of the eye, affording higher resolution, increased luminance, and potentially a wider field of view than previously possible in head-coupled displays. The VRD uses video signals from a graphics board or a video camera to modulate low-power coherent light from a red laser diode. A mechanical resonant scanner and galvanometer mirror then scan the photon stream from the laser diode in two dimensions through reflective elements and a semitransparent combiner such that a raster of light is imaged on the retina. The pixels produced on the retina have no persistence, yet they create the perception of a brilliant, full-color, flicker-free virtual image. Developmental models of the VRD have been shown to produce VGA and SVGA image quality. This demonstration exhibits the portable monochrome VRD.
2

Pryor, Homer L., Thomas A. Furness, and Erik Viirre. "The Virtual Retinal Display: A new Display Technology using Scanned Laser Light." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 42, no. 22 (October 1998): 1570–74. http://dx.doi.org/10.1177/154193129804202208.

Abstract:
The Virtual Retinal Display (VRD) is a new display technology that scans modulated low-energy laser light directly onto the viewer's retina to create the perception of a virtual image. This approach provides an unprecedented way to stream photons to the receptors of the eye, affording higher resolution, increased luminance, and potentially a wider field of view than previously possible in head-coupled displays. The VRD uses video signals from a graphics board or a video camera to modulate low-power coherent light from red, green, and blue photon sources such as gas lasers, laser diodes, and/or light-emitting diodes. The modulated light is then combined and piped through a single-mode optical fiber. A mechanical resonant scanner and galvanometer mirror then scan the photon stream from the fiber in two dimensions through reflective elements and a semitransparent combiner such that a raster of light is imaged on the retina. The pixels produced on the retina have no persistence, yet they create the perception of a brilliant, full-color, flicker-free virtual image. Developmental models of the VRD have been shown to produce VGA and SVGA image quality. This paper describes the VRD technology, the advantages it provides, and areas of human factors research ensuing from scanning light directly onto the retina. Future applications of the VRD are discussed, along with new research findings regarding the use of the VRD for people with low vision.
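The scanning scheme the abstract describes, a sinusoidal horizontal sweep from the resonant scanner combined with a linear vertical sweep from the galvanometer, can be sketched numerically. This is an illustrative model only; the scan frequencies and raster resolution below are assumptions for the sketch, not figures reported in the paper:

```python
import math

# Hypothetical scan parameters -- illustrative only, not from the paper.
H_SCAN_HZ = 15_750        # horizontal resonant scanner frequency (Hz)
V_SCAN_HZ = 60            # vertical galvanometer (frame) rate (Hz)
COLS, ROWS = 640, 480     # VGA raster, as mentioned in the abstract

def beam_position(t):
    """Return the normalized (x, y) beam deflection at time t (seconds).

    The resonant scanner sweeps horizontally as a sinusoid; the
    galvanometer provides a linear (sawtooth) vertical sweep.
    """
    x = 0.5 * (1 + math.sin(2 * math.pi * H_SCAN_HZ * t))  # 0..1
    y = (t * V_SCAN_HZ) % 1.0                              # 0..1 sawtooth
    return x, y

def pixel_under_beam(t):
    """Map the beam deflection to the raster pixel being drawn at time t."""
    x, y = beam_position(t)
    col = min(int(x * COLS), COLS - 1)
    row = min(int(y * ROWS), ROWS - 1)
    return col, row
```

Modulating the laser intensity with the video sample for `pixel_under_beam(t)` at each instant is what paints the raster on the retina; the sinusoidal horizontal sweep is why such systems typically blank or resample pixels near the edges, where the beam velocity approaches zero.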
3

Karmuse, Sachin Mohan, and Arun L. Kakhandki. "A Review on Real Time Heart Rate Monitoring System using USB camera." International Journal of Engineering and Computer Science 9, no. 2 (February 3, 2020): 24934–39. http://dx.doi.org/10.18535/ijecs/v9i2.4434.

Abstract:
Technological advancement nowadays moves at a fast pace. The latest display technology, the touch-screen display commonly used in our smartphones and tablet computers, may become mere history in the near future. Lack of space is one of the major problems faced by screen displays. This emerging display technology will replace the touch-screen environment and solve these problems at a higher level, making life more comfortable. The main aim of the screenless display is to display or transmit information without the help of a screen or projector. Using this display, we can project images directly onto the human retina, into open space, and even to the human brain. It avoids the need for heavy hardware and provides a high degree of privacy. This field began to progress in 2013 with the arrival of products like holographic videos, virtual reality headsets, retinal displays, mobiles for the elderly, EyeTap, etc. At present, only part of screenless display technology has been realized, which means that more advancement is necessary for the technology to mature. This problem will surely provide a pathway for screenless displays.
4

Korot, Edward, Aristomenis Thanos, Bozho Todorich, Prethy Rao, Maxwell S. Stem, and George A. Williams. "Use of the Avegant Glyph Head-Mounted Virtual Retinal Projection Display to Perform Vitreoretinal Surgery." Journal of VitreoRetinal Diseases 2, no. 1 (November 10, 2017): 22–25. http://dx.doi.org/10.1177/2474126417738613.

Abstract:
Objective: To evaluate the use of a novel retinal projection display in vitreoretinal surgery. Methods: The Avegant Glyph virtual retinal display, which uses a light-emitting diode and micromirror array to project directly onto the retinas of the user, was evaluated. This unit was modified for better operating room characteristics. It was evaluated by 6 surgeons performing mock vitreoretinal surgeries. Results: The majority reported high 3-dimensional (3-D) depth rendition, little hindrance to communication, and high confidence to perform procedures. Due to a small ocular size, surgeons conveyed that the Glyph provides a novel enhanced view for performing procedures benefiting from simultaneous intra- and extraocular visualization such as scleral depression. Safety analysis by performing fundus autofluorescence after 2 hours of Glyph operation did not reveal any gross qualitative change. Conclusion: Use of the Avegant Glyph to perform vitreoretinal surgery may provide ergonomic advantages, while its visualization and high 3-D stereoscopic depth rendition instill high surgeon confidence to safely perform procedures. We are performing further studies with objective data to validate the potential of this technology.
5

Menozzi, M., H. Krueger, P. Lukowicz, and G. Tröster. "Netzhautanzeigesystem („virtual retinal display“) mit Knotenpunktabbildung eines Laserstrahles: Konstruktionsbeispiel und Bewertung des subjektiven Helligkeitseindruckes - Perception of Brightness with a Virtual Retinal Display Using Badal Projection." Biomedizinische Technik/Biomedical Engineering 46, no. 3 (2001): 55–62. http://dx.doi.org/10.1515/bmte.2001.46.3.55.

6

McQuaide, Sarah C., Eric J. Seibel, Robert Burstein, and Thomas A. Furness. "50.4: Three-dimensional Virtual Retinal Display System using a Deformable Membrane Mirror." SID Symposium Digest of Technical Papers 33, no. 1 (2002): 1324. http://dx.doi.org/10.1889/1.1830190.

7

Suthau, Tim, and Olaf Hellwich. "Accuracy analysis of superimposition on a virtual retinal display in computer-aided surgery." International Congress Series 1281 (May 2005): 1293. http://dx.doi.org/10.1016/j.ics.2005.03.204.

8

Oehme, Olaf, Ludger Schmidt, and Holger Luczak. "Comparison Between the Strain Indicator HRV of a Head-Based Virtual Retinal Display and LC-Head Mounted Displays for Augmented Reality." International Journal of Occupational Safety and Ergonomics 9, no. 4 (January 2003): 419–30. http://dx.doi.org/10.1080/10803548.2003.11076579.

9

Ellis, Stephen R., and Urs J. Bucher. "Distance Perception of Stereoscopically Presented Virtual Objects Optically Superimposed on Physical Objects by a Head-Mounted See-Through Display." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 38, no. 19 (October 1994): 1300–1304. http://dx.doi.org/10.1177/154193129403801911.

Abstract:
The influence of physically presented background stimuli on distance judgements to optically overlaid, stereoscopic virtual images has been studied using head-mounted stereoscopic, virtual-image displays. Positioning an opaque physical object either at the perceived depth of the virtual image, or at a position substantially in front of it, has been observed to cause the virtual image to apparently move closer to the observer. In the case of physical objects positioned substantially in front of the virtual image, subjects often perceive the opaque object as transparent. Evidence is presented that the apparent change of position caused by interposition of the physical object is not influenced by the strengthening of occlusion cues but is influenced by motion of the physical objects, which would attract the subjects' ocular vergence. The observed effect appears to be associated with the relative conspicuousness of the overlaid virtual image and the background. This effect may be related to Foley's models of open-loop stereoscopic pointing errors, which attributed the stereoscopic distance errors to misjudgment of a reference point for the interpretation of retinal disparities. Some implications for the design of see-through displays for manufacturing are also discussed briefly.
10

Qi, Min, Shanshan Cui, Qianmin Du, Yuelei Xu, and David F. McAllister. "Visual Fatigue Alleviating in Stereo Imaging of Anaglyphs by Reducing Retinal Rivalry and Color Distortion Based on Mobile Virtual Reality Technology." Wireless Communications and Mobile Computing 2021 (September 15, 2021): 1–10. http://dx.doi.org/10.1155/2021/1285712.

Abstract:
Stereoscopic display is the means of showing scenes in Virtual Reality (VR). As a type of stereo image, anaglyphs can be displayed not only on screen but are currently the only form of stereo image that can be printed on paper. However, their deficiencies, such as retinal rivalry and color distortion, can cause visual fatigue. To address this issue, an algorithm for anaglyph generation is proposed. Unlike previous studies that consider only one aspect, it considers both retinal rivalry and color distortion at the same time. The algorithm works in the CIE L*a*b* color space and focuses on matching perceptual color attributes, especially hue, rather than directly minimizing the sum of the distances between the perceived anaglyph color and the stereo image pair. In addition, the paper builds a relatively complete framework for generating anaglyphs, making it easier to adjust parameters and choose the appropriate process. Subjective tests were conducted to compare the results with several anaglyph-generation techniques, including empirical and computational methods. Results show that the proposed algorithm performs well.
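The idea of matching perceptual attributes in CIE L*a*b* rather than minimizing raw color distance can be illustrated with a toy cost function. The hue weighting and the quadratic form below are assumptions made for the sketch, not the paper's actual objective:

```python
import math

def lab_hue(a, b):
    """Hue angle (radians) of a CIE L*a*b* color in the a*-b* plane."""
    return math.atan2(b, a)

def hue_weighted_cost(target_lab, perceived_lab, hue_weight=4.0):
    """Toy cost favoring hue agreement over lightness/chroma agreement.

    Illustrative of matching perceptual attributes (especially hue)
    instead of minimizing plain color distance; the weights are
    hypothetical, not taken from the paper.
    """
    L1, a1, b1 = target_lab
    L2, a2, b2 = perceived_lab
    dh = lab_hue(a1, b1) - lab_hue(a2, b2)
    dh = math.atan2(math.sin(dh), math.cos(dh))   # wrap to [-pi, pi]
    dL = L1 - L2                                  # lightness difference
    dC = math.hypot(a1, b1) - math.hypot(a2, b2)  # chroma difference
    return hue_weight * dh * dh + 0.01 * dL * dL + 0.01 * dC * dC
```

Under such a cost, a perceived anaglyph color with the right hue but exaggerated chroma scores better than one at the same Euclidean distance with a rotated hue, which is the behavior the abstract's hue-first matching aims for.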

Dissertations / Theses on the topic "Virtual retinal display"

1

Lindhoff, Mattias. "Är tiden inne för virtual reality i hemmet? - En experimentell studie av virtual reality med 3D och head tracking" [Is the time right for virtual reality in the home? An experimental study of virtual reality with 3D and head tracking]. Thesis, Malmö högskola, Fakulteten för teknik och samhälle (TS), 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-20372.

Abstract:
Through the years, interest in and focus on entertainment in three-dimensional form (3D) has come in waves. Today, most cinemas can show 3D movies. In addition, an increasing amount of technology for 3D at home has become available in recent years. A number of new, more immersive and intuitive input devices with elements of virtual reality for home use have also been introduced. The companies Sony, Nintendo and Microsoft have all launched various types of such advanced input technology for their game consoles. These techniques contribute in various ways to more immersive entertainment. In the way many of these technical solutions are used today, however, interaction is still limited by the requirement of standing and looking in a specific direction. The report describes an experimental study that aims to explore the feasibility of using commercially obtainable means to create immersive virtual reality for home entertainment that is portable and wearable. It first explains the basics of human depth perception and how different 3D displays work, then turns to virtual reality and the importance of a high level of immersion in this context. With regard to virtual reality, the starting point is more theoretical, to give an idea of the direction in which development is heading; this part therefore goes further than the experiment covers, because that theory has not yet been fully applied in practice. The hypothesis for the experiment is that head tracking, in the form of a head-mounted motion sensor that detects the orientation of the head, may in part be a solution to the problem of immersion, since the user is then not bound to a specific location. Finally, both theory and experiment are analyzed, and it is concluded that the VRD is a promising potential future technology. The hypothesis is partially confirmed, and the report culminates in a final reflection finding that technology for creating a higher level of immersion and VR at home is available, even though some additional work on data handling would be required to optimize it.

Book chapters on the topic "Virtual retinal display"

1

Kaczmarek, Kurt A., and Paul Bach-y-Rita. "Tactile Displays." In Virtual Environments and Advanced Interface Design. Oxford University Press, 1995. http://dx.doi.org/10.1093/oso/9780195075557.003.0019.

Abstract:
The average adult has approximately 2 m² of skin (Gibson, 1968), about 90% hairy and the remainder smooth or glabrous. Although the glabrous areas are more sensitive than the hairy, both types are highly innervated with sensory receptors and nerves (Sinclair, 1981). Tactile displays have utilized both glabrous and hairy skin, the type selected being relative to the sensory display needs of the various investigators. There are several advantages to selecting the skin as the sensory surface to receive information. (1) It is accessible, extensive in area, richly innervated, and capable of precise discrimination. Further, when the skin of the forehead or trunk is used, the tactile display system does not interfere materially with motor or other sensory functions. (2) The skin shows a number of functional similarities to the retina of the eye in its capacity to mediate information. Large parts of the body surface are relatively flat, and the receptor surfaces of the skin, like the retina, are capable of mediating displays in two spatial dimensions as well as having the potential for temporal integration (summation over time). Thus, there is generally no need for complex topological transformation or for temporal coding of pictorial information for direct presentation onto the accessible areas of the skin, although temporal display factors have been explored with the goal of transmitting spatial information across the skin more quickly than is possible with present systems (Kaczmarek et al., 1984; Bach-y-Rita and Hughes, 1985; Kaczmarek et al., 1985; Loomis and Lederman, 1986). Spatial patterns learned visually can be identified tactually, and vice versa (Epstein et al., 1989; Hughes et al., 1990). (3) Certain types of sensory inhibition, including the Mach band phenomenon and other examples of lateral inhibition originally demonstrated for vision, are equally demonstrable in the skin (Bekesy, 1967).
(4) Finally, there is evidence that the skin normally functions as an exteroceptor at least in a limited sense: Katz noted that to some extent both vibration and temperature changes can be felt at a distance (Krueger, 1970). For example, a blind person can “feel” the approach of a warm cylinder at three times the distance required by the sighted individual (Krueger, 1970).

Conference papers on the topic "Virtual retinal display"

1

"Virtual retinal display technology." In 17th DASC. AIAA/IEEE/SAE Digital Avionics Systems Conference. Proceedings. IEEE, 1998. http://dx.doi.org/10.1109/dasc.1998.741542.

2

Sun, Xiuping, Qin He, Yuling Feng, and KeCheng Feng. "Principle of helmet-mounted virtual retinal display." In Photonics Asia 2002, edited by Dahsiung Hsu, Jiabi Chen, and Yunlong Sheng. SPIE, 2002. http://dx.doi.org/10.1117/12.481487.

3

Kollin, Joel S., and Michael R. Tidwell. "Optical engineering challenges of the virtual retinal display." In SPIE's 1995 International Symposium on Optical Science, Engineering, and Instrumentation, edited by Jose M. Sasian. SPIE, 1995. http://dx.doi.org/10.1117/12.216403.

4

Kenyon, Anne, John van Rosendale, Samuel Fulcomer, and David Laidlaw. "The design of a retinal resolution fully immersive VR display." In 2014 IEEE Virtual Reality (VR). IEEE, 2014. http://dx.doi.org/10.1109/vr.2014.6802065.

5

Jang, Changwon, Kiseung Bang, Jonghyun Kim, Youngmo Jeong, and Byoungho Lee. "Full color virtual retinal display using a holographic optical element." In 3D Image Acquisition and Display: Technology, Perception and Applications. Washington, D.C.: OSA, 2017. http://dx.doi.org/10.1364/3d.2017.jtu5a.32.

6

Zhang, Wenbo, Chao Ping Chen, Yang Li, Bing Yu, Nizamuddin Maitlo, Lantian Mi, and Yuanchao Zhou. "A retinal-projection-based near-eye display for virtual reality." In Digital Optics for Immersive Displays (DOID18), edited by Wolfgang Osten, Hagen Stolle, and Bernard C. Kress. SPIE, 2018. http://dx.doi.org/10.1117/12.2315672.

7

Xu, Mohan, and Hong Hua. "Method for evaluating 3D display systems based on perceived retinal image." In Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), edited by Bernard C. Kress and Christophe Peroz. SPIE, 2020. http://dx.doi.org/10.1117/12.2543416.

8

Mi, Lantian, Chao Ping Chen, Wenbo Zhang, Jie Chen, Yuan Liu, and Changzhao Zhu. "A retinal-scanning-based near-eye display with diffractive optical element." In Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), edited by Bernard C. Kress and Christophe Peroz. SPIE, 2020. http://dx.doi.org/10.1117/12.2546941.

9

Vu, Con Tran, Simon Stock, Lintao T. Fan, and Wilhelm Stork. "Highly parallelized rendering of the retinal image through a computer-simulated human eye for the design of virtual reality head-mounted displays." In Optics, Photonics and Digital Technologies for Imaging Applications VI, edited by Peter Schelkens and Tomasz Kozacki. SPIE, 2020. http://dx.doi.org/10.1117/12.2555872.


Reports on the topic "Virtual retinal display"

1

Rash, Clarence E., Thomas H. Harding, John S. Martin, and Howard H. Beasley. Concept Phase Evaluation of the Microvision, Inc. Aircrew Integrated Helmet System HGU-56P Virtual Retinal Display. Fort Belvoir, VA: Defense Technical Information Center, August 1999. http://dx.doi.org/10.21236/ada367318.

