Journal articles on the topic 'Virtual retinal display'

Consult the top 29 journal articles for your research on the topic 'Virtual retinal display.'

1

Pryor, Homer L., Thomas A. Furness, and Erik Viirre. "Demonstration of the Virtual Retinal Display: A New Display Technology Using Scanned Laser Light." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 42, no. 16 (October 1998): 1149. http://dx.doi.org/10.1177/154193129804201609.

Abstract:
The Virtual Retinal Display (VRD) is a new display technology that scans modulated low-energy laser light directly onto the viewer's retina to create the perception of a virtual image. This approach provides an unprecedented way to stream photons to the receptors of the eye, affording higher resolution, increased luminance, and potentially a wider field of view than previously possible in head-coupled displays. The VRD uses video signals from a graphics board or a video camera to modulate low-power coherent light from a red laser diode. A mechanical resonant scanner and galvanometer mirror then scan the photon stream from the laser diode in two dimensions through reflective elements and a semitransparent combiner such that a raster of light is imaged on the retina. The pixels produced on the retina have no persistence, yet they create the perception of a brilliant, full-color, flicker-free virtual image. Developmental models of the VRD have been shown to produce VGA and SVGA image quality. This demonstration exhibits the portable monochrome VRD.
2

Pryor, Homer L., Thomas A. Furness, and Erik Viirre. "The Virtual Retinal Display: A new Display Technology using Scanned Laser Light." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 42, no. 22 (October 1998): 1570–74. http://dx.doi.org/10.1177/154193129804202208.

Abstract:
The Virtual Retinal Display (VRD) is a new display technology that scans modulated low-energy laser light directly onto the viewer's retina to create the perception of a virtual image. This approach provides an unprecedented way to stream photons to the receptors of the eye, affording higher resolution, increased luminance, and potentially a wider field of view than previously possible in head-coupled displays. The VRD uses video signals from a graphics board or a video camera to modulate low-power coherent light from red, green, and blue photon sources such as gas lasers, laser diodes, and/or light-emitting diodes. The modulated light is then combined and piped through a single-mode optical fiber. A mechanical resonant scanner and galvanometer mirror then scan the photon stream from the fiber in two dimensions through reflective elements and a semitransparent combiner such that a raster of light is imaged on the retina. The pixels produced on the retina have no persistence, yet they create the perception of a brilliant, full-color, flicker-free virtual image. Developmental models of the VRD have been shown to produce VGA and SVGA image quality. This paper describes the VRD technology, the advantages that it provides, and areas of human factors research ensuing from scanning light directly onto the retina. Future applications of the VRD are discussed along with new research findings regarding the use of the VRD for people with low vision.
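The scanning geometry described in this abstract (a resonant scanner for the fast axis, a galvanometer for the slow axis) implies some simple timing arithmetic. A minimal sketch, assuming SVGA resolution (800 × 600) at 60 Hz and a bidirectional fast scan; these numbers are illustrative assumptions, not figures from the paper:

```python
# Back-of-the-envelope raster timing for a scanned-laser display.
# Assumed parameters: SVGA (800x600), 60 Hz refresh, bidirectional fast scan.

def raster_timing(h_pixels, v_lines, refresh_hz, bidirectional=True):
    line_rate_hz = v_lines * refresh_hz       # horizontal lines drawn per second
    # A resonant mirror that draws pixels on both sweep directions
    # only needs to oscillate at half the line rate.
    mirror_hz = line_rate_hz / 2 if bidirectional else line_rate_hz
    pixel_clock_hz = h_pixels * line_rate_hz  # laser modulation rate
    return line_rate_hz, mirror_hz, pixel_clock_hz

line_rate, mirror, pclk = raster_timing(800, 600, 60)
# 36,000 lines/s, an 18 kHz resonant mirror, and a ~28.8 MHz pixel clock
```

The pixel clock is what the laser-diode modulator must sustain; the mirror frequency is what the mechanical resonant scanner must reach.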
3

Karmuse, Sachin Mohan, and Arun L. Kakhandki. "A Review on Real Time Heart Rate Monitoring System using USB camera." International Journal of Engineering and Computer Science 9, no. 2 (February 3, 2020): 24934–39. http://dx.doi.org/10.18535/ijecs/v9i2.4434.

Abstract:
Technological advancement nowadays moves at an ever faster pace. The latest display technology, the touch-screen display commonly used in our smartphones and tablet computers, may become mere history in the near future. Lack of space is one of the major problems faced by screen displays. This emerging new display technology will replace the touch-screen environment and solve these problems at a higher level, making life more comfortable. The main aim of the screenless display is to display or transmit information without the help of a screen or projector. Using such a display, we can project images directly onto the human retina, into open space, and even to the human brain. It avoids the need for heavy hardware and provides a high degree of privacy. The field began to progress in 2013 with the arrival of products such as holographic videos, virtual reality headsets, retinal displays, mobiles for the elderly, eye tap, etc. At present, only part of screenless display technology has been realized, which means that more advancement is necessary for the technology to take off. This work will surely provide a pathway for screenless displays.
4

Korot, Edward, Aristomenis Thanos, Bozho Todorich, Prethy Rao, Maxwell S. Stem, and George A. Williams. "Use of the Avegant Glyph Head-Mounted Virtual Retinal Projection Display to Perform Vitreoretinal Surgery." Journal of VitreoRetinal Diseases 2, no. 1 (November 10, 2017): 22–25. http://dx.doi.org/10.1177/2474126417738613.

Abstract:
Objective: To evaluate the use of a novel retinal projection display in vitreoretinal surgery. Methods: The Avegant Glyph virtual retinal display, which uses a light-emitting diode and micromirror array to project directly onto the retinas of the user, was evaluated. This unit was modified for better operating room characteristics. It was evaluated by 6 surgeons performing mock vitreoretinal surgeries. Results: The majority reported high 3-dimensional (3-D) depth rendition, little hindrance to communication, and high confidence to perform procedures. Due to a small ocular size, surgeons conveyed that the Glyph provides a novel enhanced view for performing procedures benefiting from simultaneous intra- and extraocular visualization such as scleral depression. Safety analysis by performing fundus autofluorescence after 2 hours of Glyph operation did not reveal any gross qualitative change. Conclusion: Use of the Avegant Glyph to perform vitreoretinal surgery may provide ergonomic advantages, while its visualization and high 3-D stereoscopic depth rendition instill high surgeon confidence to safely perform procedures. We are performing further studies with objective data to validate the potential of this technology.
5

Menozzi, M., H. Krueger, P. Lukowicz, and G. Tröster. "Netzhautanzeigesystem („virtual retinal display“) mit Knotenpunktabbildung eines Laserstrahles: Konstruktionsbeispiel und Bewertung des subjektiven Helligkeitseindruckes - Perception of Brightness with a Virtual Retinal Display Using Badal Projection." Biomedizinische Technik/Biomedical Engineering 46, no. 3 (2001): 55–62. http://dx.doi.org/10.1515/bmte.2001.46.3.55.

6

McQuaide, Sarah C., Eric J. Seibel, Robert Burstein, and Thomas A. Furness. "50.4: Three-dimensional Virtual Retinal Display System using a Deformable Membrane Mirror." SID Symposium Digest of Technical Papers 33, no. 1 (2002): 1324. http://dx.doi.org/10.1889/1.1830190.

7

Suthau, Tim, and Olaf Hellwich. "Accuracy analysis of superimposition on a virtual retinal display in computer-aided surgery." International Congress Series 1281 (May 2005): 1293. http://dx.doi.org/10.1016/j.ics.2005.03.204.

8

Oehme, Olaf, Ludger Schmidt, and Holger Luczak. "Comparison Between the Strain Indicator HRV of a Head-Based Virtual Retinal Display and LC-Head Mounted Displays for Augmented Reality." International Journal of Occupational Safety and Ergonomics 9, no. 4 (January 2003): 419–30. http://dx.doi.org/10.1080/10803548.2003.11076579.

9

Ellis, Stephen R., and Urs J. Bucher. "Distance Perception of Stereoscopically Presented Virtual Objects Optically Superimposed on Physical Objects by a Head-Mounted See-Through Display." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 38, no. 19 (October 1994): 1300–1304. http://dx.doi.org/10.1177/154193129403801911.

Abstract:
The influence of physically presented background stimuli on distance judgements to optically overlaid, stereoscopic virtual images has been studied using head-mounted stereoscopic, virtual image displays. Positioning an opaque physical object either at the perceived depth of the virtual image or at a position substantially in front of it has been observed to cause the virtual image to apparently move closer to the observer. In the case of physical objects positioned substantially in front of the virtual image, subjects often perceive the opaque object as transparent. Evidence is presented that the apparent change of position caused by interposition of the physical object is not influenced by the strengthening of occlusion cues but is influenced by motion of the physical objects, which would attract the subjects' ocular vergence. The observed effect appears to be associated with the relative conspicuousness of the overlaid virtual image and the background. This effect may be related to Foley's models of open-loop stereoscopic pointing errors, which attributed the stereoscopic distance errors to misjudgment of a reference point for interpretation of retinal disparities. Some implications for the design of see-through displays for manufacturing are also discussed briefly.
10

Qi, Min, Shanshan Cui, Qianmin Du, Yuelei Xu, and David F. McAllister. "Visual Fatigue Alleviating in Stereo Imaging of Anaglyphs by Reducing Retinal Rivalry and Color Distortion Based on Mobile Virtual Reality Technology." Wireless Communications and Mobile Computing 2021 (September 15, 2021): 1–10. http://dx.doi.org/10.1155/2021/1285712.

Abstract:
Stereoscopic display is the means of showing scenes in Virtual Reality (VR). As a type of stereo image, anaglyphs can be displayed not only on the screen but are currently the only form of stereo image that can be displayed on paper. However, their deficiencies, such as retinal rivalry and color distortion, can cause visual fatigue. To address this issue, an algorithm is proposed for anaglyph generation. Unlike previous studies that consider only one aspect, it considers both retinal rivalry and color distortion at the same time. The algorithm works in the CIE L*a*b* color space and focuses on matching the perceptual color attributes, especially the hue, rather than directly minimizing the sum of the distances between the perceived anaglyph color and the stereo image pair. In addition, the paper builds a relatively complete framework for generating anaglyphs, making it more controllable to adjust the parameters and choose the appropriate process. Subjective tests were conducted to compare the results with several anaglyph-generation techniques, including empirical methods and computational methods. Results show that the proposed algorithm performs well.
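For context, the simplest anaglyph construction, which this kind of work improves on, just takes the red channel from the left view and the green/blue channels from the right view. A minimal NumPy sketch of that naive baseline (the paper's CIE L*a*b* hue-matching method is considerably more involved; this is only the starting point it is designed to beat):

```python
import numpy as np

def naive_anaglyph(left_rgb, right_rgb):
    """Naive red/cyan anaglyph: red channel from the left-eye image,
    green and blue channels from the right-eye image. This baseline
    exhibits exactly the retinal rivalry and color distortion that
    perceptually motivated methods try to reduce."""
    out = right_rgb.copy()
    out[..., 0] = left_rgb[..., 0]   # red channel comes from the left view
    return out

# Tiny synthetic stereo pair: a red-ish left image and a green-ish right image.
left = np.zeros((2, 2, 3), dtype=np.uint8)
left[..., 0] = 200
right = np.zeros((2, 2, 3), dtype=np.uint8)
right[..., 1] = 150
ana = naive_anaglyph(left, right)
```

Viewed through red/cyan glasses, each eye then receives only "its" channel(s), which is also why strongly saturated colors in either view trigger rivalry with this baseline.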
11

D'Souza, Sandra, and N. Sriraam. "Design of EOG Signal Acquisition System Using Virtual Instrumentation." International Journal of Measurement Technologies and Instrumentation Engineering 4, no. 1 (January 2014): 1–16. http://dx.doi.org/10.4018/ijmtie.2014010101.

Abstract:
The design and development of cost-effective rehabilitation aids is a challenging task for the biomedical research community. Biopotentials such as EEG, EMG, ECG, and EOG generated by the human body help in controlling external electronic devices. In recent years, EOG-based assistive devices have gained importance in assisting paralyzed patients, due to their ability to perform operations controlled by eye movements. This paper proposes a cost-effective design and development of an EOG signal acquisition system using virtual instrumentation. The hardware design comprises two instrumentation amplifiers using the AD620 for registering horizontal and vertical eye movements, together with filter circuits. A virtual-instrumentation-based front panel is designed to interface with the hardware and to display the EOG signals. The resultant digitized EOG signal is further enhanced for driving assistive devices. The proposed EOG system makes use of virtual instrumentation and hence minimizes the design cost and increases the flexibility of the instrument. This paper presents the initial part of a research effort aimed at a complete, cost-effective assistive device based on extracting useful information from eye movements. The qualitative validation of the recorded EOG signals supports cost-effective healthcare delivery for rehabilitation applications.
12

Chinthammit, Winyu, Eric J. Seibel, and Thomas A. Furness. "A Shared-Aperture Tracking Display for Augmented Reality." Presence: Teleoperators and Virtual Environments 12, no. 1 (February 2003): 1–18. http://dx.doi.org/10.1162/105474603763835305.

Abstract:
The operation and performance of a six degree-of-freedom (DOF) shared-aperture tracking system with image overlay is described. This unique tracking technology shares the same aperture, or scanned optical beam, with the visual display, the virtual retinal display (VRD). This display technology provides high brightness in an AR helmet-mounted display, especially in the extreme environment of a military cockpit. The VRD generates an image by optically scanning visible light directly to the viewer's eye. By scanning both visible and infrared light, the head-worn display can be directly coupled to a head-tracking system. As a result, the proposed tracking system requires minimal calibration between the user's viewpoint and the tracker's viewpoint. This paper demonstrates that the proposed shared-aperture tracking system produces high accuracy and computational efficiency. The current proof-of-concept system has a precision of ±0.05 and ±0.01 deg. in the horizontal and vertical axes, respectively. The static registration error was measured to be 0.08 ± 0.04 and 0.03 ± 0.02 deg. for the horizontal and vertical axes, respectively. The dynamic registration error, or system latency, was measured to be within 16.67 ms, equivalent to our display refresh rate of 60 Hz. In all testing, the VRD was fixed and the calibrated motion of a robot arm was tracked. By moving the robot arm within a restricted volume, this real-time shared-aperture method of tracking was extended to six-DOF measurements. Future AR applications of our shared-aperture tracking and display system include highly accurate head tracking when the VRD is helmet mounted and worn within an enclosed space, such as an aircraft cockpit.
13

Sato, Hirotaro, Yuki Morimoto, Gerard B. Remijn, and Takeharu Seno. "Differences in Three Vection Indices (Latency, Duration, and Magnitude) Induced by “Camera-Moving” and “Object-Moving” in a Virtual Computer Graphics World, Despite Similarity in the Retinal Images." i-Perception 11, no. 5 (September 2020): 204166952095843. http://dx.doi.org/10.1177/2041669520958430.

Abstract:
To create a self-motion (vection) situation in three-dimensional computer graphics (CG), there are mainly two ways: moving a camera toward an object (“camera moving”) or moving the object and its surrounding environment toward the camera (“object moving”). As the two methods vary considerably in the amount of computation involved in generating CG, knowing how each method affects self-motion perception should be important to CG creators and psychologists. Here, we simulated self-motion in a virtual three-dimensional CG world, without stereoscopic disparity, which correctly reflected the lighting and glare. Self-motion was induced by “camera moving” or by “object moving,” which in the present experiments was done by moving a tunnel surrounding the camera toward the camera. This produced two retinal images that were virtually identical in Experiment 1 and very similar in Experiments 2 and 3. The stimuli were presented on a large plasma display to 15 naive participants and induced substantial vection. Three experiments comparing vection strength between the two methods found weak but significant differences. The results suggest that when creating CG visual experiences, “camera moving” induces stronger vection.
14

Kim, Nam-Gyoon, and Beom-Su Kim. "The Effect of Retinal Eccentricity on Visually Induced Motion Sickness and Postural Control." Applied Sciences 9, no. 9 (May 10, 2019): 1919. http://dx.doi.org/10.3390/app9091919.

Abstract:
The present study investigated the effect of retinal eccentricity on visually induced motion sickness (VIMS) and postural control. Participants wore a head-mounted display masked for the central 10° (peripheral vision), the peripheral except for the central 10° (central vision), or unmasked (control) to watch a highly immersive 3D virtual reality (VR) ride along China’s Great Wall. The Simulator Sickness Questionnaire was administered to assess VIMS symptoms before and after the VR exposure. In addition, postural sway data were collected via sensors attached to each participant’s head, torso, and hip. Results demonstrated that peripheral vision triggered the most severe symptoms of motion sickness, whereas full vision most perturbed posture. The latter finding contradicts previous research findings demonstrating the peripheral advantage of postural control. Although the source of compromised postural control under peripheral stimulation is not clear, the provocative nature of visual stimulation depicting a roller-coaster ride along a rugged path likely contributed to the contradictory findings. In contrast, motion sickness symptoms were least severe, and posture was most stable, under central vision. These findings provide empirical support for the tactic assumed by VR engineers who reduce the size of the field of view to ameliorate the symptoms of motion sickness.
15

Cohen, Michael A., Thomas L. Botch, and Caroline E. Robertson. "The limits of color awareness during active, real-world vision." Proceedings of the National Academy of Sciences 117, no. 24 (June 8, 2020): 13821–27. http://dx.doi.org/10.1073/pnas.1922294117.

Abstract:
Color ignites visual experience, imbuing the world with meaning, emotion, and richness. As soon as an observer opens their eyes, they have the immediate impression of a rich, colorful experience that encompasses their entire visual world. Here, we show that this impression is surprisingly inaccurate. We used head-mounted virtual reality (VR) to place observers in immersive, dynamic real-world environments, which they naturally explored via saccades and head turns. Meanwhile, we monitored their gaze with in-headset eye tracking and then systematically altered the visual environments such that only the parts of the scene they were looking at were presented in color and the rest of the scene (i.e., the visual periphery) was entirely desaturated. We found that observers were often completely unaware of these drastic alterations to their visual world. In the most extreme case, almost a third of observers failed to notice when less than 5% of the visual display was presented in color. This limitation on perceptual awareness could not be explained by retinal neuroanatomy or previous studies of peripheral visual processing using more traditional psychophysical approaches. In a second study, we measured color detection thresholds using a staircase procedure while a set of observers intentionally attended to the periphery. Still, we found that observers were unaware when a large portion of their field of view was desaturated. Together, these results show that during active, naturalistic viewing conditions, our intuitive sense of a rich, colorful visual world is largely incorrect.
16

Fathima, Naureen. "Diagnosing Chronic Glaucoma Using Watershed and Convolutional Neural Network." International Journal for Research in Applied Science and Engineering Technology 9, no. 9 (September 30, 2021): 272–83. http://dx.doi.org/10.22214/ijraset.2021.37860.

Abstract:
Glaucoma is a disease that affects the human eye's vision. It is regarded as an irreversible condition that causes eyesight degeneration. One of the most common causes of lifelong blindness is glaucoma in persons over the age of 40. Because of its trade-off between portability, size, and cost, fundus imaging is the most often utilised screening tool for glaucoma detection. Fundus imaging is a two-dimensional (2D) depiction of the three-dimensional (3D), semitransparent retinal tissues projected onto the imaging plane using reflected light. The plane that depicts the physical display screen through which a user perceives a virtual 3D scene is referred to as the "image plane". The bulk of current algorithms for autonomous glaucoma assessment using fundus images rely on handcrafted segmentation-based features, which are influenced by the segmentation method used and the retrieved features. Convolutional neural networks (CNNs) are known for, among other things, their ability to learn highly discriminative features from raw pixel intensities. This work describes a computational technique for detecting glaucoma automatically. The major goal is to use an image-processing technique to diagnose glaucoma from a fundus image as input. It trains datasets using a convolutional neural network (CNN). The watershed algorithm, one of the most widely used techniques in image processing, is used for segmentation. The following image-processing steps are performed: region of interest, morphological procedures, and segmentation. This technique can be used to determine whether or not a person has glaucoma.
17

Holmgren, Douglas E., and Warren Robinett. "Scanned Laser Displays for Virtual Reality: A Feasibility Study." Presence: Teleoperators and Virtual Environments 2, no. 3 (January 1993): 171–84. http://dx.doi.org/10.1162/pres.1993.2.3.171.

Abstract:
Technologies applicable toward a display system in which a laser is raster scanned on the viewer's retina are reviewed. The properties of laser beam propagation and the inherent resolution of a laser scanning system are discussed. Scanning techniques employing rotating mirrors, galvanometer scanners, acoustooptic deflectors, and piezoelectric deflectors are described. Resolution, speed, deflection range, and physical size are strongly coupled properties of these technologies. A radiometric analysis indicates that eye safety would not be a problem in a retina-scanning system. For head-mounted display applications, a monochromatic system employing a laser diode source with acoustooptic and galvanometer scanners is deemed most practical at the present time. A resolution of 1000 × 1000 pixels at 60 frames per second should be possible with such a monochromatic system using currently available off-the-shelf components. A full-color scanned-laser display suitable for head-mounted display use is not judged feasible to build at this time with off-the-shelf components.
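The 1000 × 1000 pixels at 60 frames/s figure quoted in this abstract fixes the bandwidth budget the deflectors and modulator must support. A quick check of that arithmetic (illustrative only; the per-pixel dwell time is derived, not a figure from the paper):

```python
# Pixel-rate budget for a scanned-laser display at a given resolution
# and frame rate. 1000x1000 at 60 Hz is the figure quoted in the abstract.

def pixel_budget(h_pixels, v_pixels, fps):
    pixels_per_s = h_pixels * v_pixels * fps  # addressing rate of the scanner
    dwell_ns = 1e9 / pixels_per_s             # time available per pixel, in ns
    return pixels_per_s, dwell_ns

rate, dwell = pixel_budget(1000, 1000, 60)
# 60 million pixels/s, i.e. roughly 16.7 ns of dwell time per pixel
```

That ~60 MHz addressing rate is why the study pairs a fast acousto-optic deflector on one axis with a slower galvanometer on the other.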
18

Neveu, Charles F., and Lawrence W. Stark. "The Virtual Lens." Presence: Teleoperators and Virtual Environments 7, no. 4 (August 1998): 370–81. http://dx.doi.org/10.1162/105474698565785.

Abstract:
We describe a new type of feedback display based upon ocular accommodation, called the virtual lens, that maintains a focused projection of a CRT image on the retina independent of changes in accommodation, and that replaces the optical image-processing action of the crystalline lens with an arbitrary computable image transform. We describe some applications of the virtual lens in visual psychophysics and virtual environments.
19

Traina, Giovanna, Gigliola Fontanesi, and Paola Bagnoli. "Maturation of somatostatin immunoreactivity in the pigeon retina: Morphological characterization and quantitative analysis." Visual Neuroscience 11, no. 1 (January 1994): 165–77. http://dx.doi.org/10.1017/s0952523800011202.

Abstract:
In addition to a modulatory function, somatostatin (SS) is likely to exert a morphogenetic and/or trophic role in the developing nervous system. In this study, a mouse monoclonal antibody directed to SS was used to investigate the posthatching development of SS-immunoreactivity (SS-ir) in the pigeon retina to provide a basis for a better understanding of the role of this peptide in retinal maturation. In the adult, SS-ir was observed in amacrine cells located in the inner nuclear layer (INL) of the entire retina. Two cell types were recognized according to their morphology. They showed a differential density distribution. Cell type indicated as “adult 1” (AD1) was characterized by pear-shaped cell bodies with single primary processes directed to the inner plexiform layer (IPL) and was mostly present in the red field. In contrast, cell type indicated as “adult 2” (AD2) was characterized by round-shaped somata with 1–3 primary processes and was highly represented in the fovea and the dorsal periphery. Posthatching maturation of the pigeon retina was characterized by drastic changes in the pattern of SS-ir. Over the first days posthatching, SS-ir was observed in sparsely distributed somata mostly located in the ganglion cell layer (GCL). This cell type indicated as “hatch” (H) was characterized by dense granular staining and became extremely rare at 7 days. Over the same period, growing SS-positive axons displaying enlarged growth cones were found in the optic tract (TrO). These observations suggest the possibility that ganglion cells transiently expressing SS are present at early stages of posthatching development. Of the two types of SS-containing cells observed in the adult, the first to be recognized morphologically was cell type AD1, which appeared at 2 days after hatching in the INL. These cells were virtually adult-like in morphology by 7 days. In contrast, cell type AD2 was not apparent until 7 days posthatching. The density (defined as number of cells/mm2 of retinal tissue) and the total number of SS-containing cells changed during posthatching maturation. In particular, the adult number of cell type AD1 was reached at about 10 days, while the number of cell type AD2 was reached at about 3 weeks posthatching. At this stage, both cell types also displayed their mature density distribution. The present findings suggest a temporal relationship between the maturation of SS-ir and developmental events which include the onset of light-driven activity and the maturation of retinal acuity. Our results also demonstrate significant differences in the pattern of SS-ir between the avian and the mammalian retina and suggest that SS-containing neurons represent important intraretinal association neurons in the retina of birds.
20

Teubl, Fernando, Marcio Cabral, Marcelo Zuffo, and Celso Kurashima. "Analysis of a Scalable Multi-Projector System for Virtual Reality Environments." International Journal of Virtual Reality 12, no. 1 (January 1, 2013): 15–29. http://dx.doi.org/10.20870/ijvr.2013.12.1.2855.

Abstract:
Virtual reality environments with multi-projector systems provide better visual quality, higher resolution, and more brightness than traditional single-projector systems. Moreover, using multiple low-cost projectors is economically advantageous in comparison to an expensive high-end projector of equivalent visual performance. This article presents the research and development of a scalable multi-projection system that enables the construction of virtual reality systems with a large number of projectors and graphics computers, and that is capable of achieving a high-resolution display. We demonstrate the viability of such a system with the development of a camera-based multi-projector system library called FastFusion, which automatically calibrates casually aligned projectors to properly blend different projections. Our system software improves known algorithms in the literature for projector calibration and image blending. As a result, FastFusion improves system scalability and calibration reliability. In a detailed analysis of the visual performance of FastFusion in a CAVE system with three walls, eighteen projectors, and nine computers, we achieved a satisfactory result for variance in geometric calibration and for graphics performance. Thus, our library is suitable for building complex projector systems with retina resolution.
21

Bringmann, A., S. Schopf, and A. Reichenbach. "Developmental Regulation of Calcium Channel-Mediated Currents in Retinal Glial (Müller) Cells." Journal of Neurophysiology 84, no. 6 (December 1, 2000): 2975–83. http://dx.doi.org/10.1152/jn.2000.84.6.2975.

Abstract:
Whole cell voltage-clamp recordings of freshly isolated cells were used to study changes in the currents through voltage-gated Ca2+ channels during the postnatal development of immature radial glial cells into Müller cells of the rabbit retina. Using Ba2+ or Ca2+ ions as charge carriers, currents through transient low-voltage-activated (LVA) Ca2+ channels were recorded in cells from early postnatal stages, with an activation threshold at −60 mV and a peak current at −25 mV. To increase the amplitude of currents through Ca2+ channels, Na+ ions were used as the main charge carriers, and currents were recorded in divalent cation-free bath solutions. Currents through transient LVA Ca2+ channels were found in all radial glial cells from retinae between postnatal days 2 and 37. The currents activated at potentials positive to −80 mV and displayed a maximum at −40 mV. The amplitude of LVA currents increased during the first postnatal week; after postnatal day 6, the amplitude remained virtually constant. The density of LVA currents was highest at early postnatal days (days 2–5: 13 pA/pF) and decreased to a stable, moderate level within the first three postnatal weeks (3 pA/pF). A significant expression of currents through sustained, high-voltage-activated Ca2+ channels was found after the third postnatal week in ∼25% of the investigated cells. The early and sole expression of transient currents at high density may suggest that LVA Ca2+ channels are involved in early developmental processes of rabbit Müller cells.
22

Schietroma, Cataldo, Karine Parain, Amrit Estivalet, Asadollah Aghaie, Jacques Boutet de Monvel, Serge Picaud, José-Alain Sahel, Muriel Perron, Aziz El-Amraoui, and Christine Petit. "Usher syndrome type 1–associated cadherins shape the photoreceptor outer segment." Journal of Cell Biology 216, no. 6 (May 11, 2017): 1849–64. http://dx.doi.org/10.1083/jcb.201612030.

Full text
Abstract:
Usher syndrome type 1 (USH1) causes combined hearing and sight defects, but how mutations in USH1 genes lead to retinal dystrophy in patients remains elusive. The USH1 protein complex is associated with calyceal processes, which are microvilli of unknown function surrounding the base of the photoreceptor outer segment. We show that in Xenopus tropicalis, these processes are connected to the outer-segment membrane by links composed of protocadherin-15 (USH1F protein). Protocadherin-15 deficiency, obtained by a knockdown approach, leads to impaired photoreceptor function and abnormally shaped photoreceptor outer segments. Rod basal outer disks displayed excessive outgrowth, and cone outer segments were curved, with lamellae of heterogeneous sizes, defects also observed upon knockdown of Cdh23, encoding cadherin-23 (USH1D protein). The calyceal processes were virtually absent in cones and displayed markedly reduced F-actin content in rods, suggesting that protocadherin-15–containing links are essential for their development and/or maintenance. We propose that calyceal processes, together with their associated links, control the sizing of rod disks and cone lamellae throughout their daily renewal.
APA, Harvard, Vancouver, ISO, and other styles
23

Guimarães, Marília Zaluar P., and Jan Nora Hokoç. "Tyrosine hydroxylase expression in the Cebus monkey retina." Visual Neuroscience 14, no. 4 (July 1997): 705–15. http://dx.doi.org/10.1017/s0952523800012669.

Full text
Abstract:
Tyrosine hydroxylase (TH) expression was used as a marker to study the dopaminergic cells in the Cebus monkey retina. Two types of dopaminergic cells were identified by cell body size and location, level of arborization in the inner plexiform layer, and amount of immunolabeling. Type 1 cells displayed intense immunoreactivity and larger somata (12–24 μm) located in the inner nuclear layer or ganglion cell layer, whereas type 2 had smaller cell bodies (8–14 μm) found either in the inner plexiform layer or ganglion cell layer and were more faintly labeled. Interplexiform cells were characterized as type 1 dopaminergic cells. Immunoreactive axon-like processes were seen in the nerve fiber layer, and a net of fibers was visible in the foveal pit and in the extreme periphery of the retina. The population of TH+ cells was most numerous in the temporal superior quadrant and its density peaked at 1–2 mm from the fovea. Type 1 TH+ cells were more numerous than type 2 cells at any eccentricity. Along the horizontal meridian, type 1 cell density was slightly higher in temporal (29 cells/mm2) than in nasal (25 cells/mm2) retina, while type 2 cells had a homogeneous distribution (4.5 cells/mm2). Along the vertical meridian, type 1 cells reached lower peak density (average 17.7 cells/mm2) in the inferior retina (central 4 mm), compared to the superior portion (23.7 cells/mm2). Type 2 cell density varied from 4.5 cells/mm2 in the superior region to 9.4 cells/mm2 in the inferior region. The spatial density of the two cell types varied approximately inversely while the total density of TH+ cells was virtually constant across the retina. No correlation between dopaminergic cells and rod distribution was found. However, we suggest that dopaminergic cells could have a role in mesopic and/or photopic vision in this species, since TH+ fibers are present in cone-dominated regions like the foveola and extreme nasal periphery.
APA, Harvard, Vancouver, ISO, and other styles
24

Carthon, Bradley C., Carola A. Neumann, Manjusri Das, Basil Pawlyk, Tiansen Li, Yan Geng, and Piotr Sicinski. "Genetic Replacement of Cyclin D1 Function in Mouse Development by Cyclin D2." Molecular and Cellular Biology 25, no. 3 (February 1, 2005): 1081–88. http://dx.doi.org/10.1128/mcb.25.3.1081-1088.2005.

Full text
Abstract:
D cyclins (D1, D2, and D3) are components of the core cell cycle machinery in mammalian cells. It is unclear whether each of the D cyclins performs unique, tissue-specific functions or the three proteins have virtually identical functions and differ mainly in their pattern of expression. We previously generated mice lacking cyclin D1, and we observed that these animals displayed hypoplastic retinas and underdeveloped mammary glands and presented a developmental neurological abnormality. We now asked whether the specific requirement for cyclin D1 in these tissues reflected a unique pattern of D cyclin expression or the presence of specialized functions for cyclin D1 in cyclin D1-dependent compartments. We generated a knock-in strain of mice expressing cyclin D2 in place of D1. Cyclin D2 was able to drive nearly normal development of retinas and mammary glands, and it partially replaced cyclin D1's function in neurological development. We conclude that the differences between these two D cyclins lie mostly in the tissue-specific pattern of their expression. However, we propose that subtle differences between the two D cyclins do exist and they may allow D cyclins to function in a highly optimized fashion. We reason that the acquisition of multiple D cyclins may allow mammalian cells to drive optimal proliferation of a diverse array of cell types.
APA, Harvard, Vancouver, ISO, and other styles
25

Bilotta, Joseph, and Israel Abramov. "Orientation and Direction Tuning of Goldfish Ganglion Cells." Visual Neuroscience 2, no. 1 (January 1989): 3–13. http://dx.doi.org/10.1017/s0952523800004260.

Full text
Abstract:
Orientation and direction tuning were examined in goldfish ganglion cells by drifting sinusoidal gratings across the receptive field of the cell. Each ganglion cell was first classified as X-, Y- or W-like based on its responses to a contrast-reversal grating positioned at various spatial phases of the cell's receptive field. Sinusoidal gratings were drifted at different orientations and directions across the receptive field of the cell; spatial frequency and contrast of the grating were also varied. It was found that some X-like cells responded similarly to all orientations and directions, indicating that these cells had circular and symmetrical fields. Other X-like cells showed a preference for certain orientations at high spatial frequencies suggesting that these cells possess an elliptical center mechanism (since only the center mechanism is sensitive to high spatial frequencies). In virtually all cases, X-like cells were not directionally tuned. All but one Y-like cell displayed orientation tuning but, as with X-like cells, orientation tuning appeared only at high spatial frequencies. A substantial portion of these Y-like cells also showed a direction preference. This preference was dependent on spatial frequency but in a manner different from orientation tuning, suggesting that these two phenomena result from different mechanisms. All W-like cells possessed orientation and direction tuning, both of which depended on the spatial frequency of the stimulus. These results support past work which suggests that the center and surround components of retinal ganglion cell receptive fields are not necessarily circular or concentric, and that they may actually consist of smaller subareas.
APA, Harvard, Vancouver, ISO, and other styles
26

Sanchez Garcia, Melani, Rubén Martínez-Cantín, Jesús Bermúdez-Cameo, and José J. Guerrero. "Influence of field of view in visual prostheses design: Analysis with a VR system." Jornada de Jóvenes Investigadores del I3A 8 (December 22, 2020). http://dx.doi.org/10.26754/jjii3a.4880.

Full text
Abstract:
In this work, we evaluate the influence of field of view with respect to spatial resolution in visual prostheses. Our system uses a virtual-reality environment based on panoramic scenes and a head-mounted display which allows users to feel immersed in the scene. Results indicate that, for the design of retinal prostheses, it is better to concentrate the phosphenes in a small area, to maximize the angular resolution, even if that implies sacrificing field of view.
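The trade-off this abstract describes — a fixed phosphene budget spread over a wider field of view yields lower angular resolution — can be sketched numerically. The numbers below are illustrative assumptions, not values taken from the study:

```python
def phosphenes_per_degree(n_phosphenes, fov_degrees):
    """Linear phosphene density along one axis of a square grid."""
    per_axis = n_phosphenes ** 0.5        # e.g. 1024 phosphenes -> 32 x 32 grid
    return per_axis / fov_degrees

# Same 1024-phosphene implant, two hypothetical fields of view:
dense = phosphenes_per_degree(1024, 10)   # 32 phosphenes across 10 degrees
wide = phosphenes_per_degree(1024, 40)    # same grid stretched over 40 degrees
# dense = 3.2 phosphenes/deg, wide = 0.8 phosphenes/deg
```

Quadrupling the field of view cuts the angular resolution by the same factor, which is the sacrifice the authors argue against for retinal prosthesis design.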
APA, Harvard, Vancouver, ISO, and other styles
27

Chang, Chenliang, Wei Cui, Jongchan Park, and Liang Gao. "Computational holographic Maxwellian near-eye display with an expanded eyebox." Scientific Reports 9, no. 1 (December 2019). http://dx.doi.org/10.1038/s41598-019-55346-w.

Full text
Abstract:
The Maxwellian near-eye displays have attracted growing interest in various applications. By using a confined pupil, a Maxwellian display presents an all-in-focus image to the viewer where the image formed on the retina is independent of the optical power of the eye. Despite being a promising technique, current Maxwellian near-eye displays suffer from various limitations such as a small eyebox, a bulky setup and a high cost. To overcome these drawbacks, we present a holographic Maxwellian near-eye display based on computational imaging. By encoding a complex wavefront into amplitude-only signals, we can readily display the computed hologram on a widely-accessible device such as a liquid-crystal or digital light processing display, creating an all-in-focus virtual image augmented on the real-world objects. Additionally, to expand the eyebox, we multiplex the hologram with multiple off-axis plane waves, duplicating the pupils into an array. The resultant method features a compact form factor because it requires only one active electronic component, lending credence to its wearable applications.
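The two steps in this abstract — encoding a complex wavefront as a non-negative amplitude-only pattern via an off-axis carrier, and multiplexing several carriers to replicate the pupil — can be sketched generically. This is not the authors' implementation; the toy wavefront, grid size, and carrier frequencies are arbitrary assumptions:

```python
import numpy as np

N = 256
y, x = np.mgrid[0:N, 0:N] / N
U = np.exp(1j * 2 * np.pi * (x**2 + y**2))  # toy complex wavefront, |U| = 1

def amplitude_hologram(U, fx):
    """Off-axis amplitude encoding: bias plus interference with a plane-wave
    carrier of spatial frequency `fx` (cycles per pixel), giving values in [0, 1]."""
    carrier = np.exp(-1j * 2 * np.pi * fx * x * N)
    return 0.5 * (1 + np.real(U * carrier))

# Multiplexing several carrier frequencies duplicates the viewing pupil into
# an array, expanding the eyebox at the cost of efficiency per replica.
h = sum(amplitude_hologram(U, fx) for fx in (0.10, 0.15, 0.20)) / 3
```

On reconstruction, each carrier diffracts its copy of the image to a different pupil position; the bias term produces an undiffracted zero order that practical systems filter out.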
APA, Harvard, Vancouver, ISO, and other styles
28

Kim, Juno, Stephen Palmisano, Wilson Luu, and Shinichi Iwasaki. "Effects of Linear Visual-Vestibular Conflict on Presence, Perceived Scene Stability and Cybersickness in the Oculus Go and Oculus Quest." Frontiers in Virtual Reality 2 (April 29, 2021). http://dx.doi.org/10.3389/frvir.2021.582156.

Full text
Abstract:
Humans rely on multiple senses to perceive their self-motion in the real world. For example, a sideways linear head translation can be sensed either by lamellar optic flow of the visual scene projected on the retina of the eye or by stimulation of vestibular hair cell receptors found in the otolith macula of the inner ear. Mismatches in visual and vestibular information can induce cybersickness during head-mounted display (HMD) based virtual reality (VR). In this pilot study, participants were immersed in a virtual environment using two recent consumer-grade HMDs: the Oculus Go (3DOF angular only head tracking) and the Oculus Quest (6DOF angular and linear head tracking). On each trial they generated horizontal linear head oscillations along the interaural axis at a rate of 0.5 Hz. This head movement should generate greater sensory conflict when viewing the virtual environment on the Oculus Go (compared to the Quest) due to the absence of linear tracking. We found that perceived scene instability always increased with the degree of linear visual-vestibular conflict. However, cybersickness was not experienced by 7/14 participants, but was experienced by the remaining participants in at least one of the stereoscopic viewing conditions (six of whom also reported cybersickness in monoscopic viewing conditions). No statistical difference in spatial presence was found across conditions, suggesting that participants could tolerate considerable scene instability while retaining the feeling of being there in the virtual environment. Levels of perceived scene instability, spatial presence and cybersickness were found to be similar between the Oculus Go and the Oculus Quest with linear tracking disabled. The limited effect of linear coupling on cybersickness, compared with its strong effect on perceived scene instability, suggests that perceived scene instability may not always be associated with cybersickness. However, perceived scene instability does appear to provide explanatory power over the cybersickness observed in stereoscopic viewing conditions.
APA, Harvard, Vancouver, ISO, and other styles
29

Molina, Camilo A., Frank M. Phillips, Matthew W. Colman, Wilson Z. Ray, Majid Khan, Emanuele Orru’, Kornelis Poelstra, and Larry Khoo. "A cadaveric precision and accuracy analysis of augmented reality–mediated percutaneous pedicle implant insertion." Journal of Neurosurgery: Spine, October 2020, 1–9. http://dx.doi.org/10.3171/2020.6.spine20370.

Full text
Abstract:
OBJECTIVE: Augmented reality–mediated spine surgery (ARMSS) is a novel minimally invasive technology that has the potential to increase the efficiency, accuracy, and safety of conventional percutaneous pedicle screw insertion methods. Visual 3D spinal anatomical and 2D navigation images are directly projected onto the operator's retina and superimposed over the surgical field, eliminating field-of-vision and attention shifts to a remote display. The objective of this cadaveric study was to assess the accuracy and precision of percutaneous ARMSS pedicle implant insertion. METHODS: Instrumentation was placed in 5 cadaveric torsos via ARMSS with the xvision augmented reality head-mounted display (AR-HMD) platform at levels ranging from T5 to S1, for a total of 113 implants (93 pedicle screws and 20 Jamshidi needles). Postprocedural CT scans were graded by two independent neuroradiologists using the Gertzbein-Robbins scale (grades A–E) for clinical accuracy. Technical precision was calculated using superimposition analysis employing the Medical Image Interaction Toolkit to yield angular trajectory (°) and linear screw tip (mm) deviation between the virtual pedicle screw position and the actual pedicle screw position on postprocedural CT imaging. RESULTS: The overall implant insertion clinical accuracy achieved was 99.1%. Lumbosacral and thoracic clinical accuracies were 100% and 98.2%, respectively. Specifically, among all implants inserted, 112 were noted to be Gertzbein-Robbins grade A or B (99.12%), with only 1 medial Gertzbein-Robbins grade C breach (> 2-mm pedicle breach) in a thoracic pedicle at T9. Precision analysis of the inserted pedicle screws yielded a mean screw tip linear deviation of 1.98 mm (99% CI 1.74–2.22 mm) and a mean angular error of 1.29° (99% CI 1.11°–1.46°) from the projected trajectory. These data compare favorably with data from existing navigation platforms and with regulatory precision requirements mandating that linear and angular deviation be less than 3 mm (p < 0.01) and 3° (p < 0.01), respectively. CONCLUSIONS: Percutaneous ARMSS pedicle implant insertion is a technically feasible, accurate, and highly precise method.
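The precision statistics quoted in this abstract can be checked with a rough normal-approximation sketch. The back-solved standard deviation below is an assumption for illustration, not a value the authors report:

```python
import math

def ci99(mean, sd, n):
    """99% confidence interval for a mean, normal approximation (z = 2.576)."""
    half = 2.576 * sd / math.sqrt(n)
    return mean - half, mean + half

# Reported linear deviation: mean 1.98 mm, 99% CI (1.74, 2.22) over 93 screws.
# Back-solving the implied standard deviation from the CI half-width:
sd = (2.22 - 1.98) * math.sqrt(93) / 2.576
lo, hi = ci99(1.98, sd, 93)
# The entire interval sits below the 3 mm precision threshold cited above.
```

The same check applies to the angular error (mean 1.29°, 99% CI 1.11°–1.46°) against the 3° threshold.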
APA, Harvard, Vancouver, ISO, and other styles