
Journal articles on the topic 'Egocentric vision'



Consult the top 50 journal articles for your research on the topic 'Egocentric vision.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Swanston, Michael T., Nicholas J. Wade, and Ross H. Day. "The Representation of Uniform Motion in Vision." Perception 16, no. 2 (1987): 143–59. http://dx.doi.org/10.1068/p160143.

Abstract:
For veridical detection of object motion any moving detecting system must allocate motion appropriately between itself and objects in space. A model for such allocation is developed for simplified situations (points of light in uniform motion in a frontoparallel plane). It is proposed that motion of objects is registered and represented successively at four levels within frames of reference that are defined by the detectors themselves or by their movements. The four levels are referred to as retinocentric, orbitocentric, egocentric, and geocentric. Thus the retinocentric signal is combined wit
2

Pouget, Alexandre, Stephen A. Fisher, and Terrence J. Sejnowski. "Egocentric Spatial Representation in Early Vision." Journal of Cognitive Neuroscience 5, no. 2 (1993): 150–61. http://dx.doi.org/10.1162/jocn.1993.5.2.150.

Abstract:
Recent physiological experiments have shown that the responses of many neurons in V1 and V3a are modulated by the direction of gaze. We have developed a neural network model of the hierarchy of maps in visual cortex to explore the hypothesis that visual features are encoded in egocentric (spatio-topic) coordinates at early stages of visual processing. Most psychophysical studies that have attempted to examine this question have concluded that features are represented in retinal coordinates, but the interpretation of these experiments does not preclude the type of retinospatiotopic representati
3

Alletto, Stefano, Giuseppe Serra, Simone Calderara, and Rita Cucchiara. "Understanding social relationships in egocentric vision." Pattern Recognition 48, no. 12 (2015): 4082–96. http://dx.doi.org/10.1016/j.patcog.2015.06.006.

4

Núñez-Marcos, Adrián, Gorka Azkune, and Ignacio Arganda-Carreras. "Egocentric Vision-based Action Recognition: A survey." Neurocomputing 472 (February 2022): 175–97. http://dx.doi.org/10.1016/j.neucom.2021.11.081.

5

Papadakis, Antonios, and Evaggelos Spyrou. "A Multi-modal Egocentric Activity Recognition Approach towards Video Domain Generalization." Sensors 24, no. 8 (2024): 2491. http://dx.doi.org/10.3390/s24082491.

Abstract:
Egocentric activity recognition is a prominent computer vision task that is based on the use of wearable cameras. Since egocentric videos are captured through the perspective of the person wearing the camera, her/his body motions severely complicate the video content, imposing several challenges. In this work we propose a novel approach for domain-generalized egocentric human activity recognition. Typical approaches use a large amount of training data, aiming to cover all possible variants of each action. Moreover, several recent approaches have attempted to handle discrepancies between domain
6

Demianenko, Svitlana. "ДОСЛІДЖЕННЯ ЕГОЦЕНТРИЧНОГО МОВЛЕННЯ ДІТЕЙ СТАРШОГО ДОШКІЛЬНОГО ВІКУ" [A Study of the Egocentric Speech of Older Preschool Children]. Psycholinguistics in a Modern World 15 (December 25, 2020): 68–71. http://dx.doi.org/10.31470/2706-7904-2020-15-68-71.

Abstract:
In this work, one of the types of speech of older preschool children - egocentric - is analyzed. Scientific approaches to the vision of the problem of egocentric speech are clarified. The egocentric speech of older preschoolers was studied, the content of which is connected only with the utterances of children of metalanguage. The obtained empirical data once again convincingly proved the spontaneity of children’s metalanguage and confirmed the need to develop different types of utterances of older preschoolers, to achieve success in communication and adequate utterances, including about units
7

Demianenko, Svitlana. "ДОСЛІДЖЕННЯ ЕГОЦЕНТРИЧНОГО МОВЛЕННЯ ДІТЕЙ СТАРШОГО ДОШКІЛЬНОГО ВІКУ" [A Study of the Egocentric Speech of Older Preschool Children]. Psycholinguistics in a Modern World 15 (December 25, 2020): 68–71. http://dx.doi.org/10.31470/2706-7904-2020-15-68-71.

Abstract:
In this work, one of the types of speech of older preschool children - egocentric - is analyzed. Scientific approaches to the vision of the problem of egocentric speech are clarified. The egocentric speech of older preschoolers was studied, the content of which is connected only with the utterances of children of metalanguage. The obtained empirical data once again convincingly proved the spontaneity of children’s metalanguage and confirmed the need to develop different types of utterances of older preschoolers, to achieve success in communication and adequate utterances, including about units
8

Girase, Sheetal, and Mangesh Bedekar. "Understanding First-Person and Third-Person Videos in Computer Vision." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 9s (2023): 263–71. http://dx.doi.org/10.17762/ijritcc.v11i9s.7420.

Abstract:
Due to advancements in technology and social media, a large amount of visual information is created. There is a lot of interesting research going on in Computer Vision that takes into consideration visual information generated by either first-person (egocentric) or third-person (exocentric) cameras. Video data generated by YouTubers, surveillance cameras, and drones is referred to as third-person or exocentric video data, whereas first-person or egocentric data is generated by devices such as GoPro cameras and Google Glass. Exocentric views capture wide and global views, whereas egocentric view
9

Tarnutzer, Alexander A., Christopher J. Bockisch, Itsaso Olasagasti, and Dominik Straumann. "Egocentric and allocentric alignment tasks are affected by otolith input." Journal of Neurophysiology 107, no. 11 (2012): 3095–106. http://dx.doi.org/10.1152/jn.00724.2010.

Abstract:
Gravicentric visual alignments become less precise when the head is roll-tilted relative to gravity, which is most likely due to decreasing otolith sensitivity. To align a luminous line with the perceived gravity vector (gravicentric task) or the perceived body-longitudinal axis (egocentric task), the roll orientation of the line on the retina and the torsional position of the eyes relative to the head must be integrated to obtain the line orientation relative to the head. Whether otolith input contributes to egocentric tasks and whether the modulation of variability is restricted to vision-de
10

Coluccia, Emanuele, Irene C. Mammarella, Rossana De Beni, Miriam Ittyerah, and Cesare Cornoldi. "Remembering Object Position in the Absence of Vision: Egocentric, Allocentric, and Egocentric Decentred Frames of Reference." Perception 36, no. 6 (2007): 850–64. http://dx.doi.org/10.1068/p5621.

11

Alletto, Stefano, Davide Abati, Giuseppe Serra, and Rita Cucchiara. "Exploring Architectural Details Through a Wearable Egocentric Vision Device." Sensors 16, no. 2 (2016): 237. http://dx.doi.org/10.3390/s16020237.

12

Ooi, Sho, Tsuyoshi Ikegaya, and Mutsuo Sano. "Cooking Behavior Recognition Using Egocentric Vision for Cooking Navigation." Journal of Robotics and Mechatronics 29, no. 4 (2017): 728–36. http://dx.doi.org/10.20965/jrm.2017.p0728.

Abstract:
This paper presents a cooking behavior recognition method for achievement of a cooking navigation system. A cooking navigation system is a system that recognizes the progress of a user in cooking, and accordingly presents an appropriate recipe, thus supporting the activity. In other words, an appropriate recognition of cooking behaviors is required. Among the various cooking behavior recognition methods, such as the use of context with the object being focused on and use of information in the line of sight, we have so far attempted cooking behavior recognition using a method that focuses on th
13

Morerio, Pietro, Gabriel Claudiu Georgiu, Lucio Marcenaro, and Carlo Regazzoni. "Optimizing Superpixel Clustering for Real-Time Egocentric-Vision Applications." IEEE Signal Processing Letters 22, no. 4 (2015): 469–73. http://dx.doi.org/10.1109/lsp.2014.2362852.

14

Dimiccoli, Mariella, Cathal Gurrin, David Crandall, Xavier Giró-i-Nieto, and Petia Radeva. "Introduction to the special issue: Egocentric Vision and Lifelogging." Journal of Visual Communication and Image Representation 55 (August 2018): 352–53. http://dx.doi.org/10.1016/j.jvcir.2018.06.010.

15

Kutbi, Mohammed, Xiaoxue Du, Yizhe Chang, et al. "Usability Studies of an Egocentric Vision-Based Robotic Wheelchair." ACM Transactions on Human-Robot Interaction 10, no. 1 (2020): 1–23. http://dx.doi.org/10.1145/3399434.

16

Damen, Dima, Hazel Doughty, Giovanni Maria Farinella, et al. "Rescaling Egocentric Vision: Collection, Pipeline and Challenges for EPIC-KITCHENS-100." International Journal of Computer Vision 130, no. 1 (2021): 33–55. http://dx.doi.org/10.1007/s11263-021-01531-2.

Abstract:
This paper introduces the pipeline to extend the largest dataset in egocentric vision, EPIC-KITCHENS. The effort culminates in EPIC-KITCHENS-100, a collection of 100 hours, 20M frames, 90K actions in 700 variable-length videos, capturing long-term unscripted activities in 45 environments, using head-mounted cameras. Compared to its previous version (Damen in Scaling egocentric vision: ECCV, 2018), EPIC-KITCHENS-100 has been annotated using a novel pipeline that allows denser (54% more actions per minute) and more complete annotations of fine-grained actions (+128% more action segments)
17

Tatler, Benjamin W., and Michael F. Land. "Vision and the representation of the surroundings in spatial memory." Philosophical Transactions of the Royal Society B: Biological Sciences 366, no. 1564 (2011): 596–610. http://dx.doi.org/10.1098/rstb.2010.0188.

Abstract:
One of the paradoxes of vision is that the world as it appears to us and the image on the retina at any moment are not much like each other. The visual world seems to be extensive and continuous across time. However, the manner in which we sample the visual environment is neither extensive nor continuous. How does the brain reconcile these differences? Here, we consider existing evidence from both static and dynamic viewing paradigms together with the logical requirements of any representational scheme that would be able to support active behaviour. While static scene viewing paradigms favour
18

Philbeck, John W. "Visually Directed Walking to Briefly Glimpsed Targets is not Biased toward Fixation Location." Perception 29, no. 3 (2000): 259–72. http://dx.doi.org/10.1068/p3036.

Abstract:
When observers indicate the magnitude of a previously viewed spatial extent by walking without vision to each endpoint, there is little evidence of the perceptual collapse in depth associated with some other methods (eg visual matching). One explanation is that both walking and matching are perceptually mediated, but that the perceived layout is task-dependent. In this view, perceived depth beyond 2–3 m is typically distorted by an equidistance effect, whereby the egocentric distances of nonfixated portions of the depth interval are perceptually pulled toward the fixated point. Action-based re
19

Rodin, Ivan, Antonino Furnari, Dimitrios Mavroeidis, and Giovanni Maria Farinella. "Predicting the future from first person (egocentric) vision: A survey." Computer Vision and Image Understanding 211 (October 2021): 103252. http://dx.doi.org/10.1016/j.cviu.2021.103252.

20

Ji, Peng, Aiguo Song, Pengwen Xiong, Ping Yi, Xiaonong Xu, and Huijun Li. "Egocentric-Vision based Hand Posture Control System for Reconnaissance Robots." Journal of Intelligent & Robotic Systems 87, no. 3-4 (2016): 583–99. http://dx.doi.org/10.1007/s10846-016-0440-2.

21

Raees, Muhammad, Sehat Ullah, and Sami Ur Rahman. "VEN-3DVE: vision based egocentric navigation for 3D virtual environments." International Journal on Interactive Design and Manufacturing (IJIDeM) 13, no. 1 (2018): 35–45. http://dx.doi.org/10.1007/s12008-018-0481-9.

22

Koenderink, J. J., A. J. van Doorn, and J. S. Lappin. "Exocentric Directions in Egocentric Space." Perception 25, no. 1_suppl (1996): 86. http://dx.doi.org/10.1068/v96p0115.

Abstract:
Observers had to direct a pointer (using a radio link for remote control) at some location towards a beacon at another location such that the pointer appeared to point straight at the beacon. Experiments were done in the natural landscape under broad daylight with the subjects using natural (binocular) vision. Distances were in the range of 1 – 24 m. The location of the vantage point was prescribed, but the observers were allowed (indeed needed) to make eye, head, and body movements, including placement of the feet. Only one or two beacons were visible at any time, but positions were taken fro
23

Zeller, Michelle, and Wilhelmina Stamps. "Interdisciplinary approach to the treatment of rare visual illusions in a veteran." BMJ Case Reports 14, no. 1 (2021): e238362. http://dx.doi.org/10.1136/bcr-2020-238362.

Abstract:
Upside-down reversal of vision (UDRV) is a rare form of metamorphopsia, or visual illusions that can distort the size, shape or inclination of objects. This phenomenon is paroxysmal and transient in nature, with patients reporting a sudden inversion of vision in the coronal plane, which typically remains for seconds or minutes, though occasionally persists for hours or days, before returning to normal. Distorted egocentric orientation (ie, the patient perceives the body to be tilted away from the vertical plane) is even more rare as a co-occurring phenomenon. To the best of our knowledge, this
24

Besari, Adnan Rachmat Anom, Fernando Ardilla, Azhar Aulia Saputra, Kurnianingsih, Takenori Obo, and Naoyuki Kubota. "Egocentric Behavior Analysis Based on Object Relationship Extraction with Graph Transfer Learning for Cognitive Rehabilitation Support." Journal of Advanced Computational Intelligence and Intelligent Informatics 29, no. 1 (2025): 12–22. https://doi.org/10.20965/jaciii.2025.p0012.

Abstract:
Recognizing human behavior is essential for early interventions in cognitive rehabilitation, particularly for older adults. Traditional methods often focus on improving third-person vision but overlook the importance of human visual attention during object interactions. This study introduces an egocentric behavior analysis (EBA) framework that uses transfer learning to analyze object relationships. Egocentric vision is used to extract features from hand movements, object detection, and visual attention. These features are then used to validate hand-object interactions (HOI) and describe human
25

Sun, Ke, Chunyu Xia, Xinyu Zhang, Hao Chen, and Charlie Jianzhong Zhang. "Multimodal Daily-Life Logging in Free-living Environment Using Non-Visual Egocentric Sensors on a Smartphone." Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 8, no. 1 (2024): 1–32. http://dx.doi.org/10.1145/3643553.

Abstract:
Egocentric non-intrusive sensing of human activities of daily living (ADL) in free-living environments represents a holy grail in ubiquitous computing. Existing approaches, such as egocentric vision and wearable motion sensors, either can be intrusive or have limitations in capturing non-ambulatory actions. To address these challenges, we propose EgoADL, the first egocentric ADL sensing system that uses an in-pocket smartphone as a multi-modal sensor hub to capture body motion, interactions with the physical environment and daily objects using non-visual sensors (audio, wireless sensing, and m
26

Nguyen, Thi-Hoa-Cuc, Jean-Christophe Nebel, and Francisco Florez-Revuelta. "Recognition of Activities of Daily Living with Egocentric Vision: A Review." Sensors 16, no. 1 (2016): 72. http://dx.doi.org/10.3390/s16010072.

27

Alletto, Stefano, Giuseppe Serra, and Rita Cucchiara. "Video registration in egocentric vision under day and night illumination changes." Computer Vision and Image Understanding 157 (April 2017): 274–83. http://dx.doi.org/10.1016/j.cviu.2016.09.010.

28

Campanella, Francesco, Giulio Sandini, and Maria Concetta Morrone. "Visual information gleaned by observing grasping movement in allocentric and egocentric perspectives." Proceedings of the Royal Society B: Biological Sciences 278, no. 1715 (2010): 2142–49. http://dx.doi.org/10.1098/rspb.2010.2270.

Abstract:
One of the major functions of vision is to allow for an efficient and active interaction with the environment. In this study, we investigate the capacity of human observers to extract visual information from observation of their own actions, and those of others, from different viewpoints. Subjects discriminated the size of objects by observing a point-light movie of a hand reaching for an invisible object. We recorded real reach-and-grasp actions in three-dimensional space towards objects of different shape and size, to produce two-dimensional ‘point-light display’ movies, which were used to m
29

Wexler, Mark. "Voluntary Head Movement and Allocentric Perception of Space." Psychological Science 14, no. 4 (2003): 340–46. http://dx.doi.org/10.1111/1467-9280.14491.

Abstract:
Although visual input is egocentric, at least some visual perceptions and representations are allocentric, that is, independent of the observer's vantage point or motion. Three experiments investigated the visual perception of three-dimensional object motion during voluntary and involuntary motion in human subjects. The results show that the motor command contributes to the objective perception of space: Observers are more likely to apply, consciously and unconsciously, spatial criteria relative to an allocentric frame of reference when they are executing voluntary head movements than while th
30

Martinez-Martin, Ester, Angel P. del Pobil, Manuela Chessa, Fabio Solari, and Silvio P. Sabatini. "An Active System for Visually-Guided Reaching in 3D across Binocular Fixations." Scientific World Journal 2014 (2014): 1–16. http://dx.doi.org/10.1155/2014/179391.

Abstract:
Based on the importance of relative disparity between objects for accurate hand-eye coordination, this paper presents a biological approach inspired by the cortical neural architecture. So, the motor information is coded in egocentric coordinates obtained from the allocentric representation of the space (in terms of disparity) generated from the egocentric representation of the visual information (image coordinates). In that way, the different aspects of the visuomotor coordination are integrated: an active vision system, composed of two vergent cameras; a module for the 2D binocular disparity
31

Jadhav, Aishwarya, Jeffery Cao, Abhishree Shetty, et al. "AI Guide Dog: Egocentric Path Prediction on Smartphone." Proceedings of the AAAI Symposium Series 5, no. 1 (2025): 220–27. https://doi.org/10.1609/aaaiss.v5i1.35591.

Abstract:
This paper presents AI Guide Dog (AIGD), a lightweight egocentric (first-person) navigation system for visually impaired users, designed for real-time deployment on smartphones. AIGD employs a vision-only multi-label classification approach to predict directional commands, ensuring safe navigation across diverse environments. We introduce a novel technique for goal-based outdoor navigation by integrating GPS signals and high-level directions, while also handling uncertain multi-path predictions for destination-free indoor navigation. As the first navigation assistance system to handle both goa
32

Samiei, Salma, Pejman Rasti, Paul Richard, Gilles Galopin, and David Rousseau. "Toward Joint Acquisition-Annotation of Images with Egocentric Devices for a Lower-Cost Machine Learning Application to Apple Detection." Sensors 20, no. 15 (2020): 4173. http://dx.doi.org/10.3390/s20154173.

Abstract:
Since most computer vision approaches are now driven by machine learning, the current bottleneck is the annotation of images. This time-consuming task is usually performed manually after the acquisition of images. In this article, we assess the value of various egocentric vision approaches in regard to performing joint acquisition and automatic image annotation rather than the conventional two-step process of acquisition followed by manual annotation. This approach is illustrated with apple detection in challenging field conditions. We demonstrate the possibility of high performance in automat
33

Ng, Jing, David Arness, Ashlee Gronowski, et al. "Exocentric and Egocentric Views for Biomedical Data Analytics in Virtual Environments—A Usability Study." Journal of Imaging 10, no. 1 (2023): 3. http://dx.doi.org/10.3390/jimaging10010003.

Abstract:
Biomedical datasets are usually large and complex, containing biological information about a disease. Computational analytics and the interactive visualisation of such data are essential decision-making tools for disease diagnosis and treatment. Oncology data models were observed in a virtual reality environment to analyse gene expression and clinical data from a cohort of cancer patients. The technology enables a new way to view information from the outside in (exocentric view) and the inside out (egocentric view), which is otherwise not possible on ordinary displays. This paper presents a us
34

Anderson, Erin M., Eric S. Seemiller, and Linda B. Smith. "Scene saliencies in egocentric vision and their creation by parents and infants." Cognition 229 (December 2022): 105256. http://dx.doi.org/10.1016/j.cognition.2022.105256.

35

Vaca-Castano, Gonzalo, Samarjit Das, Joao P. Sousa, Niels D. Lobo, and Mubarak Shah. "Improved scene identification and object detection on egocentric vision of daily activities." Computer Vision and Image Understanding 156 (March 2017): 92–103. http://dx.doi.org/10.1016/j.cviu.2016.10.016.

36

Ruggiero, Gennaro, Francesco Ruotolo, and Tina Iachini. "The role of vision in egocentric and allocentric spatial frames of reference." Cognitive Processing 10, S2 (2009): 283–85. http://dx.doi.org/10.1007/s10339-009-0320-9.

37

Astuti, Lia, Chui-Hong Chiu, Yu-Chen Lin, and Ming-Chih Lin. "Social-aware trajectory prediction using goal-directed attention networks with egocentric vision." PeerJ Computer Science 11 (April 25, 2025): e2842. https://doi.org/10.7717/peerj-cs.2842.

Abstract:
This study presents a novel social-goal attention networks (SGANet) model that employs a vision-based multi-stacked neural network framework to predict multiple future trajectories for both homogeneous and heterogeneous road users. Unlike existing methods that focus solely on one dataset type and treat social interactions, temporal dynamics, destination point, and uncertainty behaviors independently, SGANet integrates these components into a unified multimodal prediction framework. A graph attention network (GAT) captures socially-aware interaction correlation, a long short-term memory (LSTM)
38

Santarcangelo, Vito, Giovanni Maria Farinella, Antonino Furnari, and Sebastiano Battiato. "Market basket analysis from egocentric videos." Pattern Recognition Letters 112 (September 2018): 83–90. http://dx.doi.org/10.1016/j.patrec.2018.06.010.

39

Shi, Lei, Chen Wang, Zhen Wen, Huamin Qu, Chuang Lin, and Qi Liao. "1.5D Egocentric Dynamic Network Visualization." IEEE Transactions on Visualization and Computer Graphics 21, no. 5 (2015): 624–37. http://dx.doi.org/10.1109/tvcg.2014.2383380.

40

Betancourt, Alejandro, Pietro Morerio, Emilia Barakova, Lucio Marcenaro, Matthias Rauterberg, and Carlo Regazzoni. "Left/right hand segmentation in egocentric videos." Computer Vision and Image Understanding 154 (January 2017): 73–81. http://dx.doi.org/10.1016/j.cviu.2016.09.005.

41

Wraga, Maryjane, Sarah H. Creem, and Dennis R. Proffitt. "Perception-Action Dissociations of a Walkable Müller-Lyer Configuration." Psychological Science 11, no. 3 (2000): 239–43. http://dx.doi.org/10.1111/1467-9280.00248.

Abstract:
These studies examined the role of spatial encoding in inducing perception-action dissociations in visual illusions. Participants were shown a large-scale Müller-Lyer configuration with hoops as its tails. In Experiment 1, participants either made verbal estimates of the extent of the Müller-Lyer shaft (verbal task) or walked the extent without vision, in an offset path (blind-walking task). For both tasks, participants stood a small distance away from the configuration, to elicit object-relative encoding of the shaft with respect to its hoops. A similar illusion bias was found in the verbal a
42

Poibrenski, Atanas, Matthias Klusch, Igor Vozniak, and Christian Müller. "Multimodal multi-pedestrian path prediction for autonomous cars." ACM SIGAPP Applied Computing Review 20, no. 4 (2021): 5–17. http://dx.doi.org/10.1145/3447332.3447333.

Abstract:
Accurate prediction of the future position of pedestrians in traffic scenarios is required for safe navigation of an autonomous vehicle but remains a challenge. This concerns, in particular, the effective and efficient multimodal prediction of most likely trajectories of tracked pedestrians from egocentric view of self-driving car. In this paper, we present a novel solution, named M2P3, which combines a conditional variational autoencoder with recurrent neural network encoder-decoder architecture in order to predict a set of possible future locations of each pedestrian in a traffic scene. The
43

Ragusa, Francesco, Antonino Furnari, Sebastiano Battiato, Giovanni Signorello, and Giovanni Maria Farinella. "EGO-CH: Dataset and fundamental tasks for visitors behavioral understanding using egocentric vision." Pattern Recognition Letters 131 (March 2020): 150–57. http://dx.doi.org/10.1016/j.patrec.2019.12.016.

44

Mu, Jianing, Zixun Wei, Margaret Moulson, Gabriel (Naiqi) Xiao, and Ming Bo Cai. "A computer-vision based approach to co-register frames from egocentric video recordings." Journal of Vision 22, no. 14 (2022): 3954. http://dx.doi.org/10.1167/jov.22.14.3954.

45

Sörqvist, Erik Folke Harald, Adam Michael Altenbuchner, Jörg Krüger, and Bsher Karbouj. "Egocentric expert guidance for procedural activities using Smart Glasses, Computer Vision and RAG." Procedia CIRP 134 (2025): 1047–52. https://doi.org/10.1016/j.procir.2025.02.243.

46

Milotta, Filippo L. M., Antonino Furnari, Sebastiano Battiato, Giovanni Signorello, and Giovanni M. Farinella. "Egocentric visitors localization in natural sites." Journal of Visual Communication and Image Representation 65 (December 2019): 102664. http://dx.doi.org/10.1016/j.jvcir.2019.102664.

47

Hùng, Lê Văn. "3D Hand Pose Estimation in Point Cloud Using 3D Convolutional Neural Network on Egocentric Datasets." Journal of Research and Development on Information and Communication Technology 2020, no. 2 (2021): 87–97. http://dx.doi.org/10.32913/mic-ict-research.v2020.n2.936.

Abstract:
3D hand pose estimation from egocentric vision is an important study in the construction of assistance systems and modeling of robot hands in robotics. In this paper, we propose a complete method for estimating 3D hand pose from the complex scene data obtained from the egocentric sensor, in which we include a simple yet highly efficient pre-processing step for hand segmentation. In the estimation process, we used the Hand PointNet (HPN), V2V-PoseNet (V2V), and Point-to-Point Regression PointNet (PtoP) for finetuning to estimate the 3D hand pose from the collected data obtained from the egocentric sen
48

Ortis, Alessandro, Giovanni M. Farinella, Valeria D’Amico, Luca Addesso, Giovanni Torrisi, and Sebastiano Battiato. "Organizing egocentric videos of daily living activities." Pattern Recognition 72 (December 2017): 207–18. http://dx.doi.org/10.1016/j.patcog.2017.07.010.

49

Cruz, Sergio, and Antoni Chan. "Is that my hand? An egocentric dataset for hand disambiguation." Image and Vision Computing 89 (September 2019): 131–43. http://dx.doi.org/10.1016/j.imavis.2019.06.002.

50

Aghaei, Maedeh, Mariella Dimiccoli, Cristian Canton Ferrer, and Petia Radeva. "Towards social pattern characterization in egocentric photo-streams." Computer Vision and Image Understanding 171 (June 2018): 104–17. http://dx.doi.org/10.1016/j.cviu.2018.05.001.
