Academic literature on the topic 'Egocentric vision'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Egocentric vision.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Egocentric vision"

1

Swanston, Michael T., Nicholas J. Wade, and Ross H. Day. "The Representation of Uniform Motion in Vision." Perception 16, no. 2 (1987): 143–59. http://dx.doi.org/10.1068/p160143.

Full text
Abstract:
For veridical detection of object motion any moving detecting system must allocate motion appropriately between itself and objects in space. A model for such allocation is developed for simplified situations (points of light in uniform motion in a frontoparallel plane). It is proposed that motion of objects is registered and represented successively at four levels within frames of reference that are defined by the detectors themselves or by their movements. The four levels are referred to as retinocentric, orbitocentric, egocentric, and geocentric. Thus the retinocentric signal is combined wit
APA, Harvard, Vancouver, ISO, and other styles
2

Pouget, Alexandre, Stephen A. Fisher, and Terrence J. Sejnowski. "Egocentric Spatial Representation in Early Vision." Journal of Cognitive Neuroscience 5, no. 2 (1993): 150–61. http://dx.doi.org/10.1162/jocn.1993.5.2.150.

Full text
Abstract:
Recent physiological experiments have shown that the responses of many neurons in V1 and V3a are modulated by the direction of gaze. We have developed a neural network model of the hierarchy of maps in visual cortex to explore the hypothesis that visual features are encoded in egocentric (spatio-topic) coordinates at early stages of visual processing. Most psychophysical studies that have attempted to examine this question have concluded that features are represented in retinal coordinates, but the interpretation of these experiments does not preclude the type of retinospatiotopic representati
APA, Harvard, Vancouver, ISO, and other styles
3

Alletto, Stefano, Giuseppe Serra, Simone Calderara, and Rita Cucchiara. "Understanding social relationships in egocentric vision." Pattern Recognition 48, no. 12 (2015): 4082–96. http://dx.doi.org/10.1016/j.patcog.2015.06.006.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Núñez-Marcos, Adrián, Gorka Azkune, and Ignacio Arganda-Carreras. "Egocentric Vision-based Action Recognition: A survey." Neurocomputing 472 (February 2022): 175–97. http://dx.doi.org/10.1016/j.neucom.2021.11.081.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Papadakis, Antonios, and Evaggelos Spyrou. "A Multi-modal Egocentric Activity Recognition Approach towards Video Domain Generalization." Sensors 24, no. 8 (2024): 2491. http://dx.doi.org/10.3390/s24082491.

Full text
Abstract:
Egocentric activity recognition is a prominent computer vision task that is based on the use of wearable cameras. Since egocentric videos are captured through the perspective of the person wearing the camera, her/his body motions severely complicate the video content, imposing several challenges. In this work we propose a novel approach for domain-generalized egocentric human activity recognition. Typical approaches use a large amount of training data, aiming to cover all possible variants of each action. Moreover, several recent approaches have attempted to handle discrepancies between domain
APA, Harvard, Vancouver, ISO, and other styles
6

Demianenko, Svitlana. "ДОСЛІДЖЕННЯ ЕГОЦЕНТРИЧНОГО МОВЛЕННЯ ДІТЕЙ СТАРШОГО ДОШКІЛЬНОГО ВІКУ" [A Study of the Egocentric Speech of Older Preschool Children]. Psycholinguistics in a Modern World 15 (25 December 2020): 68–71. http://dx.doi.org/10.31470/2706-7904-2020-15-68-71.

Full text
Abstract:
In this work, one of the types of speech of older preschool children - egocentric - is analyzed. Scientific approaches to the vision of the problem of egocentric speech are clarified. The egocentric speech of older preschoolers was studied, the content of which is connected only with the utterances of children of metalanguage. The obtained empirical data once again convincingly proved the spontaneity of children’s metalanguage and confirmed the need to develop different types of utterances of older preschoolers, to achieve success in communication and adequate utterances, including about units
APA, Harvard, Vancouver, ISO, and other styles
7

Demianenko, Svitlana. "ДОСЛІДЖЕННЯ ЕГОЦЕНТРИЧНОГО МОВЛЕННЯ ДІТЕЙ СТАРШОГО ДОШКІЛЬНОГО ВІКУ" [A Study of the Egocentric Speech of Older Preschool Children]. Psycholinguistics in a Modern World 15 (25 December 2020): 68–71. http://dx.doi.org/10.31470/2706-7904-2020-15-68-71.

Full text
Abstract:
In this work, one of the types of speech of older preschool children - egocentric - is analyzed. Scientific approaches to the vision of the problem of egocentric speech are clarified. The egocentric speech of older preschoolers was studied, the content of which is connected only with the utterances of children of metalanguage. The obtained empirical data once again convincingly proved the spontaneity of children’s metalanguage and confirmed the need to develop different types of utterances of older preschoolers, to achieve success in communication and adequate utterances, including about units
APA, Harvard, Vancouver, ISO, and other styles
8

Girase, Sheetal, and Mangesh Bedekar. "Understanding First-Person and Third-Person Videos in Computer Vision." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 9s (2023): 263–71. http://dx.doi.org/10.17762/ijritcc.v11i9s.7420.

Full text
Abstract:
Due to advancements in technology and social media, a large amount of visual information is created. There is a lot of interesting research going on in Computer Vision that takes into consideration either visual information generated by first-person (egocentric) or third-person(exocentric) cameras. Video data generated by YouTubers, Surveillance cameras, and Drones which is referred to as third-person or exocentric video data. Whereas first-person or egocentric is the one which is generated by GoPro cameras and Google Glass. Exocentric view capture wide and global views whereas egocentric view
APA, Harvard, Vancouver, ISO, and other styles
9

Tarnutzer, Alexander A., Christopher J. Bockisch, Itsaso Olasagasti, and Dominik Straumann. "Egocentric and allocentric alignment tasks are affected by otolith input." Journal of Neurophysiology 107, no. 11 (2012): 3095–106. http://dx.doi.org/10.1152/jn.00724.2010.

Full text
Abstract:
Gravicentric visual alignments become less precise when the head is roll-tilted relative to gravity, which is most likely due to decreasing otolith sensitivity. To align a luminous line with the perceived gravity vector (gravicentric task) or the perceived body-longitudinal axis (egocentric task), the roll orientation of the line on the retina and the torsional position of the eyes relative to the head must be integrated to obtain the line orientation relative to the head. Whether otolith input contributes to egocentric tasks and whether the modulation of variability is restricted to vision-de
APA, Harvard, Vancouver, ISO, and other styles
10

Coluccia, Emanuele, Irene C. Mammarella, Rossana De Beni, Miriam Ittyerah, and Cesare Cornoldi. "Remembering Object Position in the Absence of Vision: Egocentric, Allocentric, and Egocentric Decentred Frames of Reference." Perception 36, no. 6 (2007): 850–64. http://dx.doi.org/10.1068/p5621.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Egocentric vision"

1

Spera, Emiliano. "Egocentric Vision Based Localization of Shopping Cart." Doctoral thesis, Università di Catania, 2019. http://hdl.handle.net/10761/4139.

Full text
Abstract:
Indoor camera localization from egocentric images is a challenge computer vision problem which has been strongly investigated in the last years. Localizing a camera in a 3D space can open many useful applications in different domains. In this work, we analyse this challenge to localize shopping cart in stores. Three main contributions are given with this thesis. As first, we propose a new dataset for shopping cart localization which includes both RGB and depth images together with the 3-DOF data corresponding to the cart position and orientation in the store. The dataset is also labelled with
APA, Harvard, Vancouver, ISO, and other styles
2

Chen, Longfei. "Analysis and Modeling of Machine Operation Tasks using Egocentric Vision." Kyoto University, 2020. http://hdl.handle.net/2433/259046.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Cartas, Ayala Alejandro. "Recognizing Action and Activities from Egocentric Images." Doctoral thesis, Universitat de Barcelona, 2020. http://hdl.handle.net/10803/670752.

Full text
Abstract:
Egocentric action recognition consists in determining what a wearable camera user is doing from his perspective. Its defining characteristic is that the person himself is only partially visible in the images through his hands. As a result, the recognition of actions can rely solely on user interactions with objects, other people, and the scene. Egocentric action recognition has numerous assistive technology applications, in particular in the field of rehabilitation and preventive medicine. The type of egocentric camera determines the activities or actions that can be predicted. There are ro
APA, Harvard, Vancouver, ISO, and other styles
4

Fathi, Alireza. "Learning descriptive models of objects and activities from egocentric video." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/48738.

Full text
Abstract:
Recent advances in camera technology have made it possible to build a comfortable, wearable system which can capture the scene in front of the user throughout the day. Products based on this technology, such as GoPro and Google Glass, have generated substantial interest. In this thesis, I present my work on egocentric vision, which leverages wearable camera technology and provides a new line of attack on classical computer vision problems such as object categorization and activity recognition. The dominant paradigm for object and activity recognition over the last decade has been based on usi
APA, Harvard, Vancouver, ISO, and other styles
5

Chen, Huiqin. "Registration of egocentric views for collaborative localization in security applications." Electronic Thesis or Diss., université Paris-Saclay, 2021. http://www.theses.fr/2021UPASG031.

Full text
Abstract:
This thesis addresses collaborative localization from a mobile camera and a static camera for video-surveillance applications. For the surveillance of sensitive events, civil security increasingly relies on collaborative camera networks that combine dynamic cameras with traditional static surveillance cameras. In crowd scenes, the aim is to localize both the camera wearer (typically a security officer) and the events observed in the images, for example in order to guide rescue services. However,
APA, Harvard, Vancouver, ISO, and other styles
6

Bettadapura, Vinay Kumar. "Leveraging contextual cues for dynamic scene understanding." Diss., Georgia Institute of Technology, 2016. http://hdl.handle.net/1853/54834.

Full text
Abstract:
Environments with people are complex, with many activities and events that need to be represented and explained. The goal of scene understanding is to either determine what objects and people are doing in such complex and dynamic environments, or to know the overall happenings, such as the highlights of the scene. The context within which the activities and events unfold provides key insights that cannot be derived by studying the activities and events alone. \emph{In this thesis, we show that this rich contextual information can be successfully leveraged, along with the video data, to support
APA, Harvard, Vancouver, ISO, and other styles
7

BALLESTIN, GIORGIO. "A Registration Framework for the Comparison of Video and Optical See-Through Devices in Interactive Augmented Reality." Doctoral thesis, Università degli studi di Genova, 2021. http://hdl.handle.net/11567/1046985.

Full text
Abstract:
Augmented Reality (AR) is a technology that has been growing in interest in the past decade. Many factors are currently hindering its distribution to the general public. The heavy processing requirements, hardware limitations, and the need for an easy to use portable/wearable device are still problems to be addressed. The AR field is currently split between the use of widespread devices (smartphones) for easily deployable applications, and the use of high-end Head Mounted Displays (HMD), which are generally very expensive, and often require a cumbersome tethered high-end computer rig to the si
APA, Harvard, Vancouver, ISO, and other styles
8

Ma, Biao. "Improving the Utility of Egocentric Videos." Thesis, 2019.

Find full text
Abstract:
For either entertainment or documenting purposes, people are starting to record their life using egocentric cameras, mounted on either a person or a vehicle. Our target is to improve the utility of these egocentric videos. For egocentric videos with an entertainment purpose, we aim to enhance the viewing experience to improve overall enjoyment. We focus on First-Person Videos (FPVs), which are recorded by wearable cameras. People record FPVs in order to share their First-Person Experience (FPE). However, raw FPVs are usually too shaky to watch, which ruins the ex
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Egocentric vision"

1

Pitti, Bonaccorso. Ricordi. Edited by Veronica Vestri. Firenze University Press, 2015. http://dx.doi.org/10.36253/978-88-6655-726-5.

Full text
Abstract:
A merchant, politician, and ambassador entrusted by the Republic of Florence with difficult missions to the kings of France and the emperor, Bonaccorso Pitti (1354-1432) has long fascinated both Italian and foreign scholars. Captivated by his multifaceted, passionate, and often egocentric personality, many of them have seen in Bonaccorso a forerunner of, among others, Benvenuto Cellini and Casanova. His skill at gambling, for instance, or the autobiographical drive that animates many of his pages have thus been emphasized at the expense of a comprehensive reading of both the Ricordi and of the
APA, Harvard, Vancouver, ISO, and other styles
2

Corazza, Eros. On the essentiality of thoughts (and reference). Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198786658.003.0012.

Full text
Abstract:
It is often assumed that experiential reference, in particular the references we make using so-called essential indexicals (I, here, and now), is irreducible to other forms or reference. In focusing on Donnellan’s insights concerning the referential use of definite descriptions and empirical evidence coming from cognitive sciences (in particular Pylyshin’s work on situated vision), Eros Corazza discusses and defends this view. In so doing, he shows how experiential reference rests on a form of egocentric immersion underpinning agent-centered behaviours. It is further argued that our capacity t
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Egocentric vision"

1

Damen, Dima, Hazel Doughty, Giovanni Maria Farinella, et al. "Scaling Egocentric Vision: The EPIC-KITCHENS Dataset." In Computer Vision – ECCV 2018. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01225-0_44.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Su, Yu-Chuan, and Kristen Grauman. "Detecting Engagement in Egocentric Video." In Computer Vision – ECCV 2016. Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-46454-1_28.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Behera, Ardhendu, David C. Hogg, and Anthony G. Cohn. "Egocentric Activity Monitoring and Recovery." In Computer Vision – ACCV 2012. Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-37431-9_40.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Poleg, Yair, Chetan Arora, and Shmuel Peleg. "Head Motion Signatures from Egocentric Videos." In Computer Vision -- ACCV 2014. Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-16811-1_21.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Ganesan, Vithya, P. Ramadoss, P. Rajarajeswari, J. Naren, and S. HemaSiselee. "Egocentric Vision for Dog Behavioral Analysis." In Lecture Notes in Electrical Engineering. Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-15-8752-8_33.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Smith, Justin S., Shiyu Feng, Fanzhe Lyu, and Patricio A. Vela. "Real-Time Egocentric Navigation Using 3D Sensing." In Machine Vision and Navigation. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-22587-2_14.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Shen, Yang, Bingbing Ni, Zefan Li, and Ning Zhuang. "Egocentric Activity Prediction via Event Modulated Attention." In Computer Vision – ECCV 2018. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01216-8_13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Song, Sibo, Vijay Chandrasekhar, Ngai-Man Cheung, Sanath Narayan, Liyuan Li, and Joo-Hwee Lim. "Activity Recognition in Egocentric Life-Logging Videos." In Computer Vision - ACCV 2014 Workshops. Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-16634-6_33.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Santarcangelo, Vito, Giovanni Maria Farinella, and Sebastiano Battiato. "Egocentric Vision for Visual Market Basket Analysis." In Lecture Notes in Computer Science. Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-46604-0_37.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Ardeshir, Shervin, and Ali Borji. "Ego2Top: Matching Viewers in Egocentric and Top-View Videos." In Computer Vision – ECCV 2016. Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-46454-1_16.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Egocentric vision"

1

Li, Gen, Kaifeng Zhao, Siwei Zhang, et al. "EgoGen: An Egocentric Synthetic Data Generator." In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2024. http://dx.doi.org/10.1109/cvpr52733.2024.01374.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Darkhalil, Ahmad, Rhodri Guerrier, Adam W. Harley, and Dima Damen. "EgoPoints: Advancing Point Tracking for Egocentric Videos." In 2025 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV). IEEE, 2025. https://doi.org/10.1109/wacv61041.2025.00829.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Zhao, Yunhan, Haoyu Ma, Shu Kong, and Charless Fowlkes. "Instance Tracking in 3D Scenes from Egocentric Videos." In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2024. http://dx.doi.org/10.1109/cvpr52733.2024.02071.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Akada, Hiroyasu, Jian Wang, Vladislav Golyanik, and Christian Theobalt. "3D Human Pose Perception from Egocentric Stereo Videos." In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2024. http://dx.doi.org/10.1109/cvpr52733.2024.00079.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Rai, Aashish, and Srinath Sridhar. "EgoSonics: Generating Synchronized Audio for Silent Egocentric Videos." In 2025 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV). IEEE, 2025. https://doi.org/10.1109/wacv61041.2025.00483.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Escobar, Maria, Juanita Puentes, Cristhian Forigua, Jordi Pont-Tuset, Kevis-Kokitsi Maninis, and Pablo Arbelaez. "EgoCast: Forecasting Egocentric Human Pose in the Wild." In 2025 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV). IEEE, 2025. https://doi.org/10.1109/wacv61041.2025.00569.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Iwata, Daiki, Kanji Tanaka, Mitsuki Yoshida, Ryogo Yamamoto, Yuudai Morishita, and Tomoe Hiroki. "Fine-Grained Self-Localization from Coarse Egocentric Topological Maps." In 20th International Conference on Computer Vision Theory and Applications. SCITEPRESS - Science and Technology Publications, 2025. https://doi.org/10.5220/0013098000003912.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Azam, Md Mushfiqur, and Kevin Desai. "A Survey on 3D Egocentric Human Pose Estimation." In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2024. http://dx.doi.org/10.1109/cvprw63382.2024.00171.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Fathi, Alireza, Ali Farhadi, and James M. Rehg. "Understanding egocentric activities." In 2011 IEEE International Conference on Computer Vision (ICCV). IEEE, 2011. http://dx.doi.org/10.1109/iccv.2011.6126269.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Thapar, Daksh, Aditya Nigam, and Chetan Arora. "Anonymizing Egocentric Videos." In 2021 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2021. http://dx.doi.org/10.1109/iccv48922.2021.00232.

Full text
APA, Harvard, Vancouver, ISO, and other styles