Contents
Selection of scholarly literature on the topic "Reconnaissance d’actions humaines"
Cite a source in APA, MLA, Chicago, Harvard, or another citation style
Browse the lists of current articles, books, theses, reports, and other scholarly sources on the topic "Reconnaissance d’actions humaines".
Next to every entry in the bibliography, an "Add to bibliography" option is available. Use it, and the bibliographic reference for the chosen work is generated automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the publication as a PDF and read its online abstract, provided the relevant parameters are present in the metadata.
Journal articles on the topic "Reconnaissance d’actions humaines"
Beaudry, Cyrille, Renaud Péteri, and Laurent Mascarilla. "Analyse multi-échelle de trajectoires de points critiques pour la reconnaissance d’actions humaines". Traitement du signal 32, no. 2-3 (28.10.2015): 265–86. http://dx.doi.org/10.3166/ts.32.265-286.
Dion, Jocelyne. "Les bibliothèques scolaires québécoises : une évolution en dents de scie". Documentation et bibliothèques 54, no. 2 (26.03.2015): 69–74. http://dx.doi.org/10.7202/1029312ar.
Magda, Danièle, and Isabelle Doussan. "Quelle(s) éthique(s) pour les relations hommes-biodiversité ?" Natures Sciences Sociétés 26, no. 1 (January 2018): 60–66. http://dx.doi.org/10.1051/nss/2018022.
Lessard, Geneviève, Stéphanie Demers, Carole Fleuret, and Catherine Nadon. "Coéducation des membres de la société d’accueil et des enfants nouveaux arrivants à la reconnaissance réciproque". Articles 56, no. 2-3 (09.02.2023): 59–79. http://dx.doi.org/10.7202/1096445ar.
Joy, Jérôme. "Une époque circuitée". Programmer, no. 13 (29.06.2010): 56–76. http://dx.doi.org/10.7202/044040ar.
Caloz-Tschopp, Marie-Claire. "Le métissage pris dans les filets du Politique. De quelques paradoxes de la philosophie politique au passage des frontières des États-nations". II. Le contexte d’accueil : inclusion, exclusion et métissage, no. 31 (22.10.2015): 119–31. http://dx.doi.org/10.7202/1033783ar.
Tamayo Gómez, Camilo. "Communicative Citizenship and Human Rights from a Transnational Perspective". Emulations - Revue de sciences sociales, no. 19 (30.03.2017): 25–49. http://dx.doi.org/10.14428/emulations.019.005.
Hammad, Manar. "L'Université de Vilnius: exploration sémiotique de l’architecture et des plans". Semiotika 10 (22.12.2014): 9–115. http://dx.doi.org/10.15388/semiotika.2014.16756.
Standaert, Olivier. "Recruter en période de crise ou l’effritement d’un huis clos journalistique". Sur le journalisme, About journalism, Sobre jornalismo 6, no. 1 (15.06.2017): 188–201. http://dx.doi.org/10.25200/slj.v6.n1.2017.299.
Taskin, Laurent. "Numéro 37 - février 2006". Regards économiques, 12.10.2018. http://dx.doi.org/10.14428/regardseco.v1i0.15903.
Theses on the topic "Reconnaissance d’actions humaines"
Koperski, Michal. "Reconnaissance d’actions humaines dans des vidéos utilisant une représentation locale". Thesis, Université Côte d'Azur (ComUE), 2017. http://www.theses.fr/2017AZUR4096/document.
This thesis targets the recognition of human actions in videos. The problem can be defined as the ability to name the action that occurs in a video. Due to the complexity of human actions, such as variations in appearance and motion patterns, many open questions keep action recognition far from being solved. Current state-of-the-art methods achieve satisfactory results based only on local features. To handle the complexity of actions, we propose two methods which model the spatio-temporal relationships between features: (1) modelling the pairwise relationships between features with Brownian covariance, and (2) modelling the spatial layout of features with respect to the person bounding box. Our methods are generic and can improve both hand-crafted and deep-learning-based methods. Another question is whether 3D information can improve action recognition. Many methods use 3D information only to obtain body joints. We show that 3D information can be used for more than joint detection: we propose a novel descriptor which introduces 3D trajectories computed from RGB-D data. In the evaluation, we focus on daily-living actions, performed by people in their daily self-care routine. Recognition of such actions is important for patient monitoring and assistive robot systems. To evaluate our methods we created a large-scale dataset consisting of 160 hours of video footage of 20 seniors, annotated with 35 action classes. The actions are performed in an unscripted way, which introduces real-world challenges absent from many public datasets. We also evaluated our methods on the public datasets CAD60, CAD120, and MSRDailyActivity3D. The experiments show that our methods improve on state-of-the-art results.
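As a rough illustration of the Brownian-covariance idea this abstract mentions, the quantity underlying it is the sample distance covariance between two feature channels. The following is a minimal pure-Python sketch of that statistic, not the author's implementation; the function name and the 1-D channel inputs are illustrative assumptions:

```python
def distance_covariance(x, y):
    """Sample distance covariance between two 1-D feature channels.

    Brownian (distance) covariance vanishes only when the channels are
    independent, so it captures non-linear relationships that an
    ordinary covariance matrix misses.
    """
    n = len(x)
    # Pairwise absolute-distance matrices for each channel.
    a = [[abs(x[i] - x[j]) for j in range(n)] for i in range(n)]
    b = [[abs(y[i] - y[j]) for j in range(n)] for i in range(n)]

    def double_center(m):
        # Subtract row means and column means, add back the grand mean.
        row = [sum(r) / n for r in m]
        col = [sum(m[i][j] for i in range(n)) / n for j in range(n)]
        grand = sum(row) / n
        return [[m[i][j] - row[i] - col[j] + grand
                 for j in range(n)] for i in range(n)]

    A, B = double_center(a), double_center(b)
    v = sum(A[i][j] * B[i][j] for i in range(n) for j in range(n)) / (n * n)
    return max(v, 0.0) ** 0.5
```

For linearly dependent channels the statistic is strictly positive, while a constant channel yields zero; the thesis applies this kind of dependence measure to pairs of low-level video features rather than toy sequences.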
Biliński, Piotr Tadeusz. "Reconnaissance d’action humaine dans des vidéos". Thesis, Nice, 2014. http://www.theses.fr/2014NICE4125/document.
This thesis targets the automatic recognition of human actions in videos, i.e. determining which human actions occur in a video. The problem is particularly hard due to the enormous variation in the visual and motion appearance of people and actions, camera viewpoint changes, moving backgrounds, occlusions, noise, and the sheer amount of video data. First, we review, evaluate, and compare the most popular and most prominent state-of-the-art techniques, and we propose an action recognition framework based on local features, used throughout this thesis as the host for the novel algorithms. Moreover, we introduce a new dataset (CHU Nice Hospital) with daily self-care actions of elderly patients in a hospital. We then propose two local spatio-temporal descriptors for action recognition in videos. The first descriptor is based on a covariance matrix representation and models linear relations between low-level features. The second descriptor is based on Brownian covariance and models all kinds of possible relations between low-level features. We then propose three higher-level feature representations to go beyond the limitations of local feature encoding techniques. The first representation is based on the idea of relative dense trajectories: an object-centric local feature representation of motion trajectories which allows a local feature encoding technique to use spatial information. The second representation encodes relations among local features as pairwise features. The main idea is to capture the appearance relations among features (both visual and motion) and to use geometric information to describe how these appearance relations are mutually arranged in spatio-temporal space. The third representation captures statistics of pairwise co-occurring visual words within multi-scale feature-centric neighbourhoods. The proposed contextual-features-based representation encodes information about the local density of features, local pairwise relations among features, and the spatio-temporal order among features. Finally, we show that the proposed techniques obtain better or similar performance in comparison to the state of the art on various real and challenging human action recognition datasets (Weizmann, KTH, URADL, MSR Daily Activity 3D, HMDB51, and CHU Nice Hospital).
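The covariance-matrix descriptor this abstract refers to models the linear relations between low-level feature channels inside a spatio-temporal region. A minimal sketch, assuming the features are given as plain d-dimensional vectors (the function name and input layout are illustrative, not the thesis code):

```python
def covariance_descriptor(features):
    """Population covariance matrix of a set of low-level feature vectors.

    `features` is a list of d-dimensional vectors sampled inside a
    spatio-temporal region; the resulting d x d matrix models the
    linear relations between the feature channels.
    """
    n = len(features)
    d = len(features[0])
    mean = [sum(f[k] for f in features) / n for k in range(d)]
    return [[sum((f[i] - mean[i]) * (f[j] - mean[j]) for f in features) / n
             for j in range(d)] for i in range(d)]
```

Because such matrices live on a Riemannian manifold rather than in a Euclidean space, comparing them in a classifier typically requires a dedicated metric; that is one of the limitations the Brownian-covariance descriptor is meant to relax.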
Calandre, Jordan. "Analyse non intrusive du geste sportif dans des vidéos par apprentissage automatique". Electronic Thesis or Diss., La Rochelle, 2022. http://www.theses.fr/2022LAROS040.
In this thesis, we are interested in the characterization and fine-grained analysis of sports gestures in videos, and more particularly in non-intrusive 3D analysis using a single camera. Our case study is table tennis. We propose a method for reconstructing 3D ball positions using a high-speed calibrated camera (240 fps). For this, we propose and train a convolutional network that extracts the apparent diameter of the ball from the images. Knowing the real diameter of the ball allows us to compute the distance between the camera and the ball, and then to position the latter in a 3D coordinate system linked to the table. We then use a physical model, taking into account the Magnus effect, to estimate the kinematic parameters of the ball from its successive 3D positions. The proposed method segments the trajectories at the impacts of the ball on the table or the racket. This allows, using a physical model of the rebound, refinement of the estimated kinematic parameters of the ball. It is then possible to compute the racket's speed and orientation after the stroke and to deduce relevant performance indicators. Two databases have been built: the first is made of real game-sequence acquisitions; the second is a synthetic dataset that reproduces the acquisition conditions of the first, which allows us to validate our methods because the physical parameters used to generate it are known. Finally, we present our participation in the Sport&Vision task of the MediaEval challenge on the classification of human actions, using approaches based on the analysis and representation of movement.
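The depth-from-apparent-diameter step described in this abstract follows directly from the pinhole camera model: an object of real diameter D seen with an apparent diameter of d pixels lies at depth Z = f·D/d. A minimal sketch under that assumption (the function name and parameter names are illustrative; 0.04 m is the regulation table-tennis ball diameter):

```python
def ball_position_3d(u, v, apparent_diameter_px, focal_px, cx, cy,
                     real_diameter_m=0.04):
    """Back-project a detected ball into camera coordinates.

    (u, v) is the ball centre in pixels, (cx, cy) the principal point,
    and focal_px the focal length in pixels. Depth comes from the
    apparent-to-real diameter ratio; X and Y follow by back-projection.
    """
    z = focal_px * real_diameter_m / apparent_diameter_px
    x = (u - cx) * z / focal_px
    y = (v - cy) * z / focal_px
    return x, y, z
```

For example, with a 1000 px focal length, a 40 px apparent diameter puts the ball exactly 1 m from the camera. The thesis then chains such per-frame positions into trajectories before fitting the Magnus-effect model.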
Devineau, Guillaume. "Deep learning for multivariate time series: from vehicle control to gesture recognition and generation". Thesis, Université Paris sciences et lettres, 2020. http://www.theses.fr/2020UPSLM037.
Artificial intelligence is the scientific field that studies how to create machines capable of intelligent behaviour. Deep learning is a family of artificial intelligence methods based on neural networks. In recent years, deep learning has led to groundbreaking developments in image and natural language processing. However, in many domains the input data consists of neither images nor text documents, but of time series that describe the temporal evolution of observed or computed quantities. In this thesis, we study and introduce different representations for time series, based on deep learning models. First, in the autonomous driving domain, we show that the analysis of a temporal window by a neural network can lead to better vehicle control than classical approaches that do not use neural networks, especially in highly coupled situations. Second, in the gesture and action recognition domain, we introduce 1D parallel convolutional neural network models. In these models, convolutions are performed over the temporal dimension, so that the network detects, and benefits from, temporal invariances. Third, in the human pose motion generation domain, we introduce 2D convolutional generative adversarial networks in which the spatial and temporal dimensions are convolved jointly. Finally, we introduce an embedding in which spatial representations of human poses are sorted in a latent space based on their temporal relationships.
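The core operation behind the 1D temporal convolutions this abstract describes is sliding a kernel along the time axis of each channel of a multivariate series. A minimal pure-Python sketch of that operation (valid-mode correlation with a shared kernel; the function name and input layout are illustrative, not the thesis architecture):

```python
def temporal_conv1d(series, kernel):
    """Valid 1-D correlation of each channel of a multivariate time
    series with a shared temporal kernel.

    `series` is a list of channels, each a list of T samples. Sliding
    the same kernel along time is what lets a network respond to a
    motion pattern regardless of when it occurs (temporal invariance).
    """
    k = len(kernel)
    return [[sum(channel[t + i] * kernel[i] for i in range(k))
             for t in range(len(channel) - k + 1)]
            for channel in series]
```

In the thesis's setting each channel would be one coordinate of a tracked joint over time, and a deep model stacks many such filters with learned weights rather than a single fixed kernel.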