Academic literature on the topic '3D Human Pose Estimation'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic '3D Human Pose Estimation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "3D Human Pose Estimation"

1

Wei, Guoqiang, Cuiling Lan, Wenjun Zeng, and Zhibo Chen. "View Invariant 3D Human Pose Estimation." IEEE Transactions on Circuits and Systems for Video Technology 30, no. 12 (December 2020): 4601–10. http://dx.doi.org/10.1109/tcsvt.2019.2928813.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Bao, Wenxia, Zhongyu Ma, Dong Liang, Xianjun Yang, and Tao Niu. "Pose ResNet: 3D Human Pose Estimation Based on Self-Supervision." Sensors 23, no. 6 (March 12, 2023): 3057. http://dx.doi.org/10.3390/s23063057.

Full text
Abstract:
The accurate estimation of a 3D human pose is of great importance in many fields, such as human–computer interaction, motion recognition and automatic driving. Given the difficulty of obtaining 3D ground-truth labels for 3D pose estimation datasets, we take 2D images as the research object in this paper and propose a self-supervised 3D pose estimation model called Pose ResNet. ResNet50 is used as the backbone network to extract features. First, a convolutional block attention module (CBAM) is introduced to refine the selection of significant pixels. Then, a waterfall atrous spatial pooling (WASP) module is used to capture multi-scale contextual information from the extracted features and increase the receptive field. Finally, the features are input into a deconvolution network to acquire the volumetric heat map, which is then processed by a soft argmax function to obtain the coordinates of the joints. In addition to the two learning strategies of transfer learning and synthetic occlusion, a self-supervised training method is used, in which 3D labels are constructed by epipolar geometry transformation to supervise the training of the network. Without the need for 3D ground truths for the dataset, accurate estimation of the 3D human pose can be realized from a single 2D image. The results show that the mean per joint position error (MPJPE) is 74.6 mm without the need for 3D ground-truth labels. Compared with other approaches, the proposed method achieves better results.
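The soft argmax step described in this abstract, which turns a heat map into differentiable joint coordinates, can be illustrated in a few lines of numpy. This is a generic sketch of the technique, not the paper's implementation; the temperature `beta` and map size are arbitrary choices.

```python
import numpy as np

def soft_argmax_2d(heatmap, beta=100.0):
    """Differentiable argmax: softmax over the map, then the expected (x, y)."""
    h, w = heatmap.shape
    flat = heatmap.flatten() * beta
    flat = flat - flat.max()                      # numerical stability
    probs = (np.exp(flat) / np.exp(flat).sum()).reshape(h, w)
    ys, xs = np.mgrid[0:h, 0:w]                   # pixel coordinate grids
    return (probs * xs).sum(), (probs * ys).sum()

# A heat map peaked at column 5, row 3: the expected coordinate lands on the peak.
hm = np.zeros((8, 8))
hm[3, 5] = 1.0
x, y = soft_argmax_2d(hm)                         # x ≈ 5.0, y ≈ 3.0
```

Because the output is a probability-weighted average rather than a hard index, gradients flow through it, which is what makes the heat-map-to-coordinate step trainable end to end.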
APA, Harvard, Vancouver, ISO, and other styles
3

Nguyen, Hung-Cuong, Thi-Hao Nguyen, Rafal Scherer, and Van-Hung Le. "Unified End-to-End YOLOv5-HR-TCM Framework for Automatic 2D/3D Human Pose Estimation for Real-Time Applications." Sensors 22, no. 14 (July 20, 2022): 5419. http://dx.doi.org/10.3390/s22145419.

Full text
Abstract:
Three-dimensional human pose estimation is widely applied in sports, robotics, and healthcare. In the past five years, CNN-based studies of 3D human pose estimation have been numerous and have yielded impressive results. However, studies often focus only on improving the accuracy of the estimation results. In this paper, we propose a fast, unified end-to-end model for estimating 3D human pose, called YOLOv5-HR-TCM (YOLOv5-HRet-Temporal Convolution Model). Our proposed model is based on the 2D-to-3D lifting approach for 3D human pose estimation and takes care of each step in the estimation process: person detection, 2D human pose estimation, and 3D human pose estimation. The proposed model combines best practices at each stage. It is evaluated on the Human3.6M dataset and compared with other methods at each step. The method achieves high accuracy without sacrificing processing speed: the whole pipeline runs at 3.146 FPS on a low-end computer. In particular, we propose a sports scoring application based on the deviation angle between the estimated 3D human posture and the standard (reference) origin. The average deviation angle evaluated on the Human3.6M dataset (Protocol #1) is 8.2 degrees.
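The sports-scoring idea in this abstract rests on a deviation angle between estimated and reference postures. A minimal sketch of such an angle computation for one limb; the vectors and values here are illustrative, not taken from the paper:

```python
import numpy as np

def deviation_angle(v_est, v_ref):
    """Angle in degrees between an estimated and a reference limb vector."""
    cos = np.dot(v_est, v_ref) / (np.linalg.norm(v_est) * np.linalg.norm(v_ref))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))  # clip guards rounding

# Hypothetical forearm vectors: the estimate tilts slightly toward the camera.
ref = np.array([0.0, 1.0, 0.0])
est = np.array([0.0, 1.0, 0.1])
angle = deviation_angle(est, ref)                 # ≈ 5.71 degrees
```

Averaging such per-limb angles over all bones and frames yields a single score of how far a performed motion deviates from the reference motion.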
APA, Harvard, Vancouver, ISO, and other styles
4

Yin, He, Chang Lv, and Yeqin Shao. "3D Human Pose Estimation Based on Transformer." Journal of Physics: Conference Series 2562, no. 1 (August 1, 2023): 012067. http://dx.doi.org/10.1088/1742-6596/2562/1/012067.

Full text
Abstract:
Currently, 3D human pose estimation has gradually become a popular research topic. Although various models based on deep neural networks have produced excellent performance, they still ignore the existence of multiple feasible pose solutions and suffer from a relatively fixed input length. To solve these issues, a coordinate transformer encoder based on the 2D pose is constructed to generate multiple feasible pose solutions, and multi-to-one pose mapping is employed to generate a reliable pose. A temporal transformer encoder is used to exploit the temporal dependencies of consecutive pose sequences, which avoids the relatively fixed input length caused by temporal dilated convolution. Extensive experiments indicate that our model achieves promising performance.
APA, Harvard, Vancouver, ISO, and other styles
5

Jiang, Longkui, Yuru Wang, and Weijia Li. "Regress 3D human pose from 2D skeleton with kinematics knowledge." Electronic Research Archive 31, no. 3 (2023): 1485–97. http://dx.doi.org/10.3934/era.2023075.

Full text
Abstract:
3D human pose estimation is a hot topic in the field of computer vision. It provides data support for tasks such as pose recognition, human tracking and action recognition, and is therefore widely applied in fields such as advanced human-computer interaction and intelligent monitoring. Estimating 3D human pose from a single 2D image is an ill-posed problem and is likely to cause low prediction accuracy, due to self-occlusion and depth ambiguity. This paper develops two types of human kinematics to improve the estimation accuracy. First, taking the 2D human body skeleton sequence obtained by a 2D human body pose detector as input, a temporal convolutional network is proposed to exploit movement periodicity in the temporal domain. Second, geometrical prior knowledge is introduced into the model to constrain the estimated pose to fit general kinematics knowledge. The experiments are conducted on the Human3.6M and MPI-INF-3DHP (Max Planck Institut Informatik 3D Human Pose) datasets, and the proposed model shows better generalization ability compared with the baseline and state-of-the-art models.
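Geometrical kinematics priors of the kind this abstract mentions are often realized as a bone-length consistency penalty on the predicted skeleton. The following is a hedged sketch under that assumption; the joint chain and reference lengths are hypothetical, not the paper's:

```python
import numpy as np

# Hypothetical 4-joint chain: pelvis -> spine -> neck -> head.
BONES = [(0, 1), (1, 2), (2, 3)]

def bone_length_penalty(pose3d, ref_lengths):
    """Sum of squared deviations of predicted bone lengths from reference lengths."""
    lengths = np.array([np.linalg.norm(pose3d[j] - pose3d[i]) for i, j in BONES])
    return float(((lengths - ref_lengths) ** 2).sum())

pose = np.array([[0, 0, 0], [0, 1, 0], [0, 2, 0], [0, 2.5, 0]], dtype=float)
ref = np.array([1.0, 1.0, 0.5])                   # metres, illustrative
penalty = bone_length_penalty(pose, ref)          # 0.0: pose matches the prior
```

Added to the regression loss, such a term discourages anatomically implausible predictions (e.g. a forearm that stretches between frames) even when depth is ambiguous.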
APA, Harvard, Vancouver, ISO, and other styles
6

Sun, Jun, Mantao Wang, Xin Zhao, and Dejun Zhang. "Multi-View Pose Generator Based on Deep Learning for Monocular 3D Human Pose Estimation." Symmetry 12, no. 7 (July 4, 2020): 1116. http://dx.doi.org/10.3390/sym12071116.

Full text
Abstract:
In this paper, we study the problem of monocular 3D human pose estimation based on deep learning. Due to single-view limitations, monocular human pose estimation cannot avoid the inherent occlusion problem. Common methods use multi-view 3D pose estimation to solve this problem; however, single-view images cannot be used directly in multi-view methods, which greatly limits practical applications. To address these issues, we propose a novel end-to-end network for monocular 3D human pose estimation. First, we propose a multi-view pose generator to predict multi-view 2D poses from the 2D pose in a single view. Second, we propose a simple but effective data augmentation method for generating multi-view 2D pose annotations, because the existing datasets (e.g., Human3.6M) do not contain a large number of 2D pose annotations in different views. Third, we employ a graph convolutional network to infer a 3D pose from the multi-view 2D poses. Experiments conducted on public datasets verify the effectiveness of our method, and the ablation studies show that our method improves the performance of existing 3D pose estimation networks.
APA, Harvard, Vancouver, ISO, and other styles
7

Wang, Jinbao, Shujie Tan, Xiantong Zhen, Shuo Xu, Feng Zheng, Zhenyu He, and Ling Shao. "Deep 3D human pose estimation: A review." Computer Vision and Image Understanding 210 (September 2021): 103225. http://dx.doi.org/10.1016/j.cviu.2021.103225.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Wu, Jianzhai, Dewen Hu, Fengtao Xiang, Xingsheng Yuan, and Jiongming Su. "3D human pose estimation by depth map." Visual Computer 36, no. 7 (September 3, 2019): 1401–10. http://dx.doi.org/10.1007/s00371-019-01740-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Kim, Jong-Wook, Jin-Young Choi, Eun-Ju Ha, and Jae-Ho Choi. "Human Pose Estimation Using MediaPipe Pose and Optimization Method Based on a Humanoid Model." Applied Sciences 13, no. 4 (February 20, 2023): 2700. http://dx.doi.org/10.3390/app13042700.

Full text
Abstract:
Seniors who live alone at home are at risk of falling and injuring themselves and, thus, may need a mobile robot that monitors and recognizes their poses automatically. Even though deep learning methods are actively evolving in this area, they have limitations in estimating poses that are absent or rare in training datasets. For a lightweight approach, an off-the-shelf 2D pose estimation method, a more sophisticated humanoid model, and a fast optimization method are combined to estimate joint angles for 3D pose estimation. As a novel idea, the depth ambiguity problem of 3D pose estimation is solved by adding a loss function penalizing deviation of the center of mass from the center of the supporting feet, together with penalty functions constraining joint angles to appropriate rotation ranges. To verify the proposed pose estimation method, six daily poses were estimated with a mean joint coordinate difference of 0.097 m and an average angle difference per joint of 10.017 degrees. In addition, to confirm practicality, videos of exercise activities and a scene of a person falling were filmed, and the joint angle trajectories were produced as the 3D estimation results. The optimized execution time per frame was measured at 0.033 s on a single-board computer (SBC) without GPU, showing the feasibility of the proposed method as a real-time system.
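The depth-ambiguity term described in this abstract penalizes horizontal deviation of the body's center of mass from the supporting feet. A simplified numpy sketch of that idea; the joint positions and segment masses are illustrative, and the paper's humanoid model is far more detailed:

```python
import numpy as np

def com_deviation(joint_pos, joint_mass, left_foot, right_foot):
    """Horizontal (ground-plane) distance between the mass-weighted centre of
    mass and the midpoint of the supporting feet. Convention: y is up."""
    com = (joint_pos * joint_mass[:, None]).sum(axis=0) / joint_mass.sum()
    support = (left_foot + right_foot) / 2.0
    return float(np.linalg.norm((com - support)[[0, 2]]))  # x-z plane only

# Two illustrative segments: torso (30 kg) and head (10 kg).
pos = np.array([[0.0, 1.0, 0.0], [0.1, 0.5, 0.0]])
mass = np.array([30.0, 10.0])
dev = com_deviation(pos, mass, np.array([-0.1, 0.0, 0.0]),
                    np.array([0.1, 0.0, 0.0]))    # 0.025 m
```

Minimizing this deviation during joint-angle optimization favors depth solutions in which the standing pose is physically balanced, which is what resolves the monocular depth ambiguity for upright poses.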
APA, Harvard, Vancouver, ISO, and other styles
10

Xia, Hailun, and Tianyang Zhang. "Self-Attention Network for Human Pose Estimation." Applied Sciences 11, no. 4 (February 18, 2021): 1826. http://dx.doi.org/10.3390/app11041826.

Full text
Abstract:
Estimating the positions of human joints from monocular single RGB images has been a challenging task in recent years. Despite great progress in human pose estimation with convolutional neural networks (CNNs), a central problem still exists: relationships and constraints, such as the symmetric relations of human structures, are not well exploited in previous CNN-based methods. Considering the effectiveness of combining local and nonlocal consistencies, we propose an end-to-end self-attention network (SAN) to alleviate this issue. In SANs, attention-driven, long-range dependency modeling is adopted between joints to compensate for local content and mine details from all feature locations. To enable an SAN for both 2D and 3D pose estimation, we also design a compatible, effective and general joint learning framework to mix the usage of data of different dimensions. We evaluate the proposed network on challenging benchmark datasets. The experimental results show that our method achieves competitive results on the Human3.6M, MPII and COCO datasets.
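At the core of a self-attention network of this kind is scaled dot-product attention across joint features, so that every joint can attend to every other joint regardless of distance in the skeleton. A generic numpy sketch of that operation; the weight matrices are random for illustration, whereas the paper's network learns them:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention across the rows (joints) of X."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])        # (joints, joints) affinities
    scores = scores - scores.max(axis=1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=1, keepdims=True)  # rows sum to 1
    return weights @ V                            # each joint mixes all joints

rng = np.random.default_rng(0)
X = rng.normal(size=(17, 8))                      # 17 joints, 8-dim features
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)               # shape (17, 8)
```

The output keeps one feature vector per joint, but each vector is now a weighted mixture over all joints, which is how long-range constraints such as left/right symmetry can be captured.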
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "3D Human Pose Estimation"

1

Budaraju, Sri Datta. "Unsupervised 3D Human Pose Estimation." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-291435.

Full text
Abstract:
The thesis proposes an unsupervised representation learning method to predict 3D human pose from a 2D skeleton via a VAE-GAN (Variational Autoencoder Generative Adversarial Network) hybrid network. The method learns to lift poses from 2D to 3D using self-supervision and adversarial learning techniques. The method does not use images, heatmaps, 3D pose annotations, paired/unpaired 2D-to-3D skeletons, 3D priors, synthetic 2D skeletons, multi-view or temporal information in any shape or form. The 2D skeleton input is taken by a VAE that encodes it in a latent space and then decodes that latent representation to a 3D pose. The 3D pose is then reprojected to 2D for a constrained, self-supervised optimization using the input 2D pose. In parallel, the 3D pose is also randomly rotated and reprojected to 2D to generate a 'novel' 2D view for unconstrained adversarial optimization using a discriminator network. The combination of the optimizations of the original and the novel 2D views of the predicted 3D pose results in realistic 3D pose generation. The thesis shows that the encoding and decoding process of the VAE addresses the major challenge of erroneous and incomplete skeletons from 2D detection networks as inputs, and that the variance of the VAE can be altered to obtain various plausible 3D poses for a given 2D input. Additionally, the latent representation could be used for cross-modal training and many downstream applications. The results on the Human3.6M dataset outperform previous unsupervised approaches with less model complexity while addressing more hurdles in scaling the task to the real world.
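The random-rotate-and-reproject step that feeds the discriminator in this kind of unsupervised lifting can be illustrated with a toy orthographic camera. This is a simplified sketch, not the thesis code; the thesis defines its own rotation sampling and projection model:

```python
import numpy as np

def rotate_y(pose3d, theta):
    """Rotate a (J, 3) pose about the vertical (y) axis by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    return pose3d @ R.T

def project(pose3d):
    """Toy orthographic reprojection: drop the depth coordinate."""
    return pose3d[:, :2]

# A lifted pose with two joints; rotating 90° swaps depth into the x axis,
# producing a 'novel' 2D view for the discriminator to judge.
pose3d = np.array([[0.0, 0.0, 0.0], [0.2, 1.0, 0.5]])
novel2d = project(rotate_y(pose3d, np.pi / 2))
```

The self-supervised loss compares `project(pose3d)` with the input 2D skeleton, while the adversarial loss judges `novel2d`; together they force the predicted depth to be plausible from unseen viewpoints.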
APA, Harvard, Vancouver, ISO, and other styles
2

Wang, Jianquan. "A Human Kinetic Dataset and a Hybrid Model for 3D Human Pose Estimation." Thesis, Université d'Ottawa / University of Ottawa, 2020. http://hdl.handle.net/10393/41437.

Full text
Abstract:
Human pose estimation represents the skeleton of a person in color or depth images to improve a machine’s understanding of human movement. 3D human pose estimation uses a three-dimensional skeleton to represent the human body posture, which is more stereoscopic than a two-dimensional skeleton. Therefore, 3D human pose estimation can enable machines to play a role in physical education and health recovery, reducing labor costs and the risk of disease transmission. However, the existing datasets for 3D pose estimation do not involve fast motions that would cause optical blur for a monocular camera but would allow the subjects’ limbs to move in a more extensive range of angles. The existing models cannot guarantee both real-time performance and high accuracy, which are essential in physical education and health recovery applications. To improve real-time performance, researchers have tried to minimize the size of the model and have studied more efficient deployment methods. To improve accuracy, researchers have tried to use heat maps or point clouds to represent features, but this increases the difficulty of model deployment. To address the lack of datasets that include fast movements and easy-to-deploy models, we present a human kinetic dataset called the Kivi dataset and a hybrid model that combines the benefits of a heat map-based model and an end-to-end model for 3D human pose estimation. We describe the process of data collection and cleaning in this thesis. Our proposed Kivi dataset contains large-scale movements of humans. In the dataset, 18 joint points represent the human skeleton. We collected data from 12 people, and each person performed 38 sets of actions. Therefore, each frame of data has a corresponding person and action label. We design a preliminary model and propose an improved model to infer 3D human poses in real time. 
When validating our method on the Invariant Top-View (ITOP) dataset, we found that compared with the initial model, our improved model improves the mAP@10cm by 29%. When testing on the Kivi dataset, our improved model improves the mAP@10cm by 15.74% compared to the preliminary model. Our improved model can reach 65.89 frames per second (FPS) on the TensorRT platform.
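The mAP@10cm figures quoted above are, at heart, the fraction of joints predicted within 10 cm of ground truth. A simplified per-frame sketch of such a metric (the averaging over joint classes and frames in the full ITOP metric is omitted, and the poses below are fabricated for illustration):

```python
import numpy as np

def pck(pred, gt, threshold=0.10):
    """Fraction of joints whose prediction lies within `threshold` metres
    of the ground-truth joint position."""
    dists = np.linalg.norm(pred - gt, axis=-1)
    return float((dists < threshold).mean())

gt = np.zeros((4, 3))                             # 4 joints at the origin
pred = np.array([[0.05, 0.0, 0.0],                # within 10 cm
                 [0.0, 0.2, 0.0],                 # miss
                 [0.02, 0.02, 0.02],              # within 10 cm
                 [0.3, 0.0, 0.0]])                # miss
score = pck(pred, gt)                             # 2 of 4 joints -> 0.5
```

Reporting improvements in this metric (e.g. the 29% and 15.74% gains above) therefore means more joints landing inside the 10 cm sphere around their true positions.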
APA, Harvard, Vancouver, ISO, and other styles
3

Gong, Wenjuan. "3D Motion Data aided Human Action Recognition and Pose Estimation." Doctoral thesis, Universitat Autònoma de Barcelona, 2013. http://hdl.handle.net/10803/116189.

Full text
Abstract:
In this work, we explore human action recognition and pose estimation problems. Different from traditional works of learning from 2D images or video sequences and their annotated output, we seek to solve the problems with additional 3D motion capture information, which helps to fill the gap between 2D image features and human interpretations.
APA, Harvard, Vancouver, ISO, and other styles
4

Yu, Tsz-Ho. "Classification and pose estimation of 3D shapes and human actions." Thesis, University of Cambridge, 2014. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.708443.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Darby, John. "3D Human Motion Tracking and Pose Estimation using Probabilistic Activity Models." Thesis, Manchester Metropolitan University, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.523145.

Full text
Abstract:
This thesis presents work on generative approaches to human motion tracking and pose estimation where a geometric model of the human body is used for comparison with observations. The existing generative tracking literature can be quite clearly divided between two groups. First, approaches that attempt to solve a difficult high-dimensional inference problem in the body model's full or ambient pose space, recovering freeform or unknown activity. Second, approaches that restrict inference to a low-dimensional latent embedding of the full pose space, recovering activity for which training data is available, or known activity. Significant advances have been made in each of these subgroups. Given sufficiently rich multiocular observations and plentiful computational resources, high-dimensional approaches have been proven to track fast and complex unknown activities robustly. Conversely, low-dimensional approaches have been able to support monocular tracking and to significantly reduce computational costs for the recovery of known activity. However, their competing advantages have, although complementary, remained disjoint. The central aim of this thesis is to combine low- and high-dimensional generative tracking techniques to benefit from the best of both approaches.
First, a simple generative tracking approach is proposed for tracking known activities in a latent pose space using only monocular or binocular observations. A hidden Markov model (HMM) is used to provide dynamics and constrain a particle-based search for poses. The ability of the HMM to classify as well as synthesise poses means that the approach naturally extends to the modelling of a number of different known activities in a single joint-activity latent space. Second, an additional low-dimensional approach is introduced to permit transitions between segmented known-activity training data by allowing particles to move between activity manifolds.
Both low-dimensional approaches are then fairly and efficiently combined with a simultaneous high-dimensional generative tracking task in the ambient pose space. This combination allows for the recovery of sequences containing multiple known and unknown human activities at an appropriate (dynamic) computational cost. Finally, a rich hierarchical embedding of the ambient pose space is investigated. This representation allows inference to progress from a single full-body or global non-linear latent pose space, through a number of gradually smaller part-based latent models, to the full ambient pose space. By preserving long-range correlations present in training data, the positions of occluded limbs can be inferred during tracking. Alternatively, by breaking the implied coordination between part-based models, novel activity combinations, or composite activity, may be recovered.
APA, Harvard, Vancouver, ISO, and other styles
6

Borodulina, A. (Anastasiia). "Application of 3D human pose estimation for motion capture and character animation." Master's thesis, University of Oulu, 2019. http://jultika.oulu.fi/Record/nbnfioulu-201906262670.

Full text
Abstract:
Interest in motion capture (mocap) technology is growing every day, and the number of possible applications is multiplying. But such systems are very expensive and are not affordable for personal use. Based on that, this thesis presents a framework that can produce mocap data from regular RGB video and then use it to animate a 3D character according to the movement of the person in the original video. To extract the mocap data from the input video, one of the three 3D pose estimation (PE) methods that are available within the scope of the project is used to determine where the joints of the person in each video frame are located in the 3D space. The 3D positions of the joints are used as mocap data and are imported to Blender, which contains a simple 3D character. The data is assigned to the corresponding joints of the character to animate it. To test how the created animation will work in a different environment, it was imported to the Unity game engine and applied to the native 3D character. The evaluation of the produced animations from Blender and Unity showed that even though the quality of the animation might not be perfect, the test subjects found this approach to animation promising. In addition, during the evaluation, a few issues were discovered and considered for future framework development.
APA, Harvard, Vancouver, ISO, and other styles
7

Burenius, Magnus. "Human 3D Pose Estimation in the Wild : using Geometrical Models and Pictorial Structures." Doctoral thesis, KTH, Datorseende och robotik, CVAP, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-138136.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Mehta, Dushyant [Verfasser]. "Real-time 3D human body pose estimation from monocular RGB input / Dushyant Mehta." Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2020. http://d-nb.info/1220691135/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Norman, Jacob. "3D POSE ESTIMATION IN THE CONTEXT OF GRIP POSITION FOR PHRI." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-55166.

Full text
Abstract:
For human-robot interaction with the intent to grip a human arm, it is necessary that the ideal gripping location can be identified. In this work, the gripping location is situated on the arm, and thus it can be extracted using the positions of the wrist and elbow joints. To achieve this, human pose estimation is proposed, as there exist robust methods that work both in and outside of lab environments. One such example is OpenPose, which, thanks to the COCO and MPII datasets, has recorded impressive results in a variety of different scenarios in real time. However, most of the images in these datasets are taken from a camera mounted at chest height, of people who are oriented upright in the majority of the images. This presents the potential problem that prone humans, which are the primary focus of this project, cannot be detected, especially if seen from an angle that makes the human appear upside down in the camera frame. To remedy this, two different approaches were tested, both aimed at creating a rotation-invariant 2D pose estimation method. The first method rotates the COCO training data in an attempt to create a model that can find humans regardless of their orientation in the image. The second approach adds a RotationNet as a preprocessing step to correctly orient the images so that OpenPose can be used to estimate the 2D pose, before rotating back the resulting skeletons.
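Rotating the COCO training data, the first approach above, requires rotating the keypoint annotations together with the images. A minimal sketch of the keypoint part (the image rotation itself is omitted; the keypoint values and rotation center are illustrative):

```python
import numpy as np

def rotate_keypoints(kps, theta, center):
    """Rotate (J, 2) image keypoints by theta radians about `center`,
    mirroring the rotation applied to the training image."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return (kps - center) @ R.T + center

# One keypoint 10 px right of the image center, rotated 180 degrees:
kps = np.array([[110.0, 100.0]])
rotated = rotate_keypoints(kps, np.pi, np.array([100.0, 100.0]))
```

Applying the same rotation to image and annotations keeps each (image, skeleton) pair consistent, so the augmented dataset teaches the detector to find people at arbitrary orientations, including upside down.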
APA, Harvard, Vancouver, ISO, and other styles
10

Fathollahi, Ghezelghieh Mona. "Estimation of Human Poses Categories and Physical Object Properties from Motion Trajectories." Scholar Commons, 2017. http://scholarcommons.usf.edu/etd/6835.

Full text
Abstract:
Despite the impressive advancements in people detection and tracking, safety is still a key barrier to the deployment of autonomous vehicles in urban environments [1]. For example, in non-autonomous technology, there is an implicit communication between the people crossing the street and the driver to make sure they have communicated their intent to the driver. Therefore, it is crucial for the autonomous car to infer the future intent of the pedestrian quickly. We believe that human body orientation with respect to the camera can help the intelligent unit of the car to anticipate the future movement of pedestrians. To further improve the safety of pedestrians, it is important to recognize whether they are distracted, carrying a baby, or pushing a shopping cart. Therefore, estimating the fine-grained 3D pose, i.e. the (x,y,z)-coordinates of the body joints, provides additional information for the decision-making units of driverless cars. In this dissertation, we have proposed a deep learning-based solution to classify categorized body orientation in still images. We have also proposed an efficient framework based on our body orientation classification scheme to estimate human 3D pose in monocular RGB images. Furthermore, we have utilized the dynamics of human motion to infer body orientation in image sequences. To achieve this, we employ a recurrent neural network model to estimate continuous body orientation from the trajectories of body joints in the image plane. The proposed body orientation and 3D pose estimation frameworks are tested on the largest 3D pose estimation benchmark, Human3.6M (both in still images and video), and we have proved the efficacy of our approach by benchmarking it against state-of-the-art approaches. Another critical feature of a self-driving car is the ability to avoid an obstacle. In the current prototypes, the car either stops or changes its lane even if this causes other traffic disruptions.
However, there are situations when it is preferable to collide with the object, for example a foam box, rather than take an action that could result in a much more serious accident than collision with the object. In this dissertation, for the first time, we have presented a novel method to discriminate between the physical properties of these types of objects, such as bounciness and elasticity, based on their motion characteristics. The proposed algorithm is tested on synthetic data and, as a proof of concept, its effectiveness on a limited set of real-world data is demonstrated.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "3D Human Pose Estimation"

1

Brauer, Jürgen. Human Pose Estimation With Implicit Shape Models. Saint Philip Street Press, 2020.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "3D Human Pose Estimation"

1

Zhou, Zhiheng, Yue Cao, Xuanying Zhu, Henry Gardner, and Hongdong Li. "3D Human Pose Estimation with 2D Human Pose and Depthmap." In Communications in Computer and Information Science, 267–74. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-63820-7_30.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

He, Xuesheng, Huabin Wang, Yuan Qin, and Liang Tao. "3D Human Pose Estimation with Grouping Regression." In Image and Graphics Technologies and Applications, 138–49. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-9917-6_14.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Haque, Albert, Boya Peng, Zelun Luo, Alexandre Alahi, Serena Yeung, and Li Fei-Fei. "Towards Viewpoint Invariant 3D Human Pose Estimation." In Computer Vision – ECCV 2016, 160–77. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-46448-0_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Guo, Yu, Lin Zhao, Shanshan Zhang, and Jian Yang. "Coarse-to-Fine 3D Human Pose Estimation." In Lecture Notes in Computer Science, 579–92. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-34113-8_48.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Cadavid, Steven, and W. Scott Selbie. "3D Dynamic Pose Estimation from Markerless Optical Data." In Handbook of Human Motion, 197–219. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-14418-4_160.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Cadavid, Steven, and Scott W. Selbie. "3D Dynamic Pose Estimation from Markerless Optical Data." In Handbook of Human Motion, 1–23. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-30808-1_160-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Cadavid, Steven, and W. Scott Selbie. "3D Dynamic Pose Estimation from Markerless Optical Data." In Handbook of Human Motion, 1–23. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-30808-1_160-2.

8

Gall, Juergen, Angela Yao, and Luc Van Gool. "2D Action Recognition Serves 3D Human Pose Estimation." In Computer Vision – ECCV 2010, 425–38. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15558-1_31.

9

Valmadre, Jack, and Simon Lucey. "Deterministic 3D Human Pose Estimation Using Rigid Structure." In Computer Vision – ECCV 2010, 467–80. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15558-1_34.

10

Amin, Sikandar, Philipp Müller, Andreas Bulling, and Mykhaylo Andriluka. "Test-Time Adaptation for 3D Human Pose Estimation." In Lecture Notes in Computer Science, 253–64. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-11752-2_20.


Conference papers on the topic "3D Human Pose Estimation"

1

Chen, Ching-Hang, and Deva Ramanan. "3D Human Pose Estimation = 2D Pose Estimation + Matching." In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2017. http://dx.doi.org/10.1109/cvpr.2017.610.

2

Wang, Min, Xipeng Chen, Wentao Liu, Chen Qian, Liang Lin, and Lizhuang Ma. "DRPose3D: Depth Ranking in 3D Human Pose Estimation." In Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18). California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/136.

Abstract:
In this paper, we propose a two-stage depth-ranking-based method (DRPose3D) to tackle the problem of 3D human pose estimation. Instead of accurate 3D positions, the depth ranking can be identified by humans intuitively and learned more easily by a deep neural network by solving classification problems. Moreover, depth ranking contains rich 3D information. It prevents the 2D-to-3D pose regression in two-stage methods from being ill-posed. In our method, firstly, we design a Pairwise Ranking Convolutional Neural Network (PRCNN) to extract depth rankings of human joints from images. Secondly, a coarse-to-fine 3D Pose Network (DPNet) is proposed to estimate 3D poses from both depth rankings and 2D human joint locations. Additionally, to improve the generality of our model, we introduce a statistical method to augment depth rankings. Our approach outperforms the state-of-the-art methods on the Human3.6M benchmark for all three testing protocols, indicating that depth ranking is an essential geometric feature which can be learned to improve 3D pose estimation.
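The first stage described in this abstract classifies, for every pair of joints, which one lies closer to the camera. As an illustration only (the function name and tolerance below are our own, not from the paper), the following sketch constructs the pairwise depth-ranking matrix that such a network would be trained to predict from an image:

```python
import numpy as np

def pairwise_depth_ranking(depths, tol=1e-3):
    """Build a pairwise depth-ranking matrix from per-joint camera depths.

    R[i, j] =  1 if joint i is closer to the camera than joint j,
              -1 if joint i is farther, and
               0 if the two joints are at (approximately) the same depth.
    """
    d = np.asarray(depths, dtype=float)
    # diff[i, j] = depth of joint j minus depth of joint i
    diff = d[None, :] - d[:, None]
    return np.where(diff > tol, 1, np.where(diff < -tol, -1, 0))

# Toy example: joint 0 and joint 2 share a depth; joint 1 is farther away.
R = pairwise_depth_ranking([1.0, 2.0, 1.0])
```

The matrix is antisymmetric by construction, which is why the abstract frames the target as a classification problem per joint pair; the second stage (DPNet in the paper) would then consume these rankings together with 2D joint locations.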
3

Bao, Wenxia, Zhongyu Ma, Dong Liang, Xianjun Yang, and Tao Niu. "Pose ResNet: A 3D human pose estimation network model." In 2023 2nd International Conference on Big Data, Information and Computer Network (BDICN). IEEE, 2023. http://dx.doi.org/10.1109/bdicn58493.2023.00061.

4

Jack, Dominic, Frederic Maire, Anders Eriksson, and Sareh Shirazi. "Adversarially Parameterized Optimization for 3D Human Pose Estimation." In 2017 International Conference on 3D Vision (3DV). IEEE, 2017. http://dx.doi.org/10.1109/3dv.2017.00026.

5

Meng, Wenming, Tao Hu, and Li Shuai. "3D Human Pose Estimation With Adversarial Learning." In 2019 International Conference on Virtual Reality and Visualization (ICVRV). IEEE, 2019. http://dx.doi.org/10.1109/icvrv47840.2019.00024.

6

"Precise 3D Pose Estimation of Human Faces." In International Conference on Computer Vision Theory and Applications. SCITEPRESS - Science and Technology Publications, 2014. http://dx.doi.org/10.5220/0004741706180625.

7

Greif, Thomas, Rainer Lienhart, and Debabrata Sengupta. "Monocular 3D human pose estimation by classification." In 2011 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2011. http://dx.doi.org/10.1109/icme.2011.6011915.

8

"3D Human Body Pose Estimation by Superquadrics." In International Conference on Computer Vision Theory and Applications. SciTePress - Science and Technology Publications, 2012. http://dx.doi.org/10.5220/0003862202940302.

9

Tao, Siting, and Zhi Zhang. "Video-Based 3D Human Pose Estimation Research." In 2022 IEEE 17th Conference on Industrial Electronics and Applications (ICIEA). IEEE, 2022. http://dx.doi.org/10.1109/iciea54703.2022.10005955.

10

Fang, Zhenghui, Anna Wang, Chunguang Bu, and Chen Liu. "3D Human Pose Estimation Using RGBD Camera." In 2021 IEEE International Conference on Computer Science, Electronic Information Engineering and Intelligent Control Technology (CEI). IEEE, 2021. http://dx.doi.org/10.1109/cei52496.2021.9574486.


Reports on the topic "3D Human Pose Estimation"

1

Video-based 3D pose estimation for residential roofing (dataset). U.S. Department of Health and Human Services, Public Health Service, Centers for Disease Control and Prevention, National Institute for Occupational Safety and Health, August 2022. http://dx.doi.org/10.26616/nioshrd-1042-2022-0.
