To see the other types of publications on this topic, follow the link: Skeletal animation.

Journal articles on the topic "Skeletal animation"


Consult the top 50 journal articles for your research on the topic "Skeletal animation".

Next to every work in the list of references there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read its abstract online, if these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Xu, Tianchen, Mo Chen, Ming Xie, and Enhua Wu. "A Skinning Method in Real-time Skeletal Character Animation." International Journal of Virtual Reality 10, no. 3 (January 1, 2011): 25–31. http://dx.doi.org/10.20870/ijvr.2011.10.3.2818.

Full text of the source
Abstract:
In skeletal character animation, ensuring the quality of the transition between key frames is of crucial importance. The lack of properly defined motion ranges leaves the animator no choice but to intervene in the result afterwards, based on the camera perspective, creating considerable extra work to modify or clean up the animation curves. Although a number of methods have been proposed in recent years, such as Linear Blending Skinning (LBS), they still have shortcomings in some specific cases, one of which is the obviously unnatural deformation around joint areas. The primary investigation in this paper addresses that problem and improves the real-time rendering framework to deliver satisfactory skinning in the aforementioned scenario with the assistance of GPU computation.
APA, Harvard, Vancouver, ISO, and other styles
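For context, the Linear Blending Skinning (LBS) mentioned in this abstract has a standard formulation; below is a minimal NumPy sketch of it. The function name, array shapes, and the assumption that bone matrices are already premultiplied by their inverse bind matrices are illustrative choices, not details taken from the paper.

import numpy as np

def linear_blend_skinning(rest_verts, bone_matrices, weights):
    """Deform rest-pose vertices by a weighted blend of bone transforms.

    rest_verts    : (V, 3) rest-pose vertex positions
    bone_matrices : (B, 4, 4) current-pose bone transforms, assumed to be
                    premultiplied by the inverse bind matrices by the caller
    weights       : (V, B) skinning weights, each row summing to 1
    """
    # Homogeneous coordinates for the rest-pose vertices.
    hom = np.hstack([rest_verts, np.ones((rest_verts.shape[0], 1))])  # (V, 4)
    # Transform every vertex by every bone: (B, V, 4).
    per_bone = np.einsum('bij,vj->bvi', bone_matrices, hom)
    # Blend the per-bone results with the skinning weights: (V, 4).
    blended = np.einsum('vb,bvi->vi', weights, per_bone)
    return blended[:, :3]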
2

Pelechano, Nuria, Bernhard Spanlang, and Alejandro Beacco. "Avatar Locomotion in Crowd Simulation." International Journal of Virtual Reality 10, no. 1 (January 1, 2011): 13–19. http://dx.doi.org/10.20870/ijvr.2011.10.1.2796.

Full text of the source
Abstract:
This paper presents an Animation Planning Mediator (APM) designed to synthesize animations efficiently for virtual characters in real-time crowd simulation. From a set of animation clips, the APM selects the most appropriate one and modifies the skeletal configuration of each character to satisfy desired constraints (e.g. eliminating foot-sliding or restricting upper-body torsion), while still providing natural-looking animations. We use a hardware-accelerated character animation library to blend animations, increasing the number of possible locomotion types. The APM allows the crowd simulation module to maintain control of path planning, collision avoidance and response. A key advantage of our approach is that the APM can be integrated with any crowd simulator working in continuous space. We show visual results achieved in real time for several hundred agents, as well as the quantitative accuracy.
APA, Harvard, Vancouver, ISO, and other styles
3

Jiang, Na, and Wei Zhao. "Study on Skeletal Animation in Virtual Reality." Applied Mechanics and Materials 433-435 (October 2013): 434–37. http://dx.doi.org/10.4028/www.scientific.net/amm.433-435.434.

Full text of the source
Abstract:
To take full advantage of skeletal animation, different models should be treated separately during the drawing process. For sophisticated characters, skeletal animation can apply binary compression and a method of storing common sequences. Meanwhile, advanced programmable GPU technology is brought into the process of updating skeletal animation vertices, and its performance is analysed and compared. The results show that the new algorithm greatly reduces RAM utilization, and that introducing the GPU computing method can effectively assist the CPU in completing a large number of graphics operations, reducing CPU utilization and thereby fundamentally improving the efficiency of skeletal animation rendering.
APA, Harvard, Vancouver, ISO, and other styles
4

Seron, F. J., R. Rodriguez, E. Cerezo, and A. Pina. "Adding support for high-level skeletal animation." IEEE Transactions on Visualization and Computer Graphics 8, no. 4 (October 2002): 360–72. http://dx.doi.org/10.1109/tvcg.2002.1044521.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
5

Li, Jituo, Guodong Lu, and Juntao Ye. "Automatic skinning and animation of skeletal models." Visual Computer 27, no. 6-8 (April 21, 2011): 585–94. http://dx.doi.org/10.1007/s00371-011-0585-8.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
6

Paduraru, Ciprian, and Miruna Paduraru. "Techniques for Skeletal-Based Animation in Massive Crowd Simulations." Computers 11, no. 2 (February 4, 2022): 21. http://dx.doi.org/10.3390/computers11020021.

Full text of the source
Abstract:
Crowd systems play an important role in virtual environment applications, such as those used in entertainment, education, training, and different simulation systems. Performance and scalability are key factors, and it is desirable for crowds to be simulated with as few resources as possible while providing variety and realism for agents. This paper focuses on improving the performance, variety, and usability of crowd animation systems. Performing the blending operation on the Graphics Processing Unit (GPU) side requires no additional memory other than the source and target animation streams and greatly increases the number of agents that can simultaneously transition from one state to another. A time dilation offset feature helps applications with a large number of animation assets and/or agents to achieve sufficient visual quality, variety, and good performance at the same time by moving animation streams between the running and paused states at runtime. Splitting agents into parts not only reduces asset creation costs by eliminating the need to create permutations of skeletons and assets but also allows users to attach parts dynamically to agents.
APA, Harvard, Vancouver, ISO, and other styles
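The blending of source and target animation streams described in this abstract is, at its core, a per-joint interpolation. Below is a minimal CPU-side sketch in Python of such a blend, assuming each pose stores a translation and a rotation quaternion per joint; the paper performs the equivalent operation on the GPU, and the data layout here is an illustrative assumption.

import numpy as np

def blend_pose(src_pose, dst_pose, t):
    """Blend two skeletal poses during a state transition.

    src_pose, dst_pose : dicts {joint_name: (translation (3,), quaternion (4,))}
    t                  : blend factor in [0, 1]; 0 = source stream, 1 = target stream
    """
    out = {}
    for joint, (p0, q0) in src_pose.items():
        p1, q1 = dst_pose[joint]
        # Translations blend linearly.
        p = (1.0 - t) * np.asarray(p0, float) + t * np.asarray(p1, float)
        # Quaternions: pick the shorter arc, then a normalised linear blend (nlerp).
        q0, q1 = np.asarray(q0, float), np.asarray(q1, float)
        if np.dot(q0, q1) < 0.0:
            q1 = -q1
        q = (1.0 - t) * q0 + t * q1
        q /= np.linalg.norm(q)
        out[joint] = (p, q)
    return out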
7

Rohmer, Damien, Marco Tarini, Niranjan Kalyanasundaram, Faezeh Moshfeghifar, Marie-Paule Cani, and Victor Zordan. "Velocity Skinning for Real‐time Stylized Skeletal Animation." Computer Graphics Forum 40, no. 2 (May 2021): 549–61. http://dx.doi.org/10.1111/cgf.142654.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
8

Bao, Wenrui. "The Application of Intelligent Algorithms in the Animation Design of 3D Graphics Engines." International Journal of Gaming and Computer-Mediated Simulations 13, no. 2 (April 2021): 26–37. http://dx.doi.org/10.4018/ijgcms.2021040103.

Full text of the source
Abstract:
With the rapid improvement of computer hardware capabilities and the reduction of cost, the quality of game graphics has made a qualitative breakthrough, reaching or exceeding the visual quality of many dedicated virtual reality engines. On the basis of the design and implementation of a virtual reality 3D engine, a rendering queue management method is proposed to improve the frame rate. Based on the object-oriented design method, an emitter-regulator particle rendering mode, and traditional bone-skin animation technology, the key-structure technology in skeletal animation is analyzed, and an animation controller used to control animation playback and key-structure interpolation is designed, achieving the desired animation effect. Finally, a prototype system based on the engine is implemented.
APA, Harvard, Vancouver, ISO, and other styles
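The animation controller and key-structure interpolation mentioned in this abstract boil down to sampling keyframe channels over time. A minimal sketch of linear keyframe sampling is given below; the function name and channel layout are illustrative assumptions rather than the paper's implementation.

import bisect

def sample_channel(times, values, t):
    """Sample a scalar keyframe channel at time t with linear interpolation.

    times  : sorted list of keyframe times
    values : keyframe values, same length as times
    """
    if t <= times[0]:
        return values[0]
    if t >= times[-1]:
        return values[-1]
    i = bisect.bisect_right(times, t)          # first keyframe after t
    t0, t1 = times[i - 1], times[i]
    alpha = (t - t0) / (t1 - t0)
    return (1.0 - alpha) * values[i - 1] + alpha * values[i]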
9

Xie, Kong Kai, Yan Chun Shen, and Li Ni Ma. "The Driving Mechanism of Virtual Human's Action and Facial Expression Based on OSG." Applied Mechanics and Materials 536-537 (April 2014): 386–89. http://dx.doi.org/10.4028/www.scientific.net/amm.536-537.386.

Full text of the source
Abstract:
To address the poor code quality, low rendering efficiency, and limited scalability of existing virtual human driving mechanisms, the 3D rendering engine Open Scene Graph (OSG) is used to achieve efficient simulation and expression of virtual human actions. Firstly, virtual human motion animation is achieved on the basis of skeletal animation and a spherical interpolation algorithm. Then, generating methods for virtual facial expression animation related to behaviour awareness are designed using a cubic Bézier interpolation algorithm and morph-based deformation animation technology. Finally, using the animation technology in OSG, the synthesis of virtual human motion and expression is realized under parameter control. Experiments confirm that, for the same number of scene nodes, the OSG engine code is more concise and renders faster.
APA, Harvard, Vancouver, ISO, and other styles
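The spherical interpolation algorithm referred to in this abstract is usually quaternion slerp. A minimal sketch is shown below; the (w, x, y, z) component order and the nlerp fallback threshold are illustrative assumptions.

import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between two unit quaternions.

    q0, q1 : unit quaternions as length-4 arrays (w, x, y, z)
    t      : interpolation parameter in [0, 1]
    """
    q0 = np.asarray(q0, dtype=float)
    q1 = np.asarray(q1, dtype=float)
    dot = np.dot(q0, q1)
    if dot < 0.0:               # take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:            # nearly parallel: fall back to nlerp
        q = (1.0 - t) * q0 + t * q1
        return q / np.linalg.norm(q)
    theta = np.arccos(np.clip(dot, -1.0, 1.0))
    s = np.sin(theta)
    return (np.sin((1.0 - t) * theta) * q0 + np.sin(t * theta) * q1) / s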
10

Huang, Peng, Margara Tejera, John Collomosse, and Adrian Hilton. "Hybrid Skeletal-Surface Motion Graphs for Character Animation from 4D Performance Capture." ACM Transactions on Graphics 34, no. 2 (March 2, 2015): 1–14. http://dx.doi.org/10.1145/2699643.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
11

Xiang, Donglai, Fabian Prada, Timur Bagautdinov, Weipeng Xu, Yuan Dong, He Wen, Jessica Hodgins, and Chenglei Wu. "Modeling clothing as a separate layer for an animatable human avatar." ACM Transactions on Graphics 40, no. 6 (December 2021): 1–15. http://dx.doi.org/10.1145/3478513.3480545.

Full text of the source
Abstract:
We have recently seen great progress in building photorealistic animatable full-body codec avatars, but generating high-fidelity animation of clothing is still difficult. To address these difficulties, we propose a method to build an animatable clothed body avatar with an explicit representation of the clothing on the upper body from multi-view captured videos. We use a two-layer mesh representation to register each 3D scan separately with the body and clothing templates. In order to improve the photometric correspondence across different frames, texture alignment is then performed through inverse rendering of the clothing geometry and texture predicted by a variational autoencoder. We then train a new two-layer codec avatar with separate modeling of the upper clothing and the inner body layer. To learn the interaction between the body dynamics and clothing states, we use a temporal convolution network to predict the clothing latent code based on a sequence of input skeletal poses. We show photorealistic animation output for three different actors, and demonstrate the advantage of our clothed-body avatars over the single-layer avatars used in previous work. We also show the benefit of an explicit clothing model that allows the clothing texture to be edited in the animation output.
APA, Harvard, Vancouver, ISO, and other styles
12

Lin, Hwai-Ting, Yasuo Nakamura, Fong-Chin Su, Jun Hashimoto, Katsuya Nobuhara, and Edmund Y. S. Chao. "Use of Virtual, Interactive, Musculoskeletal System (VIMS) in Modeling and Analysis of Shoulder Throwing Activity." Journal of Biomechanical Engineering 127, no. 3 (January 1, 2005): 525–30. http://dx.doi.org/10.1115/1.1894387.

Full text of the source
Abstract:
Our purpose in this study was to apply the virtual, interactive, musculoskeletal system (VIMS) software to the modeling and biomechanical analysis of the glenohumeral joint during a baseball pitching activity. The skeletal model was taken from the VIMS library, and muscle fiber attachment sites were derived from the Visible Human dataset. The changes in muscular moment arms and function are mainly due to the large humeral motion involved during baseball pitching. The graphic animation of the anatomic system using the VIMS software is an effective tool to model and visualize the complex anatomical structure of the shoulder for biomechanical analysis.
APA, Harvard, Vancouver, ISO, and other styles
13

Sulema, Yevgeniya, and Ihor Los. "LEVELS-OF-DETAIL GENERATION METHOD FOR SKELETAL MESHES." System technologies 6, no. 125 (December 27, 2019): 3–14. http://dx.doi.org/10.34185/1562-9945-6-125-2019-01.

Full text of the source
Abstract:
This paper is devoted to the development of an algorithm for Levels-Of-Detail generation from skinned meshes. Animated meshes, unlike static ones, cannot be simplified without redistributing or recalculating bone weights. In some cases, the objects of a rendered scene carry redundant detail: their size on screen, the distance from the virtual camera, and other factors are such that there is no sense in displaying these objects in their full complexity, as doing so may significantly increase the time needed to render one frame. One solution is to create a set of Levels-Of-Detail for each object (a set of meshes and/or textures which represent the same object but with a lower level of detail) and to substitute them for the original object when necessary. The simplification of visual models is especially important for visualising digital twins of real-world objects, subjects, or processes within the digital twin technology. An analysis of existing algorithms for Levels-Of-Detail generation for animated meshes is presented and discussed, and an improved method for Levels-Of-Detail generation is introduced. The proposed method is based on Houle and Poulin's animated mesh simplification, but differs in two core respects: the weights of the resulting vertices are interpolated rather than simply copied, and multiple poses are used as the simplification input. These new features make it possible to simplify animated meshes without significant drawbacks in animation quality or mesh optimization.
APA, Harvard, Vancouver, ISO, and other styles
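The core difference highlighted in this abstract, interpolating rather than copying bone weights for the vertices produced by simplification, can be illustrated for a single edge collapse. The sketch below is a minimal illustration under the assumption that weights are stored as per-vertex dictionaries; the helper name and data layout are not taken from the paper.

def collapse_edge_weights(w_u, w_v, alpha=0.5):
    """Bone weights for the vertex produced by collapsing edge (u, v).

    w_u, w_v : dicts {bone_id: weight} for the two endpoint vertices
    alpha    : position of the merged vertex along the edge (0 = u, 1 = v)
    """
    merged = {}
    for bone in set(w_u) | set(w_v):
        merged[bone] = (1.0 - alpha) * w_u.get(bone, 0.0) + alpha * w_v.get(bone, 0.0)
    total = sum(merged.values())
    # Renormalise so the skinning weights still sum to one.
    return {bone: w / total for bone, w in merged.items() if w > 0.0}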
14

Guiard-Marigny, Thierry, and David J. Ostry. "A System for Three-Dimensional Visualization of Human Jaw Motion in Speech." Journal of Speech, Language, and Hearing Research 40, no. 5 (October 1997): 1118–21. http://dx.doi.org/10.1044/jslhr.4005.1118.

Full text of the source
Abstract:
With the development of precise three-dimensional motion measurement systems and powerful computers for three-dimensional graphical visualization, it is possible to record and fully reconstruct human jaw motion. In this paper, we describe a visualization system for displaying three-dimensional jaw movements in speech. The system is designed to take as input jaw motion data obtained from one or multi-dimensional recording systems. In the present application, kinematic records of jaw motion were recorded using an optoelectronic measurement system (Optotrak). The corresponding speech signal was recorded using an analog input channel. The three orientation angles and three positions that describe the motion of the jaw as a rigid skeletal structure were derived from the empirical measurements. These six kinematic variables, which in mechanical terms account fully for jaw motion kinematics, act as inputs that drive a real-time three-dimensional animation of a skeletal jaw and upper skull. The visualization software enables the user to view jaw motion from any orientation and to change the viewpoint during the course of an utterance. Selected portions of an utterance may be replayed and the speed of the visual display may be varied. The user may also display, along with the audio track, individual kinematic degrees of freedom or several degrees of freedom in combination. The system is presently being used as an educational tool and for research into audio-visual speech recognition. Interested researchers may obtain the software and source code free of charge from the authors.
APA, Harvard, Vancouver, ISO, and other styles
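The six kinematic variables described in this abstract (three orientation angles and three positions) define a rigid transform of the jaw. Below is a minimal sketch of applying such a transform to jaw vertices; the X-Y-Z Euler convention and the function names are illustrative assumptions, not details taken from the paper.

import numpy as np

def jaw_transform(rx, ry, rz, tx, ty, tz):
    """Build a 4x4 rigid transform from three Euler angles (radians) and a translation."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = [tx, ty, tz]
    return T

def animate_jaw(jaw_verts, frame_params):
    """Apply one frame of the six kinematic variables to the jaw mesh vertices."""
    T = jaw_transform(*frame_params)          # frame_params = (rx, ry, rz, tx, ty, tz)
    hom = np.hstack([jaw_verts, np.ones((len(jaw_verts), 1))])
    return (hom @ T.T)[:, :3]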
15

Sun, Qiyun, Wanggen Wan, Xiang Feng, and Guoliang Chen. "A Novel Animation Method Based on Mesh Decimation." Journal of Advanced Computational Intelligence and Intelligent Informatics 22, no. 2 (March 20, 2018): 184–93. http://dx.doi.org/10.20965/jaciii.2018.p0184.

Full text of the source
Abstract:
Skeleton-based skin deformation methods are widely used in computer animation, with the help of animation software such as 3D Studio Max and Maya. Most of these animation methods are based on the linear blend skinning algorithm and its improved versions, showing good real-time performance. However, it is difficult for new users to use these complicated software packages to make animations. In this paper, we focus on surface-based mesh deformation methods and use the spokes-and-rims deformation method to animate mesh models. However, this method shows poor real-time performance with high-resolution mesh models. We propose a novel animation method based on mesh decimation, making it possible to animate high-resolution mesh models in real time with the spokes-and-rims method. In this way, users only need to control the movement of handles to acquire an intuitively reasonable animation of an arbitrary mesh model, which makes it easier and more convenient for users to create their own animations. The experimental results show that the proposed animation method is feasible and effective and achieves good real-time performance.
APA, Harvard, Vancouver, ISO, and other styles
16

Zhang, Bin, Shuang Wang, Yuting Liu, and Huayong Yang. "Research on Trajectory Planning and Autodig of Hydraulic Excavator." Mathematical Problems in Engineering 2017 (2017): 1–10. http://dx.doi.org/10.1155/2017/7139858.

Full text of the source
Abstract:
As advances in computer control technology keep emerging, the robotic hydraulic excavator becomes imperative. It can improve excavation accuracy and greatly reduce the operator's labor intensity. A 12-ton backhoe bucket excavator, a type commonly used in engineering work, is utilized in this research. The kinematics model of the operating device (boom, arm, bucket, and swing) of the excavator is established both in Denavit-Hartenberg coordinates, for easy programming, and in geometric space, for avoiding blind spots. The control approach is based on a trajectory-tracing method with displacement and velocity feedback. The trajectory planning and autodig program is written in Visual C++. By setting the bucket teeth's trajectory, the program can automatically plan the velocity and acceleration of each hydraulic cylinder and motor. The results are displayed through a 3D entity simulation environment which can present the real-time movements of the excavator kinematics. Object-Oriented Graphics Rendering Engine and skeletal animation are used to give accurate parametric control and feedback. The simulation results show that a stable linear autodig can be achieved, and the errors between the trajectory planning command and the simulation model are analyzed.
APA, Harvard, Vancouver, ISO, and other styles
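The Denavit-Hartenberg coordinates mentioned in this abstract follow a standard convention for chaining link transforms (boom, arm, bucket). A minimal sketch of that convention in Python is given below; the parameter names and the forward-kinematics helper are generic, not the paper's code.

import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform between consecutive links, classic DH convention."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_params):
    """Chain the link transforms (e.g. boom, arm, bucket) to get the end-effector pose."""
    T = np.eye(4)
    for theta, d, a, alpha in dh_params:
        T = T @ dh_transform(theta, d, a, alpha)
    return T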
17

Hemami, Hooshang. "A General Framework for Rigid Body Dynamics, Stability, and Control." Journal of Dynamic Systems, Measurement, and Control 124, no. 2 (May 10, 2002): 241–51. http://dx.doi.org/10.1115/1.1468227.

Full text of the source
Abstract:
Augmented state spaces for the representation of systems that include rigid bodies, actuators, controllers, and integrate mechanical, electrical, sensory, and computational subsystems, are proposed here. The formulation is based on the Newton-Euler point of view, and has many advantages in stability, control, simulation, and computational considerations. The formulation is developed here for a one- and two-link three-dimensional rigid body system. Three simulations are presented to study stability of the system and to demonstrate feasibility and application of the formulation. The formulation affords an embedding of the system in a larger state space. The rigid body system can be stabilized, in the sense of Lyapunov, in this larger space with very general and minimally restricted feedback structures. The formulation is modular to implementation and is computationally efficient. The method offers alternative states that are easier to control and measure than Euler angles. Thus, the formulation offers advantages from a sensory and instrumentation point of view. The formulation is versatile, and yields conveniently to applications in studies of human neuro-musculo-skeletal systems, robotic systems, marionettes and humanoids for animation and simulation of crash and other injury prone maneuvers and sports. It offers a methodical and systematic procedure for formulation of large systems of interconnected rigid bodies.
APA, Harvard, Vancouver, ISO, and other styles
18

Valencia-Marin, Cristian Kaori, Juan Diego Pulgarin-Giraldo, Luisa Fernanda Velasquez-Martinez, Andres Marino Alvarez-Meza, and German Castellanos-Dominguez. "An Enhanced Joint Hilbert Embedding-Based Metric to Support Mocap Data Classification with Preserved Interpretability." Sensors 21, no. 13 (June 29, 2021): 4443. http://dx.doi.org/10.3390/s21134443.

Full text of the source
Abstract:
Motion capture (Mocap) data are widely used as time series to study human movement. Indeed, animation movies, video games, and biomechanical systems for rehabilitation are significant applications related to Mocap data. However, classifying multi-channel time series from Mocap requires coding the intrinsic dependencies (even nonlinear relationships) between human body joints. Furthermore, the same human action may have variations because the individual alters their movement and therefore the inter/intraclass variability. Here, we introduce an enhanced Hilbert embedding-based approach from a cross-covariance operator, termed EHECCO, to map the input Mocap time series to a tensor space built from both 3D skeletal joints and a principal component analysis-based projection. Obtained results demonstrate how EHECCO represents and discriminates joint probability distributions as kernel-based evaluation of input time series within a tensor reproducing kernel Hilbert space (RKHS). Our approach achieves competitive classification results for style/subject and action recognition tasks on well-known publicly available databases. Moreover, EHECCO favors the interpretation of relevant anthropometric variables correlated with players’ expertise and acted movement on a Tennis-Mocap database (also publicly available with this work). Thereby, our EHECCO-based framework provides a unified representation (through the tensor RKHS) of the Mocap time series to compute linear correlations between a coded metric from joint distributions and player properties, i.e., age, body measurements, and sport movement (action class).
APA, Harvard, Vancouver, ISO, and other styles
19

Razzaq, Abdul, Zhongke Wu, Mingquan Zhou, Sajid Ali, and Khalid Iqbal. "Automatic Conversion of Human Mesh into Skeleton Animation by Using Kinect Motion." International Journal of Computer Theory and Engineering 7, no. 6 (December 2015): 482–88. http://dx.doi.org/10.7763/ijcte.2015.v7.1006.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
20

Musoni, Pietro, Riccardo Marin, Simone Melzi, and Umberto Castellani. "A functional skeleton transfer." Proceedings of the ACM on Computer Graphics and Interactive Techniques 4, no. 3 (September 22, 2021): 1–15. http://dx.doi.org/10.1145/3480140.

Full text of the source
Abstract:
The animation community has spent significant effort trying to ease rigging procedures. This is necessitated because the increasing availability of 3D data makes manual rigging infeasible. However, object animations involve understanding elaborate geometry and dynamics, and such knowledge is hard to infuse even with modern data-driven techniques. Automatic rigging methods do not provide adequate control and cannot generalize in the presence of unseen artifacts. As an alternative, one can design a system for one shape and then transfer it to other objects. In previous work, this has been implemented by solving the dense point-to-point correspondence problem. Such an approach requires a significant amount of supervision, often placing hundreds of landmarks by hand. This paper proposes a functional approach for skeleton transfer that uses limited information and does not require a complete match between the geometries. To do so, we suggest a novel representation for the skeleton properties, namely the functional regressor, which is compact and invariant to different discretizations and poses. We consider our functional regressor a new operator to adopt in intrinsic geometry pipelines for encoding the pose information, paving the way for several new applications. We numerically stress our method on a large set of different shapes and object classes, providing qualitative and numerical evaluations of precision and computational efficiency. Finally, we show a preliminary transfer of the complete rigging scheme, introducing a promising direction for future explorations.
APA, Harvard, Vancouver, ISO, and other styles
21

Zhang, Guijuan, Dengming Zhu, Xianjie Qiu, and Zhaoqi Wang. "Skeleton-based control of fluid animation." Visual Computer 27, no. 3 (September 18, 2010): 199–210. http://dx.doi.org/10.1007/s00371-010-0526-y.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
22

de Aguiar, Edilson, Christian Theobalt, Sebastian Thrun, and Hans-Peter Seidel. "Automatic Conversion of Mesh Animations into Skeleton-based Animations." Computer Graphics Forum 27, no. 2 (April 2008): 389–97. http://dx.doi.org/10.1111/j.1467-8659.2008.01136.x.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
23

Li, Jituo, and Guodong Lu. "Skeleton driven animation based on implicit skinning." Computers & Graphics 35, no. 5 (October 2011): 945–54. http://dx.doi.org/10.1016/j.cag.2011.07.005.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
24

Tan, Shuqing. "Animation Image Art Design Mode Using 3D Modeling Technology." Wireless Communications and Mobile Computing 2022 (March 3, 2022): 1–9. http://dx.doi.org/10.1155/2022/6577461.

Full text of the source
Abstract:
This paper starts with the external visual appearance of animation characters, discusses the design style of three-dimensional animation characters, integrates it with traditional art, and makes a new attempt at combining art and technology so that non-professionals can easily design three-dimensional animation characters. Aiming at the low recognition level of behavior-control data points in traditional 3D virtual animation modeling methods, a method for character modeling and behavior control in 3D virtual animation design is proposed. Based on a physics engine, a dynamic model of the character skeleton is established, and the joint motion trajectory is simulated to complete real-time rendering of the effect. Combined with a case analysis, the approach is discussed from the perspectives of animation character modeling, user experience, and so on. Experimental results show that, compared with traditional methods, the data points collected by the proposed method in the process of character behavior control are denser, the animation effect is more realistic, and the method is highly effective and superior.
APA, Harvard, Vancouver, ISO, and other styles
25

Shu, Heng Sheng, Xun Lin Li, Yao Deng, Tong Jun Qv, Yun Sheng Li, Wu Yuan, and Heng Shao. "Establishment and Application of Human Interactive Three-Dimensional Spine Software." Applied Mechanics and Materials 140 (November 2011): 132–36. http://dx.doi.org/10.4028/www.scientific.net/amm.140.132.

Full text of the source
Abstract:
It is widely acknowledged that visual thinking is significant in medical education. Colorful pictures and animations need to be presented to users (observers) to make the content easier to understand. By appropriately matching pictures and text, readers can associate the text with the image on the screen and reach a better understanding; 3D and dynamic images thus take the place of planar and static ones. The study software was built with image-processing and animation software such as Photoshop and Flash. 3ds Max was used for processing and preparing the simulated stereo skeleton; the Java scripting of Cult 3D gives the models the ability to rotate and zoom in and out; and the multimedia interactive software Neobook ensures a friendly interface and establishes a good interactive operating environment between the users and the 3D models. In basic medical education, which places high demands on visual thinking, multimedia education is already widespread. This software has interactive functions and three-dimensional content, replacing static presentation with dynamic, 3D presentation. It not only keeps the traditional advantages (such as the logical arrangement of content and strict textual description) but also provides learning reference material with an intuitive visual experience, free and independent selection, and flexible handling, which is beneficial for understanding and learning the related content. The software presents rich and colorful pictures, 3D dynamic display content, and animated imaging processes to users, so as to increase understanding and mastery of the content on the human spine. It has multiple functions, supporting both studying and teaching.
APA, Harvard, Vancouver, ISO, and other styles
26

Ismail, Ismahafezi, Mohd Shahrizal Sunar, and Cik Suhaimi Yusof. "Exploring Dynamic Character Motion Techniques for Games Development." International Journal of Virtual Reality 11, no. 1 (January 1, 2012): 51–57. http://dx.doi.org/10.20870/ijvr.2012.11.1.2837.

Full text of the source
Abstract:
3D character development is a very important part of character animation. Currently, animation researchers try to control virtual characters' joints and make character motion more realistic, resembling real human movement. Using motion capture technology, input data for character movement can be manipulated. This paper surveys current research on real-time character animation, focusing on dynamic motion control with physics for game development. From this paper, researchers can gain a better understanding of the main issues and the relevant techniques used by recent work in this area. The review focuses on three main parts of physics-based dynamic motion generation and control: skeleton hierarchy and kinematics, motion-capture data animation, and active dynamic control.
APA, Harvard, Vancouver, ISO, and other styles
27

Kang, Ji-Hoon, and Seon-Jong Kim. "3D Animation Contents Production System Using Skeleton Mapping." Journal of Korean Institute of Information Technology 16, no. 8 (August 31, 2018): 73–81. http://dx.doi.org/10.14801/jkiit.2018.16.8.73.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
28

Qiu, Jie, Hock Soon Seah, Feng Tian, Quan Chen, Zhongke Wu, and Konstantin Melikhov. "Auto Coloring with Enhanced Character Registration." International Journal of Computer Games Technology 2008 (2008): 1–7. http://dx.doi.org/10.1155/2008/135398.

Full text of the source
Abstract:
An enhanced character registration method is proposed in this paper to assist the automatic coloring of 2D animation characters. After skeletons are extracted, the skeleton of the character in a target frame is relocated based on a stable branch in a reference frame. Subsequently, the characters in a sequence are automatically matched and registered. Occlusions are then detected and located in certain components segmented from the character. Two different approaches are applied to color regions in components without and with occlusion, respectively. The approach has been tested for coloring a practical animation sequence and achieved high coloring accuracy, showing its applicability in commercial animation production.
APA, Harvard, Vancouver, ISO, and other styles
29

Koukam, A., and H. Fourar. "Combining objects and planning paradigms for human skeleton animation." Engineering Applications of Artificial Intelligence 11, no. 4 (August 1998): 461–68. http://dx.doi.org/10.1016/s0952-1976(98)00034-7.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
30

Lv, Jianchao, and Shuangjiu Xiao. "Real-Time 3D Motion Recognition of Skeleton Animation Data Stream." International Journal of Machine Learning and Computing 3, no. 5 (October 2013): 430–34. http://dx.doi.org/10.7763/ijmlc.2013.v3.354.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
31

Ijiri, Takashi, Kenshi Takayama, Hideo Yokota, and Takeo Igarashi. "ProcDef: Local-to-global Deformation for Skeleton-free Character Animation." Computer Graphics Forum 28, no. 7 (October 2009): 1821–28. http://dx.doi.org/10.1111/j.1467-8659.2009.01559.x.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
32

Song, Zhijun, Jun Yu, Changle Zhou, Dapeng Tao, and Yi Xie. "Skeleton correspondence construction and its applications in animation style reusing." Neurocomputing 120 (November 2013): 461–68. http://dx.doi.org/10.1016/j.neucom.2013.03.042.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
33

Xiao, Zhidong, Hammadi Nait-Charif, and Jian J. Zhang. "Real time automatic skeleton and motion estimation for character animation." Computer Animation and Virtual Worlds 20, no. 5-6 (September 2009): 523–31. http://dx.doi.org/10.1002/cav.276.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
34

Arachchi, S. P. Kasthuri, Chengkai Xiang, W. G. C. W. Kumara, Shih-Jung Wuis, and Timothy K. Shih. "Motion tracking by sensors for real-time human skeleton animation." International Journal on Advances in ICT for Emerging Regions (ICTer) 9, no. 2 (January 4, 2018): 10. http://dx.doi.org/10.4038/icter.v9i2.7180.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
35

Mourot, Lucas, Ludovic Hoyet, François Le Clerc, François Schnitzler, and Pierre Hellier. "A Survey on Deep Learning for Skeleton‐Based Human Animation." Computer Graphics Forum 41, no. 1 (November 21, 2021): 122–57. http://dx.doi.org/10.1111/cgf.14426.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
36

Kurniawati, Arik, Ari Kusumaningsih, and Yanuar Aliffio. "Clothing size recommender on real-time fitting simulation using skeleton tracking and rigging." Jurnal Teknologi dan Sistem Komputer 8, no. 2 (February 24, 2020): 127–32. http://dx.doi.org/10.14710/jtsiskom.8.2.2020.127-132.

Full text of the source
Abstract:
The virtual fitting room (VFR) is a technology that replaces conventional fitting rooms. VFRs are available not only in shops, malls, and shopping centers but also in online stores, which keeps the technology developing, primarily to support online garment sales. VFR became a trending research interest after Microsoft developed the Kinect tracking system. In this paper, we propose an interactive 3D virtual fitting room that uses Microsoft's Kinect tracking and the rigging technique of the 3D modeling tool Blender to implement the VFR. The VFR manages the virtual fitting process, forming three-dimensional simulations and visualization of garments on virtual counterparts of the real prospective buyer (user). Users can view the clothing animation in various poses that follow the user's body movements. The system can also evaluate the user's fit, guiding them to choose a suitable clothing size using Euclidean distance.
APA, Harvard, Vancouver, ISO, and other styles
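The Euclidean-distance size recommendation mentioned at the end of this abstract can be illustrated with a tiny sketch: body measurements derived from the tracked skeleton are compared against a size chart and the nearest entry wins. The measurement set and chart values below are hypothetical, not taken from the paper.

import numpy as np

# Hypothetical size chart: (shoulder width, chest width, torso length) in cm.
SIZE_CHART = {
    "S": np.array([42.0, 46.0, 64.0]),
    "M": np.array([45.0, 50.0, 67.0]),
    "L": np.array([48.0, 54.0, 70.0]),
}

def recommend_size(user_measurements):
    """Return the size whose chart entry is nearest to the user's measurements."""
    user = np.asarray(user_measurements, dtype=float)
    return min(SIZE_CHART, key=lambda size: np.linalg.norm(user - SIZE_CHART[size]))

# Example: measurements derived from Kinect skeleton joints.
print(recommend_size([44.0, 49.5, 66.0]))   # -> "M"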
37

Zhang, Xiao-Shuang, and Dong-Hyuk Choi. "Color Analysis of the Skeleton images in Animation - Focused on Coco." Cartoon and Animation Studies 59 (June 30, 2020): 155–75. http://dx.doi.org/10.7230/koscas.2020.59.155.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
38

Chen, Cheng-Hao, Ming-Han Tsai, I.-Chen Lin, and Pin-Hua Lu. "Skeleton-driven surface deformation through lattices for real-time character animation." Visual Computer 29, no. 4 (November 1, 2012): 241–51. http://dx.doi.org/10.1007/s00371-012-0759-z.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
39

Nowak, Marta, and Robert Sitnik. "High-Detail Animation of Human Body Shape and Pose From High-Resolution 4D Scans Using Iterative Closest Point and Shape Maps." Applied Sciences 10, no. 21 (October 26, 2020): 7535. http://dx.doi.org/10.3390/app10217535.

Full text of the source
Abstract:
In this article, we present a method of analysis for 3D scanning sequences of human bodies in motion that allows us to obtain a computer animation of a virtual character containing both skeleton motion and high-detail deformations of the body surface geometry, resulting from muscle activity, the dynamics of the motion, and tissue inertia. The developed algorithm operates on a sequence of 3D scans with high spatial and temporal resolution. The presented method can be applied to scans in the form of both triangle meshes and 3D point clouds. One of the contributions of this work is the use of the Iterative Closest Point algorithm with motion constraints for pose tracking, which has been problematic so far. We also introduce shape maps as a tool to represent local body segment deformations. An important feature of our method is the possibility to change the topology and resolution of the output mesh and the topology of the animation skeleton in individual sequences, without requiring time-consuming retraining of the model. Compared to the state-of-the-art Skinned Multi-Person Linear (SMPL) method, the proposed algorithm yields almost twofold better accuracy in shape mapping.
APA, Harvard, Vancouver, ISO, and other styles
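The pose tracking in this abstract builds on the Iterative Closest Point algorithm with motion constraints. Below is a minimal sketch of the unconstrained point-to-point ICP loop only (closest-point matching plus an SVD-based best-fit rigid transform); the motion constraints described in the paper are not modelled, and the function names are illustrative.

import numpy as np
from scipy.spatial import cKDTree

def best_fit_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst points."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # avoid reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    return R, c_dst - R @ c_src

def icp(src, dst, iterations=20):
    """Align the scan `src` to the reference `dst` by iterating closest-point matching."""
    tree = cKDTree(dst)
    current = src.copy()
    for _ in range(iterations):
        _, idx = tree.query(current)               # closest-point correspondences
        R, t = best_fit_transform(current, dst[idx])
        current = current @ R.T + t
    return current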
40

Abbott, Emily M., Zoe Merchant, Erica Lee, Sadie M. Abernathy, Charles Hammer, Young-Hui Chang, and Jason T. Bariteau. "Evaluation of True Ankle Motion Following Total Ankle Replacement Utilizing XROMM technology." Foot & Ankle Orthopaedics 5, no. 4 (October 1, 2020): 2473011420S0001. http://dx.doi.org/10.1177/2473011420s00017.

Full text of the source
Abstract:
Category: Ankle Arthritis. Introduction/Purpose: Total ankle replacement (TAR) is a common tool used by the foot and ankle specialist to treat end-stage ankle arthritis. Current data about ankle motion following TAR are derived from gait analysis utilizing external markers. Utilizing X-ray Reconstruction of Moving Morphology (XROMM), which combines 3-D mapping technology with biplanar fluoroscopy in vivo to visualize true skeletal motion, we can evaluate the true motion of TAR implants. Current TAR systems are either mobile-bearing or fixed-bearing. We hypothesized that subjects implanted with a fixed-bearing prosthesis would exhibit less tibiotalar rotation and translation than subjects implanted with a mobile-bearing prosthesis. Methods: Six subjects with total ankle replacement at least one year post-implantation gave informed consent before participating (IRB #H16496). Three subjects with a mobile-bearing prosthesis with an average age of 63.3 ± 11.1 yrs were compared to three matched subjects with a fixed-bearing prosthesis with an average age of 64.7 ± 1.5 yrs. Utilizing 3D Slicer software, lower-body CT scans for each subject were evaluated to create 3D models of the foot and ankle bones and implant components. All subjects walked for several trials at a self-selected pace along a walkway while their foot and ankle motions were captured by a high-speed biplanar fluoroscopic x-ray motion analysis (XMA) system. The 3D models were combined with the x-ray images within a 3D animation platform and rotoscoped to resolve accurate kinematic motions at the tibiotalar joint during the stance phase of gait. We examined for differences between the two groups using a two-sample t-test (p<0.05). Results: Subjects with a mobile-bearing prosthesis demonstrated mean ROMs of 7.4 ± 1.1°, 5.3 ± 2.3° and 7.1 ± 4.3° for dorsiflexion/plantarflexion, inversion/eversion, and internal/external rotation, respectively. Subjects with a fixed-bearing ankle prosthesis did not exhibit significantly different mean ROMs for dorsiflexion/plantarflexion (9.1 ± 4.0°, p=0.35), inversion/eversion (4.4 ± 2.1°, p=0.42), and internal/external rotation (9.0 ± 3.4°, p=0.35), respectively. Subjects with a fixed-bearing prosthesis displayed significantly more translation along the anteroposterior (3.6 ± 1.2 mm, p<0.01) and mediolateral (2.2 ± 0.7 mm, p<0.01) axes compared to the mobile-bearing prosthesis (1.8 ± 1.2 mm and 1.3 ± 0.8 mm, respectively). Conclusion: Our preliminary results indicate that mobile- and fixed-bearing prostheses provide similar angular motion at the tibiotalar joint; however, the fixed-bearing prosthesis exhibits greater translational motion during walking. Further, there is the same amount of internal and external translation with both component designs. The implications of this work for the success or failure of current implant designs are beyond the scope of this study, but this work will provide the basis for future studies to help determine optimal future total ankle replacement designs.
APA, Harvard, Vancouver, ISO, and other styles
41

Толмачева, Ю. П., and Yu P. Tolmacheva. "3D Modeling and Animation of Visceral Skeleton Fish: Testing Four-Bar Mechanisms." Mathematical Biology and Bioinformatics 8, no. 2 (October 31, 2013): 513–19. http://dx.doi.org/10.17537/2013.8.513.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
42

Kang, Ji-Hoon, Bum-Joo Shin, Soon-Bok Kwon, and Seon-Jong Kim. "A Skeleton 3D Animation Modeling of Bird Flapping by Using Motion Capture." Journal of Korean Institute of Information Technology 15, no. 1 (January 31, 2017): 151. http://dx.doi.org/10.14801/jkiit.2017.15.1.151.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
43

López-Colino, Fernando, José Colás, and Javier Garrido. "Full Skeleton-based Virtual Human Animation: An Improved Design for SL Avatars." IETE Technical Review 34, no. 1 (April 26, 2016): 11–21. http://dx.doi.org/10.1080/02564602.2016.1141075.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
44

Wu, Fu-Che, Wan-Chun Ma, Rung-Huei Liang, Bing-Yu Chen, and Ming Ouhyoung. "Domain connected graph: the skeleton of a closed 3D shape for animation." Visual Computer 22, no. 2 (November 24, 2005): 117–35. http://dx.doi.org/10.1007/s00371-005-0357-4.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
45

Seidl, Andreas. "The Ergonomic Tool Anthropos in Virtual Reality - Requirements, Methods and Realisation." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 44, no. 38 (July 2000): 818–21. http://dx.doi.org/10.1177/154193120004403837.

Full text of the source
Abstract:
The integration of ergonomic simulation in Digital Mockup and Virtual Reality requires an adapted digital man model design. Virtual ANTHROPOS was developed for the needs of this technology. Besides a realistic visualization of the human being (skeleton, different body types, consideration of clothing), complex animation algorithms and automatic behavior simulation are necessary. Increased ergonomic functionality allows the fast analysis of workplaces and products. In cooperation with universities and companies, Virtual ANTHROPOS has been integrated into different Virtual Reality platforms and is used to analyze, design, and sell cars and airplanes.
APA, Harvard, Vancouver, ISO, and other styles
46

Gunanto, Samuel Gandang, Matahari Bhakti Nendya, Mochamad Hariadi, and Eko Mulyanto Yuniarno. "Deformasi Wajah Karakter Kartun Berbasis Klaster Titik Fitur Gerak." Journal of Urban Society's Arts 2, no. 1 (April 1, 2015): 55–61. http://dx.doi.org/10.24821/jousa.v2i1.1269.

Full text of the source
Abstract:
Cartoon Character Face Deformation Based on Motion Feature-Point Cluster. The traditional approach to facial-expression animation depends heavily on the animator to create the key poses and the continuity of facial-expression motion. A frequently encountered problem is that reusing the same skeleton and facial movements for different models takes a long time because of the complexity of human facial expressions. Approaches that simulate facial skin and muscles still require the animator's intervention to bind the facial skin to the bones/skull and to configure the connections for facial muscle movement. As a result, facial animation produced for one face cannot be reused directly on another face model because of this specialization. Therefore, observing the deformation of facial expressions with weighted areas on a 3D face model, using a motion feature-point cluster approach, plays an important role in identifying how different facial shapes are adjusted and how movements vary on cartoon character faces.
APA, Harvard, Vancouver, ISO, and other styles
47

SOBOTA, Branislav, Štefan KOREČKO, Sára JAVORKOVÁ, and Marián HUDÁK. "THERAPY OF UPPER LIMBS BY MEANS OF VIRTUAL REALITY TECHNOLOGIES." Acta Electrotechnica et Informatica 21, no. 3 (December 20, 2021): 30–37. http://dx.doi.org/10.15546/aeei-2021-0017.

Full text of the source
Abstract:
This paper deals with an approach to upper limb therapy that uses virtual reality technologies. Previous methods and subsequent improvements of these procedures by means of a skeletal model of the upper limb in a virtual environment are presented. The main focus of the paper is on describing the calculations related to the bone rotation system within the skeletal model. The therapist can add more virtual upper limb objects or more virtual training objects to the virtual environment and thus expand or change the scene or the therapy complexity. The functions used in the limb movement calculations are useful for creating additional animations with various objects. With this system, the patient can be stimulated, under the supervision of a therapist, to practice certain rehabilitation procedures. Thanks to the use of collaborative web-based virtual reality, the therapy can also be applied remotely. The way in which the underlying idea of the rehabilitation process is implemented is also described. The conclusion contains notes on system testing and evaluation, including a description of the therapist interface.
APA, Harvard, Vancouver, ISO, and other styles
48

SUZUKI, Seiji, Toshio TSUTA, Ryou BOUGAKIUCHI, and Takeshi IWAMOTO. "Computer System for Analysis Muscle, Activation Characteristics During LBP Motion Using Skeleton Animation Approach." Proceedings of the JSME annual meeting 2003.5 (2003): 61–62. http://dx.doi.org/10.1299/jsmemecjo.2003.5.0_61.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
49

Song, Jin Bao, Long Ye, Qin Zhang, and Jian Ping Chai. "Motion Redirection Based on Markov Chain Monte Carlo Particle Filter." Applied Mechanics and Materials 668-669 (October 2014): 1086–89. http://dx.doi.org/10.4028/www.scientific.net/amm.668-669.1086.

Full text of the source
Abstract:
Addressing the lack of an observation model and the difficulty of high-dimensional sampling in video tooning, this paper proposes a video motion redirection method based on key-frame matching and dual-directional Markov chain Monte Carlo sampling. First, after extracting the key frames of a given video, the space-time parameters of the video are initialized by affine transformation and linear superposition to form the observation model. Second, in each space-time window, based on the bi-directional Markov property of each frame, a dual-directional Markov chain Monte Carlo sampling particle filter structure is proposed that takes full advantage of the relationship between the parameters of the preceding and following frames to estimate the motion redirection parameters. For the high-dimensional sampling problem, the parameters are classified according to their directional correlation into skeleton, morphological, and physical parameters, and a hierarchical genetic strategy is proposed to optimize the output parameters and improve the efficiency of the algorithm. This research yields an efficient and expressive video motion redirection method for animation and plays an important role in the development of video animation.
APA, Harvard, Vancouver, ISO, and other styles
50

Lu, Shenglian, Chunjiang Zhao, and Xinyu Guo. "Venation Skeleton-Based Modeling Plant Leaf Wilting." International Journal of Computer Games Technology 2009 (2009): 1–8. http://dx.doi.org/10.1155/2009/890917.

Full text of the source
Abstract:
A venation-skeleton-driven method for modeling and animating plant leaf wilting is presented. The proposed method includes five principal processes. Firstly, a three-dimensional leaf skeleton is constructed from a leaf image, and the leaf skeleton is further used to generate a detailed mesh for the leaf surface. Then a venation skeleton is generated interactively from the leaf skeleton; each vein in the venation skeleton consists of a string of segmented vertices. Thirdly, each vertex in the leaf mesh is bound to the nearest vertex in the venation skeleton. We then deform the venation skeleton by rotating each of its vertices around a fixed vector. Finally, the leaf mesh is mapped to the deformed venation skeleton, so that the deformation of the mesh follows the deformation of the venation skeleton. The proposed techniques have been applied to simulate plant leaf surface deformation resulting from the biological responses of plant wilting.
APA, Harvard, Vancouver, ISO, and other styles
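The deformation step in this abstract rotates venation-skeleton vertices around a fixed vector, which is exactly what Rodrigues' rotation formula computes. A minimal sketch follows; the function name and the pivot argument are illustrative assumptions rather than the paper's implementation.

import numpy as np

def rotate_about_axis(points, axis, angle, pivot):
    """Rotate points (N, 3) around the line through `pivot` with direction `axis`."""
    k = np.asarray(axis, dtype=float)
    k /= np.linalg.norm(k)
    p = np.asarray(points, dtype=float) - pivot
    # Rodrigues' rotation formula applied to every point at once.
    rotated = (p * np.cos(angle)
               + np.cross(k, p) * np.sin(angle)
               + np.outer(p @ k, k) * (1.0 - np.cos(angle)))
    return rotated + pivot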