Academic literature on the topic 'Data-driven animations'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Data-driven animations.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Data-driven animations"

1

Ge, T., Y. Zhao, B. Lee, D. Ren, B. Chen, and Y. Wang. "Canis: A High‐Level Language for Data‐Driven Chart Animations." Computer Graphics Forum 39, no. 3 (June 2020): 607–17. http://dx.doi.org/10.1111/cgf.14005.

2

Klotsman, Marina, and Ayellet Tal. "Animation of Flocks Flying in Line Formations." Artificial Life 18, no. 1 (December 2011): 91–105. http://dx.doi.org/10.1162/artl_a_00050.

Abstract:
We provide a biologically motivated technique for modeling and animating bird flocks in flight, which produces plausible and realistic-looking flock animations. While most previous approaches have focused on animating cluster formations, this article introduces a technique for animating flocks that fly in certain patterns, so-called line formations. We distinguish between the behavior of such flocks during initiation and their behavior during steady flight. Our simulation of the initiation stage is rule-based and incorporates an artificial bird model. Our simulation of the steady-flight stage combines a data-driven approach and an energy-savings model.
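As a loose illustration of the kind of rule-based initiation stage the abstract mentions, here is a minimal boids-style flocking sketch in Python. All rule weights, radii, and data structures are our own assumptions for illustration, not Klotsman and Tal's model:

```python
import numpy as np

# Minimal boids-style rules (cohesion, alignment, separation); a hypothetical
# stand-in for a rule-based flocking initiation stage.
N = 30
rng = np.random.default_rng(0)
pos = rng.uniform(-10.0, 10.0, (N, 2))   # bird positions
vel = rng.uniform(-1.0, 1.0, (N, 2))     # bird velocities

def step(pos, vel, dt=0.1, radius=5.0):
    new_vel = vel.copy()
    for i in range(N):
        dist = np.linalg.norm(pos - pos[i], axis=1)
        nbr = (dist < radius) & (dist > 0.0)              # neighbours within radius
        if nbr.any():
            cohesion   = pos[nbr].mean(axis=0) - pos[i]   # steer toward local centre
            alignment  = vel[nbr].mean(axis=0) - vel[i]   # match neighbour heading
            separation = (pos[i] - pos[nbr]).sum(axis=0)  # back away from crowding
            new_vel[i] += dt * (0.05 * cohesion + 0.10 * alignment + 0.02 * separation)
    return pos + dt * new_vel, new_vel

for frame in range(200):                 # a renderer would draw each frame here
    pos, vel = step(pos, vel)
```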
3

Wickramasinghe, M. M. T., and M. H. M. Wickramasinghe. "Impact of Using 2D Animation as a Pedagogical Tool." Psychology and Education Journal 58, no. 1 (January 1, 2021): 3435–39. http://dx.doi.org/10.17762/pae.v58i1.1283.

Abstract:
The 21st-century, knowledge-economy-driven modern curriculum requires students to perceive complex dimensions of knowledge in order to be intellectually competent [1]. The authors have observed that animation is an excellent way of presenting academic content to students in a less complicated form, as concepts can be presented in a lively way that engages students visually. It has also been found that the platform and the learning atmosphere have an impact on data mining [2]. This study was conducted with the main objective of assessing the impact of using 2D animation as an effective teaching tool and evaluating the most effective learning atmosphere for undergraduate studies. The authors incorporated a qualitative approach to systematically investigate, in depth, the effective use of 2D animation as a teaching tool. They selected 180 business management undergraduate students as the sample of this study. The sample was divided into two groups of 90 students each. One group was taught using 2D animated videos with animated characters relevant to the course module, and the other group was taught using only presentation slides created in Microsoft PowerPoint with text and images. Through thematic analysis and participant observation, it was found that using animated characters as a teaching tool has a direct effect and that 2D animations add more value to the role of a lecturer when delivering through online platforms. The study's findings contribute towards emphasising how effective and innovative teaching techniques can be developed using 2D animations in a classroom environment. Positive enhancement of the next generation of leaders' knowledge and attitudes will thereby increase the human intelligence assets of a knowledge-driven economy.
4

Schnitzer, Julia. "Generative Design For Creators – The Impact Of Data Driven Visualization And Processing In The Field Of Creative Business." Electronic Imaging 2021, no. 3 (June 18, 2021): 22–1. http://dx.doi.org/10.2352/issn.2470-1173.2021.3.mobmu-022.

Abstract:
To what extent can algorithms take over your creative work? Generative design is currently changing our conventional understanding of design in its basic principles. For decades, design was a handmade affair and postproduction a job for highly specialized professionals. Generative design has now become a popular instrument for creating artwork, models and animations with programmed algorithms. By using accessible languages such as JavaScript's p5.js and the Java-based Processing, artists and makers can create everything from interactive typography and textiles to 3D-printed products and complex infographics. Computers are able not only to provide images, but also to generate variations and templates of professional quality. Pictures are pre-optimized, processed and issued by algorithms. The profession of the designer will increasingly become that of a director or conductor at the human-computer interface. What effects does generative design have on the future creative field of designers? To answer this complex question, we analyze several example projects from a range of international designers, spanning fine art as well as commercial work. In an exercise, I will guide you step by step through a tutorial for creating your own visual experiments that explore possibilities in color, form and images.
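The abstract's examples use p5.js and Processing; as a hedged illustration of the "variations and templates" idea in Python instead, the sketch below emits a different composition of the same design for every random seed. File names and parameters are invented for the example:

```python
import random

def variation(seed, n=40, size=400):
    """Emit one SVG composition; a new seed yields a new variation of the design."""
    rng = random.Random(seed)
    shapes = []
    for _ in range(n):
        x, y = rng.uniform(0, size), rng.uniform(0, size)
        r = rng.uniform(4, 40)
        hue = rng.randint(0, 360)
        shapes.append(f'<circle cx="{x:.1f}" cy="{y:.1f}" r="{r:.1f}" '
                      f'fill="hsl({hue},70%,60%)" fill-opacity="0.6"/>')
    return (f'<svg xmlns="http://www.w3.org/2000/svg" width="{size}" height="{size}">'
            + "".join(shapes) + "</svg>")

# The designer acts as "director": generate candidate variations, then curate.
for seed in (1, 2, 3):
    with open(f"variation_{seed}.svg", "w") as f:
        f.write(variation(seed))
```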
5

Garcia Fernandez, J., K. Tammi, and A. Joutsiniemi. "Extending the Life of Virtual Heritage: Reuse of TLS Point Clouds in Synthetic Stereoscopic Spherical Images." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W3 (February 23, 2017): 317–23. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w3-317-2017.

Abstract:
Recent advances in Terrestrial Laser Scanning (TLS), in terms of cost and flexibility, have consolidated this technology as an essential tool for the documentation and digitalization of Cultural Heritage. However, once the TLS data is used, it typically remains stored and goes to waste. How can highly accurate and dense point clouds (of the built heritage) be processed for reuse, especially to engage a broader audience? This paper aims to answer this question through a channel that minimizes the need for expert knowledge while enhancing interactivity with the as-built digital data: virtual heritage dissemination through the production of VR content. Driven by the EU-funded ProDigiOUs project's guidelines on data dissemination, this paper advances a production path to transform the point cloud into virtual stereoscopic spherical images, taking into account the different visual features that produce depth perception, and especially those prompting visual fatigue while experiencing the VR content. Finally, we present the results of the Hiedanranta scans transformed into stereoscopic spherical animations.
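As a rough sketch of the core projection step behind such stereoscopic spherical images, the following Python fragment rasterises a point cloud into an equirectangular depth image from one eye position; the stereo baseline and all names are our assumptions, not the paper's pipeline:

```python
import numpy as np

def spherical_image(points, eye, width=2048, height=1024):
    """Rasterise an (N, 3) point cloud into an equirectangular depth image."""
    p = points - eye
    r = np.linalg.norm(p, axis=1)
    lon = np.arctan2(p[:, 1], p[:, 0])                 # longitude in [-pi, pi]
    lat = np.arcsin(np.clip(p[:, 2] / r, -1.0, 1.0))   # latitude in [-pi/2, pi/2]
    u = ((lon + np.pi) / (2.0 * np.pi) * (width - 1)).astype(int)
    v = ((np.pi / 2.0 - lat) / np.pi * (height - 1)).astype(int)
    img = np.full((height, width), np.inf)
    np.minimum.at(img, (v, u), r)                      # keep the nearest point per pixel
    return img

points = np.random.rand(100_000, 3) * 20.0 - 10.0      # stand-in for TLS data
left  = spherical_image(points, eye=np.array([-0.032, 0.0, 0.0]))   # ~64 mm stereo
right = spherical_image(points, eye=np.array([+0.032, 0.0, 0.0]))   # baseline
```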
6

de Vries, Gwyneth, Kevin Roy, and Victoria Chester. "Using Three-Dimensional Gait Data for Foot/Ankle Orthopaedic Surgery." Open Orthopaedics Journal 3, no. 1 (November 3, 2009): 89–95. http://dx.doi.org/10.2174/1874325000903010089.

Abstract:
We present the case of a forty-year-old male who sustained a torn carotid artery during strenuous physical activity. This was followed by a right hemispheric stroke due to a clot associated with the carotid. Upon recovery, the patient's gait was characterized as hemiparetic with a stiff-knee pattern, a fixed flexion deformity of the toe flexors, and a hindfoot varus. Based on clinical exams and radiographs, the surgical treatment plan was established and consisted of correction of the forefoot deformities, possible hamstring lengthening, and tendon transfer of the posterior tibial tendon to the dorsolateral foot. To aid in surgical planning, a three-dimensional gait analysis was conducted using a state-of-the-art motion capture system. Data from this analysis provided insight into the pathomechanics of the patient's gait pattern. A forefoot-driven hindfoot varus was evident from the presurgical data, and the tendon transfer procedure was deemed unnecessary. A computer was used in the OR to provide surgeons with animations of the patient's gait and graphical results as needed. A second gait analysis was conducted 6 weeks post surgery, shortly after cast removal. Post-surgical gait data showed improved foot segment orientation and position. Motion capture data provides clinicians with detailed information on the multisegment kinematics of foot motion during gait, before and during surgery. Further, treatment effectiveness can be evaluated by repeating gait analyses after recovery.
7

Gholba, N. D., A. Babu, S. Shanmugapriya, A. Singh, A. Srivastava, and S. Saran. "Application of Various Open Source Visualization Tools for Effective Mining of Information from Geospatial Petroleum Data." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-5 (November 19, 2018): 167–74. http://dx.doi.org/10.5194/isprs-archives-xlii-5-167-2018.

Abstract:
This study emphasizes the use of various tools for visualizing geospatial data to facilitate information mining of the global petroleum reserves. Open-source data on global oil trade from 1996 to 2016, published by British Petroleum, was used. It was analysed against the shapefile of the countries of the world in open-source software such as StatPlanet, R and QGIS. Visualizations were created using different maps with combinations of graphics and plots, such as choropleth, dot density, graduated symbols, 3D maps, Sankey diagrams, hybrid maps and animations, to depict the global petroleum trade. Certain inferences could be made quickly: Venezuela and Iran are rapidly rising as producers of crude oil, and the stronghold is shifting from the Gulf countries, since China, Sudan and Kazakhstan have shown a high rate of positive growth in crude reserves. It was seen that global oil consumption is driven not only by population but also by lifestyle, since Saudi Arabia has a very high rate of per-capita petroleum consumption despite its very low population, while India and China have very limited oil reserves yet have to cater to large populations. These visualizations help to understand the likely sources of crude and refined petroleum products and to judge the flux in the global oil reserves. The results show that geodata visualization increases understanding, breaks down the complexity of data and enables the viewer to quickly digest high volumes of data through visual association.
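For a flavour of one visualisation type from the study, here is a hedged Python sketch of a choropleth. The paper itself used StatPlanet, R and QGIS, and the file and column names below (countries_of_the_world.shp, bp_oil_production_2016.csv, iso_a3, barrels_per_day) are invented placeholders:

```python
import geopandas as gpd
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical inputs: a world shapefile and a table derived from the BP data.
world = gpd.read_file("countries_of_the_world.shp")
oil = pd.read_csv("bp_oil_production_2016.csv")        # columns: iso_a3, barrels_per_day

merged = world.merge(oil, on="iso_a3")                 # join geometry with the measure
ax = merged.plot(column="barrels_per_day", cmap="OrRd", legend=True, figsize=(12, 6))
ax.set_title("Crude oil production, 2016 (choropleth)")
ax.set_axis_off()
plt.savefig("oil_choropleth.png", dpi=150)
```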
8

Zhang, Long, Yubo Zhang, Zhongding Jiang, Luying Li, Wei Chen, and Qunsheng Peng. "Precomputing data-driven tree animation." Computer Animation and Virtual Worlds 18, no. 4-5 (2007): 371–82. http://dx.doi.org/10.1002/cav.205.

9

Farahani, Navid, Zheng Liu, Dylan Jutt, and Jeffrey L. Fine. "Pathologists' Computer-Assisted Diagnosis: A Mock-up of a Prototype Information System to Facilitate Automation of Pathology Sign-out." Archives of Pathology & Laboratory Medicine 141, no. 10 (July 7, 2017): 1413–20. http://dx.doi.org/10.5858/arpa.2016-0214-oa.

Abstract:
Context.— Pathologists' computer-assisted diagnosis (pCAD) is a proposed framework for alleviating pathologists' workload challenges through the automation of routine sign-out work. Currently, hypothetical pCAD is based on a triad of advanced image analysis, deep integration with heterogeneous information systems, and a concrete understanding of traditional pathology workflow. Prototyping is an established method for designing complex new computer systems such as pCAD. Objective.— To describe, in detail, a prototype of pCAD for the sign-out of a breast cancer specimen. Design.— Deidentified glass slides and data from breast cancer specimens were used. Slides were digitized into whole-slide images with an Aperio ScanScope XT, and screen captures were created by using vendor-provided software. The advanced workflow prototype was constructed by using PowerPoint software. Results.— We modeled an interactive, computer-assisted workflow: pCAD previews whole-slide images in the context of integrated, disparate data and predefined diagnostic tasks and subtasks. Relevant regions of interest (ROIs) would be automatically identified and triaged by the computer. A pathologist's sign-out work would consist of an interactive review of important ROIs, driven by required diagnostic tasks. The interactive session would generate a pathology report automatically. Conclusions.— Using animations and real ROIs, the pCAD prototype demonstrates the hypothetical sign-out in a stepwise fashion, illustrating various interactions and explaining how steps can be automated. The file is publicly available and should be widely compatible. This mock-up is intended to spur discussion and to help usher in the next era of digitization for pathologists by providing desperately needed and long-awaited automation.
10

주은정, Jehee Lee, and Sohmin Ahn. "Data-driven Facial Animation Using Sketch Interface." Journal of the Korea Computer Graphics Society 13, no. 3 (September 2007): 11–18. http://dx.doi.org/10.15701/kcgs.2007.13.3.11.


Dissertations / Theses on the topic "Data-driven animations"

1

Rowe, Daniel Taylor. "Using Graphics, Animations, and Data-Driven Animations to Teach the Principles of Simple Linear Regression to Graduate Students." BYU ScholarsArchive, 2004. https://scholarsarchive.byu.edu/etd/6.

Abstract:
This report describes the design, development, and evaluation of the Simple Linear Regression Lesson (SLRL), a web-based lesson that uses visual strategies to teach graduate students the principles of simple linear regression. The report includes a literature review on the use of graphics, animations, and data-driven animations in statistics pedagogy and in instruction in general. The literature review also summarizes the pertinent instructional design and development theories that informed the creation of the lesson. Following the literature review is a description of the SLRL and the methodologies used to develop it. The evaluation section of the report details the methods used during the formative and summative evaluation stages, including results from a small-group implementation of the SLRL. The report concludes with a review of the strengths and weaknesses of both the product and the process.
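As an illustration of what a data-driven animation of simple linear regression can look like, here is a small Python/matplotlib sketch that refits and redraws the least-squares line as points arrive one by one. It is our own toy example, not the SLRL's implementation:

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 30)
y = 2.0 * x + 1.0 + rng.normal(0.0, 2.0, 30)   # noisy samples of y = 2x + 1

fig, ax = plt.subplots()
scatter = ax.scatter([], [])
line, = ax.plot([], [], "r-")
ax.set(xlim=(0, 10), ylim=(-5, 25), xlabel="x", ylabel="y")

def update(i):
    xi, yi = x[: i + 2], y[: i + 2]            # at least two points per fit
    slope, intercept = np.polyfit(xi, yi, 1)   # least-squares line so far
    scatter.set_offsets(np.column_stack([xi, yi]))
    line.set_data([0, 10], [intercept, intercept + 10 * slope])
    return scatter, line

anim = FuncAnimation(fig, update, frames=len(x) - 1, interval=200)
plt.show()
```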
2

Mousas, Christos. "Data-driven techniques for animating virtual characters." Thesis, University of Sussex, 2015. http://sro.sussex.ac.uk/id/eprint/52967/.

Abstract:
One of the key goals of current research in data-driven computer animation is the synthesis of new motion sequences from existing motion data. This thesis presents three novel techniques for synthesising the motion of a virtual character from existing motion data and develops a framework of solutions to key character animation problems. The first motion synthesis technique is based on the character's locomotion composition process. This technique examines the ability to synthesise a variety of the character's locomotion behaviours while easily specified constraints (footprints) are placed in three-dimensional space. This is achieved by analysing existing motion data and by assigning the locomotion behaviour transition process to transition graphs that are responsible for providing information about this process. However, virtual characters should also be able to animate according to different style variations. Therefore, a second technique synthesises real-time style variations of a character's motion. A novel method is developed that uses the correlation between two different motion styles and, by assigning the motion synthesis process to a parameterised maximum a posteriori (MAP) framework, retrieves the desired style content of the input motion in real-time, enhancing the realism of the newly synthesised motion sequence. The third technique provides the ability to synthesise the motion of the character's fingers either off-line or in real-time during the performance capture process. The advantage of these techniques is their ability to assign the motion searching process to motion features. The presented technique is able to estimate and synthesise a valid motion of the character's fingers, enhancing the realism of the input motion. To conclude, this thesis demonstrates that these three novel techniques combine into a framework that enables the realistic synthesis of virtual character movements, eliminating post-processing, as well as enabling fast synthesis of the required motion.
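To make the transition-graph idea in the first technique concrete, here is a deliberately simplified Python sketch: behaviours are graph nodes, and synthesis picks, per footprint gap, a clip whose step length best matches the constraint. The clip database, step lengths, and transition table are all invented for illustration:

```python
# clip database: behaviour -> list of (clip_id, step_length_in_metres)
clips = {
    "walk": [("walk_a", 0.7), ("walk_b", 0.8)],
    "run":  [("run_a", 1.6), ("run_b", 1.8)],
}
transitions = {"walk": ["walk", "run"], "run": ["run", "walk"]}  # allowed changes

def synthesise(footprint_gaps, start="walk"):
    """Choose one clip per footprint gap, matching step lengths to constraints."""
    behaviour, sequence = start, []
    for gap in footprint_gaps:
        behaviour = min(transitions[behaviour],
                        key=lambda b: min(abs(step - gap) for _, step in clips[b]))
        clip_id, _ = min(clips[behaviour], key=lambda c: abs(c[1] - gap))
        sequence.append(clip_id)
    return sequence

print(synthesise([0.7, 0.9, 1.5, 1.7, 0.8]))   # e.g. walk, walk, run, run, walk
```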
3

Scheidt, November. "A facial animation driven by X-ray microbeam data." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0021/MQ54745.pdf.

4

Yin, KangKang. "Data-driven kinematic and dynamic models for character animation." Thesis, University of British Columbia, 2007. http://hdl.handle.net/2429/31759.

Abstract:
Human motion plays a key role in the production of films, video games, virtual reality applications, and the control of humanoid robots. Unfortunately, it is hard to generate high quality human motion for character animation either manually or algorithmically. As a result, approaches based on motion capture data have become a central focus of character animation research in recent years. We observe three principal weaknesses in previous work using data-driven approaches for modelling human motion. First, basic balance behaviours and locomotion tasks are currently not well modelled. Second, the ability to produce high quality motion that is responsive to its environment is limited. Third, knowledge about human motor control is not well utilized. This thesis develops several techniques to generalize motion capture character animations to balance and respond. We focus on balance and locomotion tasks, with an emphasis on responding to disturbances, user interaction, and motor control integration. For this purpose, we investigate both kinematic and dynamic models. Kinematic models are intuitive and fast to construct, but have narrow generality, and thus require more data. A novel performance-driven animation interface to a motion database is developed, which allows a user to use foot pressure to control an avatar to balance in place, punch, kick, and step. We also present a virtual avatar that can respond to pushes, with the aid of a motion database of push responses. Consideration is given to dynamics using motion selection and adaption. Dynamic modelling using forward dynamics simulations requires solving difficult problems related to motor control, but permits wider generalization from given motion data. We first present a simple neuromuscular model that decomposes joint torques into feedforward and low-gain feedback components, and can deal with small perturbations that are assumed not to affect balance. To cope with large perturbations we develop explicit balance recovery strategies for a standing character that is pushed in any direction. Lastly, we present a simple continuous balance feedback mechanism that enables the control of a large variety of locomotion gaits for bipeds. Different locomotion tasks, including walking, running, and skipping, are constructed either manually or from motion capture examples. Feedforward torques can be learned from the feedback components, emulating a biological motor learning process that leads to more stable and natural motions with low gains. The results of this thesis demonstrate the potential of a new generation of more sophisticated kinematic and dynamic models of human motion.
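As a toy illustration of the torque decomposition described above (a feedforward term plus low-gain feedback), here is a single-joint Python sketch; the gains, inertia, and reference are our own assumptions and are far simpler than the thesis's controllers:

```python
kp, kd, dt = 20.0, 2.0, 0.01            # deliberately low feedback gains
theta, omega, inertia = 0.3, 0.0, 1.0   # one joint, starting 0.3 rad off target

def reference(t):
    return 0.0                          # desired joint angle over time

def feedforward(t):
    return 0.0                          # stand-in for torques learned from data

for i in range(500):                    # 5 seconds of forward dynamics
    t = i * dt
    tau = feedforward(t) + kp * (reference(t) - theta) + kd * (0.0 - omega)
    omega += dt * tau / inertia         # integrate joint acceleration
    theta += dt * omega                 # integrate joint velocity

print(f"final angle: {theta:.4f} rad")  # settles near the reference
```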
5

Naert, Lucie. "Capture, annotation and synthesis of motions for the data-driven animation of sign language avatars." Thesis, Lorient, 2020. http://www.theses.fr/2020LORIS561.

Abstract:
This thesis deals with the capture, annotation, synthesis and evaluation of arm and hand motions for the animation of avatars communicating in Sign Languages (SL). Currently, the production and dissemination of SL messages often depend on video recordings, which lack depth information and for which editing and analysis are complex issues. Signing avatars constitute a powerful alternative to video. They are generally animated using either procedural or data-driven techniques. Procedural animation often results in robotic and unrealistic motions, but any sign can be precisely produced. With data-driven animation, the avatar's motions are realistic, but the variety of the signs that can be synthesized is limited and/or biased by the initial database. As we considered the acceptance of the avatar to be a prime issue, we selected the data-driven approach but, to address its main limitation, we propose to use annotated motions present in an SL motion capture database to synthesize novel SL signs and utterances absent from this initial database. To achieve this goal, our first contribution is the design, recording and perceptual evaluation of a French Sign Language (LSF) motion capture database composed of signs and utterances performed by deaf LSF teachers. Our second contribution is the development of automatic annotation techniques for different annotation tracks, based on the analysis of the kinematic properties of specific joints and on existing machine learning algorithms. Our last contribution is the implementation of different motion synthesis techniques based on motion retrieval per phonological component and on the modular reconstruction of new LSF content, with the additional use of motion generation techniques such as inverse kinematics, parameterized to comply with the properties of real motions.
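As a minimal sketch of what such kinematic-property-based annotation can look like, the Python fragment below labels frames of a wrist trajectory as moving or still via a speed threshold; the threshold, frame rate, and labels are illustrative assumptions, not the thesis's actual annotation tracks:

```python
import numpy as np

def annotate(wrist_positions, fps=100.0, speed_threshold=0.25):
    """Label each frame of an (N, 3) wrist trajectory as 'stroke' or 'hold'."""
    velocity = np.gradient(wrist_positions, 1.0 / fps, axis=0)   # m/s per frame
    speed = np.linalg.norm(velocity, axis=1)
    return np.where(speed > speed_threshold, "stroke", "hold")

# Toy trajectory: still, a 1 m/s movement between t = 0.8 s and 1.2 s, still again.
t = np.linspace(0.0, 2.0, 200)
x = np.clip(t - 0.8, 0.0, 0.4)
trajectory = np.stack([x, np.zeros_like(x), np.zeros_like(x)], axis=1)
labels = annotate(trajectory)
print(labels[50], labels[100], labels[150])   # hold stroke hold
```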
6

Trutoiu, Laura. "Perceptually Valid Dynamics for Smiles and Blinks." Research Showcase @ CMU, 2014. http://repository.cmu.edu/dissertations/428.

Abstract:
In many applications, such as conversational agents, virtual reality, movies, and games, animated facial expressions of computer-generated (CG) characters are used to communicate, teach, or entertain. With an increased demand for CG characters, it is important to animate accurate, realistic facial expressions because human facial expressions communicate a wealth of information. However, realistically animating faces is challenging and time-consuming for two reasons. First, human observers are adept at detecting anomalies in realistic CG facial animations. Second, traditional animation techniques based on keyframing sometimes approximate the dynamics of facial expressions or require extensive artistic input while high-resolution performance capture techniques are cost prohibitive. In this thesis, we develop a framework to explore representations of two key facial expressions, blinks and smiles, and we show that data-driven models are needed to realistically animate these expressions. Our approach relies on utilizing high-resolution performance capture data to build models that can be used in traditional keyframing systems. First, we record large collections of high-resolution dynamic expressions through video and motion capture technology. Next, we build expression-specific models of the dynamic data properties of blinks and smiles. We explore variants of the model and assess whether viewers perceive the models as more natural than the simplified models present in the literature. In the first part of the thesis, we build a generative model of the characteristic dynamics of blinks: fast closing of the eyelids followed by a slow opening. Blinks have a characteristic profile with relatively little variation across instances or people. Our results demonstrate the need for an accurate model of eye blink dynamics rather than simple approximations, as viewers perceive the difference. In the second part of the thesis, we investigate how spatial and temporal linearities impact smile genuineness and build a model for genuine smiles. Our perceptual results indicate that a smile model needs to preserve temporal information. With this model, we synthesize perceptually genuine smiles that outperform traditional animation methods accompanied by plausible head motions. In the last part of the thesis, we investigate how blinks synchronize with the start and end of spontaneous smiles. Our analysis shows that eye blinks correlate with the end of the smile and occur before the lip corners stop moving downwards. We argue that the timing of blinks relative to smiles is useful in creating compelling facial expressions. Our work is directly applicable to current methods in animation. For example, we illustrate how our models can be used in the popular framework of blendshape animation to increase realism while keeping the system complexity low. Furthermore, our perceptual results can inform the design of realistic animation systems by highlighting common assumptions that over-simplify the dynamics of expressions.
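A minimal sketch of the asymmetry the thesis measures (fast lid closing, distinctly slower opening) might look like the Python fragment below; the exponential profile and timing constants are illustrative assumptions, not the generative model built from the captured data:

```python
import numpy as np

def blink_curve(t, close_time=0.1, open_time=0.25):
    """Eyelid closedness in [0, 1] at time t seconds after blink onset."""
    t = np.asarray(t, dtype=float)
    closing = np.clip(t / close_time, 0.0, 1.0)                        # fast close
    opening = np.exp(-np.clip(t - close_time, 0.0, None) / open_time)  # slower open
    return closing * opening

ts = np.arange(0.0, 1.0, 1.0 / 120.0)   # sample at 120 fps for keyframing
keys = blink_curve(ts)
print(keys.max(), keys[-1])             # ~1.0 near close_time, small residue by t = 1 s
```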
7

Chang, Pai-chun [張百群]. "Data-Driven Water Flow Animation in Oriental Paintings." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/23327560520382105673.

Abstract:
Master's thesis, National Taiwan University of Science and Technology, Department of Computer Science and Information Engineering, academic year 102 (2013–14).
This work designs a data-driven system which takes an oriental painting as its input, analyzes the structure, placement density, and ink density of its strokes, generates a smooth flow field based on the extracted flow pattern, and places and animates strokes chosen from the extracted strokes according to the flow field in a virtual world scene. The main contributions lie in the integration of a data-driven method, which extracts the flow pattern and stroke style from an oriental painting, with physical simulation for the creation of an oriental painting flow animation. The physical simulation uses the Navier-Stokes equations to create a static flow field and wave equations to simulate the disturbance between dynamic objects and the steady water flow. The strokes are constructed and animated using the strokes extracted from the painting according to the flow field.
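As a rough Python sketch of the placement-and-animation step, stroke anchors below are advected through a smooth 2D field; the analytic swirl stands in for the Navier-Stokes-derived flow field, and everything else is an assumption for illustration:

```python
import numpy as np

def flow(p):
    """Analytic swirl about the image centre; a stand-in for the solved field."""
    x, y = p[:, 0] - 0.5, p[:, 1] - 0.5
    return np.stack([-y, x], axis=1)

strokes = np.random.rand(50, 2)                # stroke anchor points in the unit square
frames = []
for _ in range(240):                           # 240 animation frames
    strokes = strokes + 0.01 * flow(strokes)   # simple Euler advection
    frames.append(strokes.copy())              # a renderer would draw the strokes here
```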
8

Abson, Karl, and Ian J. Palmer. "Motion capture: capturing interaction between human and animal." 2015. http://hdl.handle.net/10454/9106.

Abstract:
We introduce a new "marker-based" model for use in capturing equine movement. This model is informed by a sound biomechanical study of the animal and can be deployed in the pursuit of many undertakings. Unlike many other approaches, our method provides a high level of automation and hides the intricate biomechanical knowledge required to produce realistic results. Due to this approach, it is possible to acquire solved data with minimal manual intervention, even in real-time conditions. The approach introduced can be replicated for the production of many other animals. The model is first informed by the veterinary world through studies of the subject's anatomy. Second, further medical studies aimed at understanding and addressing surface processes, such as skin sliding, inform model creation; if not otherwise corrected, these processes may hinder marker-based capture. The resultant model has been tested in feasibility studies for practicality and subject acceptance during production. Data is provided for scrutiny, along with the subject digitally captured through a variety of methods. The digital subject in mesh form, as well as the motion capture model, aids in comparison and shows the level of accuracy achieved. The video reference and digital renders provide an insight into the level of realism achieved.
9

Chaudhry, E., S. J. Bian, Hassan Ugail, X. Jin, L. H. You, and J. J. Zhang. "Dynamic skin deformation using finite difference solutions for character animation." 2014. http://hdl.handle.net/10454/8163.

Abstract:
We present a new skin deformation method to create dynamic skin deformations. The core elements of our approach are a dynamic deformation model, an efficient data-driven finite difference solution, and a curve-based representation of 3D models. We first reconstruct skin deformation models at different poses from photos taken of a male human arm movement, to obtain real deformed skin shapes. Then, we extract curves from these reconstructed skin deformation models. A new dynamic deformation model is proposed to describe the physics of dynamic curve deformations, and its finite difference solution is developed to determine shape changes of the extracted curves. In order to improve the visual realism of skin deformations, we employ data-driven methods and introduce skin shapes at the initial and final poses into our proposed dynamic deformation model. Experimental examples and comparisons made in this paper indicate that our proposed dynamic skin deformation technique can create realistic deformed skin shapes efficiently with a small data size.
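The following Python fragment is a hedged, one-dimensional caricature of this approach: a curve is evolved by an explicit finite difference scheme, with a data term pulling it toward a final-pose shape. All coefficients and the 1D simplification are our assumptions, not the paper's model:

```python
import numpy as np

n, dt, c, damping, pull = 50, 0.01, 0.05, 1.5, 5.0
dx = 1.0 / (n - 1)
curve = np.zeros(n)                                      # rest-pose curve heights
target = 0.2 * np.sin(np.pi * np.linspace(0.0, 1.0, n))  # final-pose curve (from data)
velocity = np.zeros(n)

for _ in range(2000):
    lap = np.zeros(n)
    lap[1:-1] = curve[2:] - 2.0 * curve[1:-1] + curve[:-2]   # central second difference
    accel = c * lap / dx**2 - damping * velocity + pull * (target - curve)
    velocity += dt * accel
    curve += dt * velocity
    curve[0], curve[-1] = target[0], target[-1]              # pinned endpoints

print(np.abs(curve - target).max())   # small residual: the curve has relaxed toward the target
```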

Books on the topic "Data-driven animations"

1

Deng, Zhigang, and Ulrich Neumann, eds. Data-Driven 3D Facial Animation. London: Springer London, 2007. http://dx.doi.org/10.1007/978-1-84628-907-1.

2

Deng, Zhigang, and Ulrich Neumann, eds. Data-Driven 3D Facial Animation. Springer, 2007.


Book chapters on the topic "Data-driven animations"

1

Courty, Nicolas, and Thomas Corpetti. "Data-Driven Animation of Crowds." In Computer Vision/Computer Graphics Collaboration Techniques, 377–88. Berlin, Heidelberg: Springer Berlin Heidelberg, 2007. http://dx.doi.org/10.1007/978-3-540-71457-6_34.

2

Komura, Taku, Ikhsanul Habibie, Jonathan Schwarz, and Daniel Holden. "Data-Driven Character Animation Synthesis." In Handbook of Human Motion, 1–29. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-30808-1_10-1.

3

Jörg, Sophie. "Data-Driven Hand Animation Synthesis." In Handbook of Human Motion, 1–13. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-30808-1_13-1.

4

Komura, Taku, Ikhsanul Habibie, Jonathan Schwarz, and Daniel Holden. "Data-Driven Character Animation Synthesis." In Handbook of Human Motion, 2003–31. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-14418-4_10.

5

Jörg, Sophie. "Data-Driven Hand Animation Synthesis." In Handbook of Human Motion, 2079–91. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-14418-4_13.

6

Liang, Yuan, Song-Hai Zhang, and Ralph Robert Martin. "Automatic Data-Driven Room Design Generation." In Next Generation Computer Animation Techniques, 133–48. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-69487-0_10.

7

Sourina, Olga, Alexei Sourin, and Vladimir Kulish. "EEG Data Driven Animation and Its Application." In Computer Vision/Computer Graphics Collaboration Techniques, 380–88. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-01811-4_34.

8

Gamage, Vihanga, Cathy Ennis, and Robert Ross. "Latent Dynamics for Artefact-Free Character Animation via Data-Driven Reinforcement Learning." In Lecture Notes in Computer Science, 675–87. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-86380-7_55.

9

Vogt, David, Steve Grehl, Erik Berger, Heni Ben Amor, and Bernhard Jung. "A Data-Driven Method for Real-Time Character Animation in Human-Agent Interaction." In Intelligent Virtual Agents, 463–76. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-09767-1_57.


Conference papers on the topic "Data-driven animations"

1

Ge, Tong, Bongshin Lee, and Yunhai Wang. "CAST: Authoring Data-Driven Chart Animations." In CHI '21: CHI Conference on Human Factors in Computing Systems. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3411764.3445452.

2

White, Ryan, Keenan Crane, and D. A. Forsyth. "Data driven cloth animation." In ACM SIGGRAPH 2007 sketches. New York, New York, USA: ACM Press, 2007. http://dx.doi.org/10.1145/1278780.1278825.

3

Lee, Jehee. "Introduction to data-driven animation." In ACM SIGGRAPH ASIA 2010 Courses. New York, New York, USA: ACM Press, 2010. http://dx.doi.org/10.1145/1900520.1900524.

4

Grover, Divyanshu, and Parag Chaudhuri. "Data-driven 2D effects animation." In Proceedings of the Tenth Indian Conference on Computer Vision, Graphics and Image Processing (ICVGIP '16). New York, New York, USA: ACM Press, 2016. http://dx.doi.org/10.1145/3009977.3010000.

5

Yu, Hongchuan, Taku Komura, and Jian J. Zhang. "Data-driven animation technology (D2AT)." In SA '17: SIGGRAPH Asia 2017. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3154457.3154458.

6

Yang, Xin, Wanchao Su, Jian Deng, and Zhigeng Pan. "Real traffic data-driven animation simulation." In VRCAI '15: International Conference on Virtual Reality Continuum and Its Applications in Industry. New York, NY, USA: ACM, 2015. http://dx.doi.org/10.1145/2817675.2817683.

7

Zhang, Xinyi, and Michiel van de Panne. "Data-driven autocompletion for keyframe animation." In MIG '18: Motion, Interaction and Games. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3274247.3274502.

8

Brandt, Sascha, Matthias Fischer, Maria Gerges, Claudius Jähn, and Jan Berssenbrügge. "Automatic Derivation of Geometric Properties of Components From 3D Polygon Models." In ASME 2017 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/detc2017-67528.

Abstract:
To detect errors or find potential for improvement during the CAD-supported development of a complex technical system like a modern industrial machine, the system's virtual prototype can be examined in virtual reality (VR) in the context of virtual design reviews. Besides exploring the static shape of the examined system, observing the machine's mechanics (e.g., motor-driven mechanisms) and transport routes for material transport (e.g., via conveyor belts or chains, or rail-based transport systems) can play an equally important role in such a review. In practice, it is often the case that the relevant information about transport routes or kinematic properties is either not consistently modeled in the CAD data or is lost during conversion processes. To significantly reduce the manual effort and costs of creating animations of the machines' complex behavior with such limited input data for a design review, we present a set of algorithms to automatically determine geometric properties of machine parts based only on their triangulated surfaces. The algorithms detect the course of transport systems, the orientation of objects in 3D space, the rotation axes of cylindrical objects and holes, the number of teeth of gears, and the tooth spacing of toothed racks. We implemented the algorithms in the VR system PADrend and applied them to animate virtual prototypes of real machines.
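As a hedged illustration of one such derived property, the Python sketch below estimates the rotation axis of an elongated, roughly cylindrical part from its mesh vertices using PCA; the paper's own algorithms are more elaborate, and this is not their implementation:

```python
import numpy as np

def cylinder_axis(vertices):
    """Return (centroid, unit axis) for an (N, 3) vertex array of an elongated cylinder."""
    centroid = vertices.mean(axis=0)
    _, _, vt = np.linalg.svd(vertices - centroid, full_matrices=False)
    return centroid, vt[0]      # dominant principal direction = long axis

# Toy cylindrical point set along z (radius 1, half-length 5).
theta = np.random.uniform(0.0, 2.0 * np.pi, 2000)
z = np.random.uniform(-5.0, 5.0, 2000)
verts = np.stack([np.cos(theta), np.sin(theta), z], axis=1)
centroid, axis = cylinder_axis(verts)
print(axis)                     # approximately (0, 0, +/-1)
```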
9

Li, Xi, Jun Yu, Fei Gao, and Jian Zhang. "Data-driven facial animation via hypergraph learning." In 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC). IEEE, 2016. http://dx.doi.org/10.1109/smc.2016.7844280.

10

Hamer, Henning, Juergen Gall, Raquel Urtasun, and Luc Van Gool. "Data-driven animation of hand-object interactions." In Gesture Recognition (FG 2011). IEEE, 2011. http://dx.doi.org/10.1109/fg.2011.5771426.
