
Journal articles on the topic 'Motion capture (MoCap)'


Consult the top 50 journal articles for your research on the topic 'Motion capture (MoCap).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Yahya, Muhammad, Jawad Ali Shah, Kushsairy Abdul Kadir, Zulkhairi M. Yusof, Sheroz Khan, and Arif Warsi. "Motion capture sensing techniques used in human upper limb motion: a review." Sensor Review 39, no. 4 (July 15, 2019): 504–11. http://dx.doi.org/10.1108/sr-10-2018-0270.

Abstract:
Purpose: Motion capture (MoCap) systems have been used to measure human body segments in several applications, including film special effects, health care, outer-space and under-water navigation systems, sea-water exploration, human-machine interaction, and learning software to help teachers of sign language. The purpose of this paper is to help researchers select a specific MoCap system for various applications and develop new algorithms related to upper limb motion. Design/methodology/approach: This paper provides an overview of different sensors used in MoCap and techniques used for estimating human upper limb motion. Findings: Existing MoCap systems suffer from several issues depending on the type used: drift and placement of inertial sensors, occlusion and jitter in Kinect, noise in electromyography signals, and the requirement of a well-structured, calibrated environment together with the time-consuming task of placing markers in multiple-camera systems. Originality/value: This paper outlines the issues and challenges in MoCap systems for measuring human upper limb motion and provides an overview of techniques to overcome them.
2

Estévez-García, Román, Jorge Martín-Gutiérrez, Saúl Menéndez Mendoza, Jonathan Rodríguez Marante, Pablo Chinea-Martín, Ovidia Soto-Martín, and Moisés Lodeiro-Santiago. "Open Data Motion Capture: MOCAP-ULL Database." Procedia Computer Science 75 (2015): 316–26. http://dx.doi.org/10.1016/j.procs.2015.12.253.

3

Menolotto, Matteo, Dimitrios-Sokratis Komaris, Salvatore Tedesco, Brendan O’Flynn, and Michael Walsh. "Motion Capture Technology in Industrial Applications: A Systematic Review." Sensors 20, no. 19 (October 5, 2020): 5687. http://dx.doi.org/10.3390/s20195687.

Abstract:
The rapid technological advancements of Industry 4.0 have opened up new vectors for novel industrial processes that require advanced sensing solutions for their realization. Motion capture (MoCap) sensors, such as visual cameras and inertial measurement units (IMUs), are frequently adopted in industrial settings to support solutions in robotics, additive manufacturing, teleworking and human safety. This review synthesizes and evaluates studies investigating the use of MoCap technologies in industry-related research. A search was performed in Embase, Scopus, Web of Science and Google Scholar. Only studies in English, from 2015 onwards, on primary and secondary industrial applications were considered. The quality of the articles was appraised with the AXIS tool. Studies were categorized based on the type of sensors used, the beneficiary industry sector, and the type of application. Study characteristics, key methods and findings were also summarized. In total, 1682 records were identified, and 59 were included in this review. Twenty-one and 38 studies were assessed as being prone to medium and low risks of bias, respectively. Camera-based sensors and IMUs were used in 40% and 70% of the studies, respectively. Construction (30.5%), robotics (15.3%) and automotive (10.2%) were the most researched industry sectors, whilst health and safety (64.4%) and the improvement of industrial processes or products (17%) were the most targeted applications. Inertial sensors were the first choice for industrial MoCap applications. Camera-based MoCap systems performed better in robotic applications, but camera obstructions caused by workers and machinery were the most challenging issue. Advancements in machine learning algorithms have been shown to increase the capabilities of MoCap systems in applications such as activity and fatigue detection as well as tool condition monitoring and object recognition.
4

Prim, Gabriel De Souza, Berenice Santos Gonçalves, and Milton Luiz Horn Vieira. "A representação do corpo e do movimento: uma análise da interatividade do motion capture." Design e Tecnologia 5, no. 09 (July 2, 2015): 23. http://dx.doi.org/10.23972/det2015iss09pp23-28.

Abstract:
Motion Capture (MoCap) is a system capable of measuring human movement. Motion capture technology can be used in digital entertainment productions such as games and 3D animation. This paper seeks to systematize the levels of interactivity made possible by optical motion capture equipment, analyzing the variables described by Steuer (1992) and the definitions of interactivity described by Lévy (2000), and discussing human interaction with MoCap in terms of the possibilities of appropriation and personalization, virtuality, the implication of the participant's image, and telepresence. The results allow a reflection on the system's potential for interactivity, concluding with observations on the high interactive potential of Motion Capture equipment.
5

Chang, Vanessa. "Catching the ghost: the digital gaze of motion capture." Journal of Visual Culture 18, no. 3 (December 2019): 305–26. http://dx.doi.org/10.1177/1470412919841022.

Abstract:
Created with digital motion capture, or mocap, the virtual dances Ghostcatching and as.phyx.ia render movement abstracted from choreographic bodies. These depictions of gestural doubles or ‘ghosts’ trigger a sense of the uncanny rooted in mocap’s digital processes. Examining these material processes, this article argues that this digital optical uncanny precipitates from the intersubjective relationship of performer, technology, and spectator. Mocap interpolates living bodies into a technologized visual field that parses these bodies as dynamic data sets, a process by which performing bodies and digital capture technologies coalesce into the film’s virtual body. This virtual body signals a computational agency at its heart, one that choreographs the intersubjective embodiments of real and virtual dancers, and spectators. Destabilizing the human body as a locus of perception, movement, and sensation, mocap triggers uncanny uncertainty in human volition. In this way, Ghostcatching and as.phyx.ia reflect the infiltration of computer vision technologies, such as facial recognition, into numerous aspects of contemporary life. Through these works, the author hopes to show how the digital gaze of these algorithms, imperceptible to the human eye, threatens individual autonomy with automation.
6

Wang, Xiaoting, Xiangxu Meng, Chenglei Yang, and Junqing Zhang. "Data Driven Avatars Roaming in Digital Museum." International Journal of Virtual Reality 8, no. 3 (January 1, 2009): 13–18. http://dx.doi.org/10.20870/ijvr.2009.8.3.2736.

Abstract:
This paper describes a motion capture (mocap) data-driven digital museum roaming system with highly realistic walking. We focus on three main questions: the animation of avatars, path planning, and collision detection among avatars. We use only a few walking clips from mocap data to synthesize walking motions of any direction and any length with natural transitions. Avatars roam the digital museum along its Voronoi skeleton path, shortest path, or offset path, and the Voronoi diagram is also used for collision detection. Different users can set up their own avatars and roam along their own paths. We modify the motion graph method by classifying the original mocap data and building a motion graph for each class, which greatly improves search efficiency.
7

Maruyama, Tsubasa, Mitsunori Tada, and Haruki Toda. "Riding Motion Capture System Using Inertial Measurement Units with Contact Constraints." International Journal of Automation Technology 13, no. 4 (July 5, 2019): 506–16. http://dx.doi.org/10.20965/ijat.2019.p0506.

Abstract:
The measurement of human motion is an important aspect of ergonomic mobility design, in which the mobility product is evaluated based on human factors obtained by digital human (DH) technologies. The optical motion-capture (MoCap) system has been widely used for measuring human motion in laboratories. However, it is generally difficult to measure human motion using mobility products in real-world scenarios, e.g., riding a bicycle on an outdoor slope, owing to unstable lighting conditions and camera arrangements. On the other hand, the inertial-measurement-unit (IMU)-based MoCap system does not require any optical devices, providing the potential for measuring riding motion even in outdoor environments. However, in general, the estimated motion is not necessarily accurate as there are many errors due to the nature of the IMU itself, such as drift and calibration errors. Thus, it is infeasible to apply the IMU-based system to riding motion estimation. In this study, we develop a new riding MoCap system using IMUs. The proposed system estimates product and human riding motions by combining the IMU orientation with contact constraints between the product and DH, e.g., DH hands in contact with handles. The proposed system is demonstrated with a bicycle ergometer, including the handles, seat, backrest, and foot pedals, as in general mobility products. The proposed system is further validated by comparing the estimated joint angles and positions with those of the optical MoCap for three different subjects. The experiment reveals both the effectiveness and limitations of the proposed system. It is confirmed that the proposed system improves the joint position estimation accuracy compared with a system using only IMUs. The angle estimation accuracy is also improved for near joints. However, it is observed that the angle accuracy decreases for a few joints. This is explained by the fact that the proposed system modifies the orientations of all body segments to satisfy the contact constraints, even if the orientations of a few joints are correct. This further confirms that the elapsed time using the proposed system is sufficient for real-time application.
8

Lv, Dong Yue, Zhi Pei Huang, Li Xin Sun, Neng Hai Yu, and Jian Kang Wu. "Model-Based Golf Swing Reconstruction." Applied Mechanics and Materials 530-531 (February 2014): 919–27. http://dx.doi.org/10.4028/www.scientific.net/amm.530-531.919.

Abstract:
To increase the efficiency of golf training, 3D swing reconstruction is broadly used among golf researchers. Traditional reconstruction methods apply a motion capture system (MOCAP) to obtain golfers' motion data and drive a bio-mechanical model directly. The cost of MOCAP systems restricts the application area of golf research, and the reconstruction quality of the swing relies on the accuracy of the motion data. We introduce dynamical analysis into swing reconstruction and propose a Dynamic Bayesian Network (DBN) model with Kinect to capture the swing motion. Our model focuses on modeling the bio-mechanical and dynamical relationships between key joints of the golfer during the swing. The positions of key joints are updated by the model and used as motion data to reconstruct the golf swing. Experimental results show that our results are comparable in accuracy with those acquired by an optical MOCAP system and can reconstruct the golf swing at much lower cost.
9

Mei, Feng, Qian Hu, Changxuan Yang, and Lingfeng Liu. "ARMA-Based Segmentation of Human Limb Motion Sequences." Sensors 21, no. 16 (August 19, 2021): 5577. http://dx.doi.org/10.3390/s21165577.

Abstract:
With the development of human motion capture (MoCap) equipment and motion analysis technologies, MoCap systems have been widely applied in many fields, including biomedicine, computer vision, and virtual reality. With the rapid increase in MoCap data collection across scenarios and applications, effective segmentation of MoCap data is becoming a crucial issue for further analysis of human motion posture and behavior, which requires both robustness and computational efficiency in the algorithm design. In this paper, we propose an unsupervised segmentation algorithm based on a limb-bone partition angle body structural representation and autoregressive moving average (ARMA) model fitting. The collected MoCap data are converted into the angle sequences formed by the human limb-bone partition segments and the central spine segment. The limb angle sequences are fitted with the ARMA model, and the segmentation points of the limb angle sequences are identified by analyzing the goodness of fit of the ARMA model. A median filtering algorithm is proposed to ensemble the segmentation results from individual limb motion sequences. A set of MoCap measurements, covering typical body motions collected from subjects of different heights and labeled by manual segmentation, was also conducted to evaluate the algorithm. The proposed algorithm is compared with principal component analysis (PCA), K-means clustering, and back propagation (BP) neural-network-based segmentation algorithms, and shows higher segmentation accuracy owing to a more semantic description of human motions by limb-bone partition angles. The results highlight the efficiency and performance of the proposed algorithm, and reveal the potential of this segmentation model for distinguishing inter- and intra-motion sequences.
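The core idea in this abstract can be illustrated with a toy sketch: fit an autoregressive model to a joint-angle sequence in sliding windows and place a segment boundary where the fit quality collapses. This is a simplification of the paper's method (which fits ARMA models per limb and ensembles the results); the AR(2) order, window size, and all names below are our own illustrative choices.

```python
import math

def ar2_residual(window):
    """Least-squares AR(2) fit to a window; return the mean squared
    one-step prediction error (our stand-in for ARMA goodness of fit)."""
    n = len(window)
    s11 = s12 = s22 = r1 = r2 = 0.0
    for t in range(2, n):
        x1, x2, y = window[t - 1], window[t - 2], window[t]
        s11 += x1 * x1; s12 += x1 * x2; s22 += x2 * x2
        r1 += x1 * y; r2 += x2 * y
    det = s11 * s22 - s12 * s12
    if abs(det) < 1e-12:
        return 0.0
    a = (r1 * s22 - r2 * s12) / det
    b = (r2 * s11 - r1 * s12) / det
    err = sum((window[t] - a * window[t - 1] - b * window[t - 2]) ** 2
              for t in range(2, n))
    return err / (n - 2)

def segment_boundary(angles, win=20):
    """Return the index whose surrounding window has the worst AR(2) fit."""
    best_i, best_e = None, -1.0
    for i in range(len(angles) - win):
        e = ar2_residual(angles[i:i + win])
        if e > best_e:
            best_i, best_e = i + win // 2, e
    return best_i

# Demo: a slow oscillation followed by a fast one; the true boundary is at 100.
angles = [math.sin(0.05 * t) for t in range(100)] + \
         [math.sin(0.6 * t) for t in range(100)]
boundary = segment_boundary(angles)
```

A pure sinusoid is fit exactly by an AR(2) model, so within-regime residuals are near zero and only windows straddling the change in dynamics score poorly, which is what makes this simple detector work.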
10

Delbridge, Matt. "The costume of MoCap: A spatial collision of velcro, avatar and Oskar Schlemmer." Scene 2, no. 1 (October 1, 2014): 221–32. http://dx.doi.org/10.1386/scene.2.1-2.221_1.

Abstract:
The rationale that governs motion of the organic in the cubical leans towards a transformation of the body in space, emphasizes its mathematical properties and highlights the potential to measure and plot movement – this is the work of a Motion Capture (MoCap) system. The translation in the MoCap studio from physical to virtual is facilitated by the MoCap suit, a device that determines the abstract cubical representation that drives first the neutral, and then the characterized, avatar in screen space. The enabling nature of the suit, as apparatus, is a spatial phenomenon informed by Schlemmer's abstract 'native' costume and his vision of the Tänzermensch as the most appropriate form to occupy cubical space. The MoCap suit is similarly native. It bridges the physical and virtual, provides a Victor Turner-like threshold and connection between environments, enacting a spatial discourse facilitated by costume. This collision of Velcro, avatar and Oskar Schlemmer allows a performance of space, binding historical modernity to contemporary practice. This performance of activated space is captured by a costume that endures, in Dorita Hannah's words, despite the human form.
11

Wei, Xiaopeng, Boxiang Xiao, and Qiang Zhang. "A retrieval method for human Mocap data based on biomimetic pattern recognition." Computer Science and Information Systems 7, no. 1 (2010): 99–109. http://dx.doi.org/10.2298/csis1001099w.

Abstract:
A retrieval method for human Mocap (Motion Capture) data based on biomimetic pattern recognition is presented in this paper. BVH rotation channels are extracted as motion features for both the retrieval instance and the motion data. Several hyper-sausage neurons are constructed according to the retrieval instance, and the domain covered by these trained neurons can be considered the distribution range of one kind of motion. Using the CMU free motion database, the retrieval algorithm has been implemented and examined, and the experimental results are illustrated. The main contributions and limitations are also discussed.
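A "hyper sausage" neuron in biomimetic pattern recognition covers the set of points within a fixed radius of the line segment joining two training samples, and a class is the union of such neurons. A minimal sketch of the membership test, with all names and the consecutive-pair construction being our own illustrative assumptions:

```python
import math

def dist_point_segment(p, a, b):
    """Euclidean distance from point p to the segment a-b (any dimension)."""
    ab = [bi - ai for ai, bi in zip(a, b)]
    ap = [pi - ai for ai, pi in zip(a, p)]
    denom = sum(x * x for x in ab)
    t = 0.0 if denom == 0 else max(0.0, min(1.0, sum(x * y for x, y in zip(ab, ap)) / denom))
    return math.sqrt(sum((pi - (ai + t * d)) ** 2 for pi, ai, d in zip(p, a, ab)))

class HyperSausageClass:
    """A motion class covered by sausage-shaped neurons, one per pair of
    consecutive training samples (a simplified construction)."""
    def __init__(self, samples, radius):
        self.segments = list(zip(samples, samples[1:]))
        self.radius = radius

    def contains(self, p):
        """True if p lies inside any sausage neuron of this class."""
        return any(dist_point_segment(p, a, b) <= self.radius
                   for a, b in self.segments)
```

In the paper's setting the points would be feature vectors built from BVH rotation channels rather than the 2-D toy points used here.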
12

Mohd Rizhan Wan Idris, Wan, Ahmad Rafi, Azman Bidin, and Azrul Amri Jamal. "A theoretical framework of extrinsic feedback based-automated evaluation system for martial arts." International Journal of Engineering & Technology 7, no. 2.14 (April 6, 2018): 74. http://dx.doi.org/10.14419/ijet.v7i2.14.11160.

Abstract:
Martial arts (MAs) are considered a preserved heritage, primarily because they promote certain identities of a culture. MA refers to the art of combat and self-defense, which normally combines offensive and defensive techniques. Technology advancements have made motion capture (MoCap) widely used in MA to capture and evaluate human performance. Nevertheless, research on extrinsic feedback (EF) in MA through developed evaluation systems is scarce. Furthermore, no complete evaluation-system framework has been suggested for MA. This paper presents a theoretical framework of an EF-based automated evaluation system in the context of traditional local MA. The framework contains three modules: MoCap, recognition, and evaluation. The MoCap module tracks the human body accurately in order to generate a skeleton, tune the focused target, and record human movements. The recognition module develops a script of motion for templates and classification purposes using Reverse-Gesture Description Language (R-GDL) and GDL, respectively. The evaluation module produces extrinsic feedback in terms of pattern and score for the performed movements. This theoretical framework will be used in the development of a digital tool to measure the accuracy and effectiveness of motions performed in one of the traditional local MAs.
13

Kitzig, Andreas, Julia Demmer, Tobias Bolten, Edwin Naroska, Gudrun Stockmanns, Reinhard Viga, and Anton Grabmaier. "An HMM-based averaging approach for creating mean motion data from a full-body Motion Capture system to support the development of a biomechanical model." Current Directions in Biomedical Engineering 4, no. 1 (September 1, 2018): 389–93. http://dx.doi.org/10.1515/cdbme-2018-0093.

Abstract:
Motion capture (MoCap) systems are used in game development and in sports for the assessment and digitalization of human movement. Furthermore, MoCap systems are also used in the medical and therapeutic fields for the analysis of human movement patterns; examples include gait analysis and examination of the musculoskeletal system and its function. Most applications relate to a specific person and their movement, or to the comparison of movements of different people. Within the scope of this paper, an averaged motion sequence is generated from MoCap data so that it can be used in biomechanical modeling and simulation. For averaging individual movement sequences of different persons, a Hidden Markov Model (HMM)-based approach is presented.
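The paper's HMM-based averaging is more involved than can be shown here; as a simpler stand-in that illustrates the same align-then-average idea, this sketch aligns two motion curves with dynamic time warping (DTW) and averages the aligned samples. Function names and sequence shapes are our own illustrative choices.

```python
def dtw_path(a, b):
    """Dynamic-time-warping alignment path between two scalar sequences."""
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost[i][j] = abs(a[i - 1] - b[j - 1]) + min(
                cost[i - 1][j - 1], cost[i - 1][j], cost[i][j - 1])
    # Backtrack from the end to recover the aligned index pairs.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        _, i, j = min((cost[i - 1][j - 1], i - 1, j - 1),
                      (cost[i - 1][j], i - 1, j),
                      (cost[i][j - 1], i, j - 1))
    return list(reversed(path))

def mean_motion(a, b):
    """Average two motion curves sample-by-sample after temporal alignment."""
    return [(a[i] + b[j]) / 2.0 for i, j in dtw_path(a, b)]
```

Aligning before averaging is the key point: averaging frame-by-frame without alignment would blur movements performed at different speeds, which is the problem both the HMM approach and this DTW sketch address.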
14

Masiero, A., F. Fissore, R. Antonello, A. Cenedese, and A. Vettore. "A COMPARISON OF UWB AND MOTION CAPTURE UAV INDOOR POSITIONING." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W13 (June 5, 2019): 1695–99. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w13-1695-2019.

Abstract:
Abstract. The number of applications involving unmanned aerial vehicles (UAVs) grew dramatically during the last decade. Despite such incredible success, the use of drones is still quite limited in GNSS-denied environments: indeed, reliable GNSS estimates of the drone position are still fundamental to enabling most UAV applications. Given these motivations, this paper considers an alternative positioning system for UAVs based on low-cost ultra-wideband (UWB). More specifically, this work aims at assessing the accuracy of UWB-based positioning by comparison with positions provided by a motion capture (MoCap) system. Since the MoCap accuracy is much higher than that of the UWB system, it can safely be used as a reference trajectory for validating the UWB estimates. In the considered experiment, the UWB system achieved a root mean square error of 39.4 cm in 3D positioning using an adaptive extended Kalman filter in which the measurement noise covariance was adaptively estimated.
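The adaptive filter mentioned in this abstract re-estimates the measurement noise covariance online. A heavily simplified one-dimensional sketch of that idea follows; the actual filter is multidimensional and nonlinear, and every parameter value and name here is an illustrative assumption, not taken from the paper.

```python
class AdaptiveKalman1D:
    """Constant-position Kalman filter whose measurement-noise variance R is
    re-estimated from a sliding window of innovations. All parameter values
    are illustrative defaults."""
    def __init__(self, x0=0.0, p0=1.0, q=0.01, r0=1.0, window=20):
        self.x, self.p, self.q, self.r = x0, p0, q, r0
        self.window = window
        self.innovations = []

    def update(self, z):
        self.p += self.q                      # predict (state assumed constant)
        innov = z - self.x                    # innovation
        self.innovations = (self.innovations + [innov])[-self.window:]
        if len(self.innovations) == self.window:
            mean = sum(self.innovations) / self.window
            var = sum((v - mean) ** 2 for v in self.innovations) / self.window
            # Adapt R: innovation variance minus predicted state covariance.
            self.r = max(var - self.p, 1e-6)
        k = self.p / (self.p + self.r)        # Kalman gain
        self.x += k * innov
        self.p *= (1.0 - k)
        return self.x
```

Adapting R from the observed innovation statistics lets the filter cope with UWB ranging noise that varies with geometry and multipath, instead of committing to a fixed noise model.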
15

Shchehelska, Yu. "СИСТЕМИ ЗАХОПЛЕННЯ РУХУ В ДОДАНІЙ РЕАЛЬНОСТІ: РІЗНОВИДИ ТА СПЕЦИФІКА ЇХ ЗАСТОСУВАННЯ У ПРАКТИЦІ ПРОМОЦІЙНИХ КОМУНІКАЦІЙ." State and Regions. Series: Social Communications, no. 1(41) (March 10, 2020): 128. http://dx.doi.org/10.32840/cpu2219-8741/2020.1(41).20.

Abstract:
This study identifies the main varieties of existing motion capture (mocap) systems that can be used primarily to create three-dimensional animation for augmented reality, establishes their specific features, and demonstrates examples of the practical use of certain types of such systems in promotional communications. It covers the specifics of the functioning of markerless systems and of all types of marker-based motion capture systems: optical (optically passive and optically active, including 'performance capture' as well as hybrid) and non-optical (acoustic, magnetic, mechanical and inertial). Two practical promotional cases are analyzed: the American social PR project 'Love Has No Labels' and the Japanese commercial brand 'ZozoTown' ('ZozoSuit'). It was found that, in the practice of promotional communications, inertial-type mocap systems are the most actively used, since they can be deployed directly during mass AR actions, primarily owing to their portability and ability to function in a limited space. It was also revealed that AR actions using motion capture systems are conducted primarily to create positive word-of-mouth and media resonance, significantly diversifying the arsenal of tools for communicating with the target audience and increasing the quality and efficiency of promotional messages, which together boost publicity capital. Other varieties of mocap systems (with the exception of markerless ones, which work through computer vision) are not used in real time for promotional events, primarily because they are cumbersome. However, they can be employed to create realistic 3D animation for future use in promotional campaigns, projects, and actions using augmented reality technologies. Key words: motion capture systems (mocap), augmented reality (AR), promotion, empirical marketing.
16

Park, Sangheon, and Sukhoon Yoon. "Validity Evaluation of an Inertial Measurement Unit (IMU) in Gait Analysis Using Statistical Parametric Mapping (SPM)." Sensors 21, no. 11 (May 25, 2021): 3667. http://dx.doi.org/10.3390/s21113667.

Abstract:
Inertial measurement units (IMUs) are possible alternatives to motion-capture systems (Mocap) for gait analysis. However, IMU-based system performance must be validated before widespread clinical use. Therefore, this study evaluated the validity of IMUs using statistical parametric mapping (SPM) for gait analysis. Ten healthy males (age, 30.10 ± 3.28 years; height, 175.90 ± 5.17 cm; weight, 82.80 ± 17.15 kg) participated in this study; they were asked to walk normally on a treadmill. Data were collected during walking at self-selected speeds (preferred speed, 1.34 ± 0.10 m/s) using both Mocap and an IMU. Calibration was performed directly before each gait measurement to minimize IMU drift error over time. The lower-extremity joint angles of the hip, knee, and ankle were calculated and compared between the IMUs and Mocap; the hip-joint angle did not differ significantly between the two. There were significant differences in the discrete (max, min, and range of motion) and continuous variables (waveform: 0–100%) of the knee and ankle joints between the IMUs and Mocap, particularly in the swing phase (p < 0.05). Our results suggest that IMU-based data can be used confidently for the stance phase but need further evaluation for the swing phase in gait analysis.
17

Valverde, Isabel Cavadas. "Dançando com motion capture: experimentações e deslumbramentos na expansão somático-tecnológica para corporealidades pós-humanas[Isabel Cavadas Valverde]." Repertório, no. 28 (December 5, 2017): 250. http://dx.doi.org/10.9771/r.v0i28.25009.

Abstract:
DANCING WITH MOTION CAPTURE: CHALLENGING EXPERIMENTATION WITHIN SOMATIC-TECHNOLOGICAL EXPANSION TOWARDS POSTHUMAN CORPOREALITIES. In this article I reflect on the dance-technology research that I have been developing, emphasizing experimental projects with the motion capture system (Mocap). Initially, while learning this system of three-dimensional (3D) virtualization of human movement during my doctoral research on dance-technology interfaces at the University of California, Riverside (UCRiverside, 2000-2004), I was motivated to understand through practice its aesthetic-poetic application potential in artworks by various artists, and also wanted to experiment with it creatively. Then, in the context of post-doctoral research (as a Foundation for Science and Technology post-doctoral fellow) developed at the Institute of Humane Studies and Intelligent Sciences (IHSIS), at the Visualization and Intelligent Multi Modal Interfaces Group (VIMMI/INESC-ID/IST/UL, 2005-2008), at the Intelligent Agents and Synthetic Characters Group (GAIPS/INESC-ID/IST/UL, 2008-2011), and at the MoveLab of Lusofona University of the Humanities and Technologies (ULHT), I explored Mocap together with other interface systems in experimental, trans-disciplinary collaborative projects, respectively Real Virtual Games and Senses Places, adopting an approach that integrates several research interests, progressively led by artistic practice as research. Presently, as a post-doctoral researcher in the Postgraduate Dance Program of the Bahia Federal University (PPGDance/UFBA, CAPES/PNPD, 2016/2017), supervised by prof. dr. Lenira Peral Rangel, I share crucial aspects of the development of the ongoing projects Senses Places, Touch Terrain, and Fado Dance, and of the new project Mocap Dance Library, integrated in depth into the somatic-technological dance research at the new Mocap Laboratory of the Dance School. This article thus encompasses my experiences with the Mocap system along the main research vectors over 15 years. Keywords: Dance-technology. Somatics. Motion capture. Human-machine interaction. Posthuman corporealities. Interactivity. Transmediality. Practice as research.
18

Koike, Sekiya, and Shunsuke Tazawa. "Quantification of a Ball-Speed Generating Mechanism of Baseball Pitching Using IMUs." Proceedings 49, no. 1 (June 15, 2020): 57. http://dx.doi.org/10.3390/proceedings2020049057.

Abstract:
The purpose of this study was to propose a methodology that quantifies the ball-speed generating mechanism of baseball pitching with the use of inertial measurement units (IMUs). IMUs were attached to the upper trunk, upper arm, forearm, and hand segments. The initial orientation parameters of each segment were identified using the differential iteration method from the acceleration and angular velocity of the sensor coordinate system output by the IMU attached to each segment. The motion of each segment was calculated, and the dynamic contributions were then computed. The motion of a baseball pitcher, who was instructed to throw at a target, was measured with a motion capture (mocap) system and IMUs. The results show that the quantitative analysis of the ball-speed generation mechanism by the proposed method closely matches that obtained with the mocap system. In the future, this method will be employed to evaluate the ball-speed generation mechanism outside controlled laboratory conditions in an effort to help understand and improve the player's motion.
APA, Harvard, Vancouver, ISO, and other styles
19

Tak, Igor, Willem-Paul Wiertz, Maarten Barendrecht, and Rob Langhout. "Validity of a New 3-D Motion Analysis Tool for the Assessment of Knee, Hip and Spine Joint Angles during the Single Leg Squat." Sensors 20, no. 16 (August 13, 2020): 4539. http://dx.doi.org/10.3390/s20164539.

Full text
Abstract:
Aim: To study the concurrent validity of a new sensor-based 3D motion capture (MoCap) tool to register knee, hip and spine joint angles during the single leg squat. Design: Cross-sectional. Setting: University laboratory. Participants: Forty-four physically active (Tegner ≥ 5) subjects (age 22.8 (±3.3)). Main outcome measures: Sagittal and frontal plane trunk, hip and knee angles at peak knee flexion. The sensor-based system consisted of 4 active (triaxial accelerometric, gyroscopic and geomagnetic) sensors wirelessly connected with an iPad. A conventional passive tracking 3D MoCap (OptiTrack) system served as gold standard. Results: All sagittal plane measurement correlations observed were very strong for the knee and hip (r = 0.929–0.988, p < 0.001). For sagittal plane spine assessment, the correlations were moderate (r = 0.708–0.728, p < 0.001). Frontal plane measurement correlations were moderate in size for the hip (ρ = 0.646–0.818, p < 0.001) and spine (ρ = 0.613–0.827, p < 0.001). Conclusions: The 3-D MoCap tool has good to excellent criterion validity for sagittal and frontal plane angles occurring in the knee, hip and spine during the single leg squat. This allows bringing this type of easily accessible MoCap technology outside laboratory settings.
APA, Harvard, Vancouver, ISO, and other styles
20

Lin, I.-Chen, and Ming Ouhyoung. "Mirror MoCap: Automatic and efficient capture of dense 3D facial motion parameters from video." Visual Computer 21, no. 6 (June 9, 2005): 355–72. http://dx.doi.org/10.1007/s00371-005-0291-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Ofori, Ernest Kwesi, Shuaijie Wang, and Tanvi Bhatt. "Validity of Inertial Sensors for Assessing Balance Kinematics and Mobility during Treadmill-Based Perturbation and Dance Training." Sensors 21, no. 9 (April 28, 2021): 3065. http://dx.doi.org/10.3390/s21093065.

Full text
Abstract:
Inertial sensors (IS) enable the kinematic analysis of human motion with fewer logistical limitations than the silver standard optoelectronic motion capture (MOCAP) system. However, there are no data on the validity of IS for perturbation training and during the performance of dance. The aim of this present study was to determine the concurrent validity of IS in the analysis of kinematic data during slip and trip-like perturbations and during the performance of dance. Seven IS and the MOCAP system were simultaneously used to capture the reactive response and dance movements of fifteen healthy young participants (Age: 18–35 years). Bland-Altman (BA) plots, root mean square errors (RMSE), Pearson’s correlation coefficients (R), and intraclass correlation coefficients (ICC) were used to compare kinematic variables of interest between the two systems for absolute equivalency and accuracy. Limits of agreement (LOA) of the BA plots ranged from −0.23 to 0.56 and −0.21 to 0.43 for slip and trip stability variables, respectively. The RMSE for slip and trip stabilities ranged from 0.11 to 0.20 and 0.11 to 0.16, respectively. For joint mobility in dance, LOA varied from −6.98 to 18.54, while RMSE ranged from 1.90 to 13.06. Comparison of IS and the optoelectronic MOCAP system for reactive balance and body segmental kinematics revealed that R varied from 0.59 to 0.81 and from 0.47 to 0.85, while ICC was from 0.50 to 0.72 and 0.45 to 0.84, respectively, for slip–trip perturbations and dance. The results indicate moderate to high concurrent validity between the IS and MOCAP systems and were consistent with results from similar studies. This suggests that IS are valid tools to quantitatively analyze reactive balance and mobility kinematics during slip–trip perturbation and the performance of dance in locations outside the laboratory, including clinical and home settings.
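The agreement statistics used in the abstract above (Bland-Altman limits of agreement and RMSE between two paired measurement systems) can be sketched in a few lines. This is a minimal pure-Python illustration with hypothetical IMU and MOCAP angle readings, not the study's actual data or code.

```python
import math

def bland_altman_limits(a, b):
    """Bias (mean difference) and 95% limits of agreement (bias ± 1.96 SD)
    between two paired measurement series, e.g. IS vs. MOCAP joint angles."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias - 1.96 * sd, bias + 1.96 * sd

def rmse(a, b):
    """Root mean square error between the two paired series."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

# Hypothetical paired knee-angle readings from the two systems (degrees)
imu   = [10.2, 12.1, 9.8, 11.5, 10.9]
mocap = [10.0, 12.4, 9.5, 11.8, 11.0]
lo, hi = bland_altman_limits(imu, mocap)
error = rmse(imu, mocap)
```

Narrow limits of agreement straddling zero and a small RMSE are what the study reports as evidence of concurrent validity.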
APA, Harvard, Vancouver, ISO, and other styles
22

Valencia-Marin, Cristian Kaori, Juan Diego Pulgarin-Giraldo, Luisa Fernanda Velasquez-Martinez, Andres Marino Alvarez-Meza, and German Castellanos-Dominguez. "An Enhanced Joint Hilbert Embedding-Based Metric to Support Mocap Data Classification with Preserved Interpretability." Sensors 21, no. 13 (June 29, 2021): 4443. http://dx.doi.org/10.3390/s21134443.

Full text
Abstract:
Motion capture (Mocap) data are widely used as time series to study human movement. Indeed, animation movies, video games, and biomechanical systems for rehabilitation are significant applications related to Mocap data. However, classifying multi-channel time series from Mocap requires coding the intrinsic dependencies (even nonlinear relationships) between human body joints. Furthermore, the same human action may have variations because the individual alters their movement, increasing the inter-/intraclass variability. Here, we introduce an enhanced Hilbert embedding-based approach from a cross-covariance operator, termed EHECCO, to map the input Mocap time series to a tensor space built from both 3D skeletal joints and a principal component analysis-based projection. Obtained results demonstrate how EHECCO represents and discriminates joint probability distributions as kernel-based evaluation of input time series within a tensor reproducing kernel Hilbert space (RKHS). Our approach achieves competitive classification results for style/subject and action recognition tasks on well-known publicly available databases. Moreover, EHECCO favors the interpretation of relevant anthropometric variables correlated with players’ expertise and acted movement on a Tennis-Mocap database (also publicly available with this work). Thereby, our EHECCO-based framework provides a unified representation (through the tensor RKHS) of the Mocap time series to compute linear correlations between a coded metric from joint distributions and player properties, i.e., age, body measurements, and sport movement (action class).
APA, Harvard, Vancouver, ISO, and other styles
23

Marín, Javier, Teresa Blanco, Juan de la Torre, and José J. Marín. "Gait Analysis in a Box: A System Based on Magnetometer-Free IMUs or Clusters of Optical Markers with Automatic Event Detection." Sensors 20, no. 12 (June 12, 2020): 3338. http://dx.doi.org/10.3390/s20123338.

Full text
Abstract:
Gait analysis based on full-body motion capture technology (MoCap) can be used in rehabilitation to aid in decision making during treatments or therapies. In order to promote the use of MoCap gait analysis based on inertial measurement units (IMUs) or optical technology, it is necessary to overcome certain limitations, such as the need for magnetically controlled environments, which affect IMU systems, or the need for additional instrumentation to detect gait events, which affects IMUs and optical systems. We present a MoCap gait analysis system called Move Human Sensors (MH), which incorporates proposals to overcome both limitations and can be configured via magnetometer-free IMUs (MH-IMU) or clusters of optical markers (MH-OPT). Using a test–retest reliability experiment with thirty-three healthy subjects (20 men and 13 women, 21.7 ± 2.9 years), we determined the reproducibility of both configurations. The assessment confirmed that the proposals performed adequately and allowed us to establish usage considerations. This study aims to enhance gait analysis in daily clinical practice.
APA, Harvard, Vancouver, ISO, and other styles
24

Chatzitofis, Anargyros, Dimitrios Zarpalas, Stefanos Kollias, and Petros Daras. "DeepMoCap: Deep Optical Motion Capture Using Multiple Depth Sensors and Retro-Reflectors." Sensors 19, no. 2 (January 11, 2019): 282. http://dx.doi.org/10.3390/s19020282.

Full text
Abstract:
In this paper, a marker-based, single-person optical motion capture method (DeepMoCap) is proposed using multiple spatio-temporally aligned infrared-depth sensors and retro-reflective straps and patches (reflectors). DeepMoCap explores motion capture by automatically localizing and labeling reflectors on depth images and, subsequently, on 3D space. Introducing a non-parametric representation to encode the temporal correlation among pairs of colorized depthmaps and 3D optical flow frames, a multi-stage Fully Convolutional Network (FCN) architecture is proposed to jointly learn reflector locations and their temporal dependency among sequential frames. The extracted reflector 2D locations are spatially mapped in 3D space, resulting in robust 3D optical data extraction. The subject’s motion is efficiently captured by applying a template-based fitting technique on the extracted optical data. Two datasets have been created and made publicly available for evaluation purposes; one comprising multi-view depth and 3D optical flow annotated images (DMC2.5D), and a second, consisting of spatio-temporally aligned multi-view depth images along with skeleton, inertial and ground truth MoCap data (DMC3D). The FCN model outperforms its competitors on the DMC2.5D dataset using 2D Percentage of Correct Keypoints (PCK) metric, while the motion capture outcome is evaluated against RGB-D and inertial data fusion approaches on DMC3D, outperforming the next best method by 4.5% in total 3D PCK accuracy.
APA, Harvard, Vancouver, ISO, and other styles
25

Delbridge, Matthew, and Riku Roihankorpi. "Intermedial Ontologies: Strategies of Preparedness, Research and Design in Real Time Performance Capture." Nordic Theatre Studies 26, no. 2 (September 9, 2014): 46. http://dx.doi.org/10.7146/nts.v26i2.24309.

Full text
Abstract:
The paper introduces and inspects core elements relative to the ‘live’ in performances that utilise real time Motion Capture (MoCap) systems and cognate/reactive virtual environments by drawing on interdisciplinary research conducted by Matthew Delbridge (University of Tasmania), and the collaborative live MoCap workshops carried out in projects DREX and VIMMA (2009-12 and 2013-14, University of Tampere). It also discusses strategies to revise manners of direction and performing, practical work processes, questions of production design and educational aspects peculiar to technological staging. Through the analysis of a series of performative experiments involving 3D real time virtual reality systems, projection mapping and reactive surfaces, new ways of interacting in/with performance have been identified. This poses a unique challenge to traditional approaches of learning about staging, dramaturgy, acting, dance and performance design in the academy, all of which are altered in a fundamental manner when real time virtual reality is introduced as a core element of the performative experience. Meanwhile, various analyses, descriptions and theorisations of technological performance have framed up-to-date policies on how to approach these questions more systematically. These have given rise to more sophisticated notions of preparedness of performing arts professionals, students and researchers to confront the potentials of new technologies and the forms of creativity and art they enable. The deployment of real time Motion Capture systems and co-present virtual environments in an educational setting constitutes a peculiar but informative case study for the above to be explored.
APA, Harvard, Vancouver, ISO, and other styles
26

Garimella, Raman, Thomas Peeters, Eduardo Parrilla, Jordi Uriel, Seppe Sels, Toon Huysmans, and Stijn Verwulgen. "Estimating Cycling Aerodynamic Performance Using Anthropometric Measures." Applied Sciences 10, no. 23 (December 2, 2020): 8635. http://dx.doi.org/10.3390/app10238635.

Full text
Abstract:
Aerodynamic drag force and projected frontal area (A) are commonly used indicators of aerodynamic cycling efficiency. This study investigated the accuracy of estimating these quantities using easy-to-acquire anthropometric and pose measures. In the first part, computational fluid dynamics (CFD) drag force calculations and A (m2) values from photogrammetry methods were compared using predicted 3D cycling models for 10 male amateur cyclists. The shape of the 3D models was predicted using anthropometric measures. Subsequently, the models were reposed from a standing to a cycling pose using joint angle data from an optical motion capture (mocap) system. In the second part, a linear regression analysis was performed to predict A using 26 anthropometric measures combined with joint angle data from two sources (optical and inertial mocap, separately). Drag calculations were strongly correlated with benchmark projected frontal area (coefficient of determination R2 = 0.72). A can accurately be predicted using anthropometric data and joint angles from optical mocap (root mean square error (RMSE) = 0.037 m2) or inertial mocap (RMSE = 0.032 m2). This study showed that aerodynamic efficiency can be predicted using anthropometric and joint angle data from commercially available, inexpensive posture tracking methods. The practical relevance for cyclists is to quantify and train posture during cycling for improving aerodynamic efficiency and hence performance.
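The regression step described above (predicting projected frontal area A from body measures) reduces, in its simplest form, to ordinary least squares. The sketch below fits a single hypothetical predictor (torso length) with invented numbers, purely to illustrate the idea; the study itself regresses on 26 anthropometric measures plus joint angles.

```python
def fit_line(x, y):
    """Ordinary least squares for one predictor:
    returns (intercept, slope) minimizing the sum of squared residuals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sxy / sxx
    return my - slope * mx, slope

# Hypothetical data: torso length (m) vs. measured frontal area A (m^2)
torso = [0.52, 0.55, 0.58, 0.60, 0.63]
area  = [0.33, 0.34, 0.36, 0.37, 0.39]
intercept, slope = fit_line(torso, area)

# Predict A for a new cyclist with a 0.57 m torso
predicted_A = intercept + slope * 0.57
```

With many predictors, the same normal-equation idea generalizes to multiple linear regression, which is what yields the reported RMSE of 0.032–0.037 m².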
APA, Harvard, Vancouver, ISO, and other styles
27

NOHARA, Ryuki, Yui ENDO, Mitsunori TADA, and Hiroshi TAKEMURA. "2A2-X02 Estimate contact region in grasp motion from individual hand model and motion capture data." Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec) 2015 (2015): _2A2—X02_1—_2A2—X02_3. http://dx.doi.org/10.1299/jsmermd.2015._2a2-x02_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Kotsifaki, Argyro, Rodney Whiteley, and Clint Hansen. "Dual Kinect v2 system can capture lower limb kinematics reasonably well in a clinical setting: concurrent validity of a dual camera markerless motion capture system in professional football players." BMJ Open Sport & Exercise Medicine 4, no. 1 (December 2018): e000441. http://dx.doi.org/10.1136/bmjsem-2018-000441.

Full text
Abstract:
Objectives: To determine whether a dual-camera markerless motion capture system can be used for lower limb kinematic evaluation in athletes in a preseason screening setting. Design: Descriptive laboratory study. Setting: Laboratory setting. Participants: Thirty-four (n=34) healthy athletes. Main outcome measures: Three dimensional lower limb kinematics during three functional tests: Single Leg Squat (SLS), Single Leg Jump, Modified Counter-movement Jump. The tests were simultaneously recorded using both a marker-based motion capture system and two Kinect v2 cameras using iPi Mocap Studio software. Results: Excellent agreement between systems for the flexion/extension range of motion of the shin during all tests and for the thigh abduction/adduction during SLS were seen. For peak angles, results showed excellent agreement for knee flexion. Poor correlation was seen for the rotation movements. Conclusions: This study supports the use of dual Kinect v2 configuration with the iPi software as a valid tool for assessment of sagittal and frontal plane hip and knee kinematic parameters but not axial rotation in athletes.
APA, Harvard, Vancouver, ISO, and other styles
29

XIANG, JIAN, and ZHIJUN ZHENG. "DOUBLE-REFERENCE INDEX FOR MOTION RETRIEVAL BY ISOMAP DIMENSIONALITY REDUCTION." International Journal of Pattern Recognition and Artificial Intelligence 24, no. 04 (June 2010): 601–18. http://dx.doi.org/10.1142/s0218001410008044.

Full text
Abstract:
Along with the development of the motion capture (mocap) technique, large-scale 3D motion databases have become increasingly available. In this paper, a novel approach is presented for motion retrieval based on a double-reference index (DRI). Due to the high dimensionality of motion features, Isomap nonlinear dimension reduction is used. In addition, an algorithmic framework is employed to approximate the optimal mapping function by a Radial Basis Function (RBF) in handling new data. Subsequently, a DRI is built based on selecting a small set of representative motion clips in the database. Thus, the candidate set is obtained by discarding the most unrelated motion clips to significantly reduce the number of costly similarity measures. Finally, experimental results show that these approaches are effective for motion data retrieval in large-scale databases.
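The candidate-filtering idea behind a reference index can be sketched compactly: precompute each clip's distance to a few reference clips, then use the triangle inequality to discard clips that cannot be close to the query. This is a simplified pure-Python illustration with made-up 2D feature vectors standing in for Isomap-reduced motion clips; the paper's DRI additionally involves the Isomap/RBF machinery.

```python
import math

def dist(u, v):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def build_index(clips, refs):
    """Precompute each clip's distance to every reference clip."""
    return [[dist(c, r) for r in refs] for c in clips]

def candidate_set(query, refs, index, radius):
    """Keep only clips that could lie within `radius` of the query.
    Triangle inequality: |d(q, r) - d(c, r)| <= d(q, c), so any clip
    violating that bound for some reference cannot be a match and is
    discarded without computing the costly full similarity measure."""
    qd = [dist(query, r) for r in refs]
    return [i for i, cd in enumerate(index)
            if all(abs(q - c) <= radius for q, c in zip(qd, cd))]

# Hypothetical reduced motion-clip features and two reference clips
clips = [(0.0, 0.0), (1.0, 0.0), (5.0, 5.0)]
refs = [(0.0, 0.0), (10.0, 0.0)]
index = build_index(clips, refs)
cands = candidate_set((0.5, 0.0), refs, index, radius=1.0)
```

Only the surviving candidates would then be compared with the expensive motion-similarity measure.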
APA, Harvard, Vancouver, ISO, and other styles
30

Azevedo-Coste, Christine, Roger Pissard-Gibollet, Gaelle Toupet, Éric Fleury, Jean-Christophe Lucet, and Gabriel Birgand. "Tracking Clinical Staff Behaviors in an Operating Room." Sensors 19, no. 10 (May 17, 2019): 2287. http://dx.doi.org/10.3390/s19102287.

Full text
Abstract:
Inadequate staff behaviors in an operating room (OR) may lead to environmental contamination and increase the risk of surgical site infection. In order to assess this statement objectively, we have developed an approach to analyze OR staff behaviors using a motion tracking system. The present article introduces a solution for the assessment of individual displacements in the OR by: (1) detecting human presence and quantifying movements using a motion capture (MOCAP) system and (2) observing doors’ movements by means of a wireless network of inertial sensors fixed on the doors and synchronized with the MOCAP system. The system was used in eight health care facilities sites during 30 cardiac and orthopedic surgery interventions. A total of 119 h of data were recorded and analyzed. Three hundred thirty four individual displacements were reconstructed. On average, only 10.6% individual positions could not be reconstructed and were considered undetermined, i.e., the presence in the room of the corresponding staff member could not be determined. The article presents the hardware and software developed together with the obtained reconstruction performances.
APA, Harvard, Vancouver, ISO, and other styles
31

Maruyama, Tsubasa, Satoshi Kanai, Hiroaki Date, and Mitsunori Tada. "Motion-capture-based walking simulation of digital human adapted to laser-scanned 3D as-is environments for accessibility evaluation." Journal of Computational Design and Engineering 3, no. 3 (March 25, 2016): 250–65. http://dx.doi.org/10.1016/j.jcde.2016.03.001.

Full text
Abstract:
Abstract Owing to our rapidly aging society, accessibility evaluation to enhance the ease and safety of access to indoor and outdoor environments for the elderly and disabled is increasing in importance. Accessibility must be assessed not only from the general standard aspect but also in terms of physical and cognitive friendliness for users of different ages, genders, and abilities. Meanwhile, human behavior simulation has been progressing in the areas of crowd behavior analysis and emergency evacuation planning. However, in human behavior simulation, environment models represent only “as-planned” situations. In addition, a pedestrian model cannot generate the detailed articulated movements of various people of different ages and genders in the simulation. Therefore, the final goal of this research was to develop a virtual accessibility evaluation by combining realistic human behavior simulation using a digital human model (DHM) with “as-is” environment models. To achieve this goal, we developed an algorithm for generating human-like DHM walking motions, adapting its strides, turning angles, and footprints to laser-scanned 3D as-is environments including slopes and stairs. The DHM motion was generated based only on a motion-capture (MoCap) data for flat walking. Our implementation constructed as-is 3D environment models from laser-scanned point clouds of real environments and enabled a DHM to walk autonomously in various environment models. The difference in joint angles between the DHM and MoCap data was evaluated. Demonstrations of our environment modeling and walking simulation in indoor and outdoor environments including corridors, slopes, and stairs are illustrated in this study. Highlights An adaptive walking simulation algorithm of the digital human was developed. The environment models are automatically generated from laser-scanned point clouds. A digital human can walk autonomously in various as-built environment models. 
Simulated walking motion of the digital human is similar to one of real human. Elapsed time of modeling and simulation is short enough for practical application.
APA, Harvard, Vancouver, ISO, and other styles
32

Marin, Javier, Jose J. Marin, Teresa Blanco, Juan de la Torre, Inmaculada Salcedo, and Elena Martitegui. "Is My Patient Improving? Individualized Gait Analysis in Rehabilitation." Applied Sciences 10, no. 23 (November 29, 2020): 8558. http://dx.doi.org/10.3390/app10238558.

Full text
Abstract:
In the rehabilitation field, clinicians are continually struggling to assess improvements in patients following interventions. In this paper, we propose an approach to use gait analysis based on inertial motion capture (MoCap) to monitor individuals during rehabilitation. Gait is a cyclical movement that generates a sufficiently large data sample in each capture session to statistically compare two different sessions from a single patient. Using this crucial idea, 21 heterogeneous patients with hemiplegic spasticity were assessed using gait analysis before and after receiving treatment with botulinum toxin injections. Afterwards, the two sessions for each patient were compared using the magnitude-based decision statistical method. Due to the challenge of classifying changes in gait variables such as improvements or impairments, assessing each patient’s progress required an interpretative process. After completing this process, we determined that 10 patients showed overall improvement, five patients showed overall impairment, and six patients did not show any overall change. Finally, the interpretation process was summarized by developing guidelines to aid in future assessments. In this manner, our approach provides graphical information about the patients’ progress to assess improvement following intervention and to support decision-making. This research contributes to integrating MoCap-based gait analysis into rehabilitation.
APA, Harvard, Vancouver, ISO, and other styles
33

Ma, Rong Fei. "Generating Real-Time Responsive Balance Recovery Animation." Advanced Materials Research 219-220 (March 2011): 391–95. http://dx.doi.org/10.4028/www.scientific.net/amr.219-220.391.

Full text
Abstract:
We propose a novel Biomechanics-based Responsive Balance Recovery (BRBR) technique for synthesizing realistic balance recovery animations. First, our BRBR technique is based on a simplified human biomechanical model of keeping balance, so as to interactively respond to contact forces in the environment. Then, we employ the Principal Component Analysis (PCA) to reduce the dimensions of the mocap (motion capture) database to ensure the search for the most qualified return-to segment in real-time. Finally, empirical results from three cases validate the approach.
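The PCA-accelerated database search described above amounts to projecting each pose onto precomputed principal components and scanning for the nearest segment in the reduced space. This minimal sketch assumes the mean vector and components have already been computed offline; all numbers are hypothetical.

```python
def project(frame, mean, components):
    """Project a flattened mocap frame onto precomputed principal
    components: y_k = (x - mean) . w_k  (dimensionality reduction)."""
    centered = [x - m for x, m in zip(frame, mean)]
    return [sum(c * w for c, w in zip(centered, comp))
            for comp in components]

def nearest_segment(query, database):
    """Linear scan for the database segment closest to the query pose
    in the reduced space (squared Euclidean distance)."""
    return min(range(len(database)),
               key=lambda i: sum((a - b) ** 2
                                 for a, b in zip(query, database[i])))

# Hypothetical 4D pose reduced to 2D with axis-aligned components
mean = [1.0, 1.0, 1.0, 1.0]
components = [[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0]]
reduced = project([3.0, 2.0, 1.0, 1.0], mean, components)
db = [[2.1, 1.1], [-5.0, 0.0]]
best = nearest_segment(reduced, db)
```

Searching in the reduced space is what makes a real-time lookup of the return-to segment feasible against a large mocap database.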
APA, Harvard, Vancouver, ISO, and other styles
34

Hanneton, Sylvain, Svetlana Dedobbeler, Thomas Hoellinger, and Agnès Roby-Brami. "Direct Kinematic Modeling of the Upper Limb During Trunk-Assisted Reaching." Journal of Applied Biomechanics 27, no. 3 (August 2011): 272–77. http://dx.doi.org/10.1123/jab.27.3.272.

Full text
Abstract:
The study proposes a rigid-body biomechanical model of the trunk and whole upper limb including the scapula, and a test of this model with a kinematic method using a six-dimensional (6-D) electromagnetic motion capture (mocap) device. Large unconstrained natural trunk-assisted reaching movements were recorded in 7 healthy subjects. The 3-D positions of anatomical landmarks were measured and then compared to their estimation given by the biomechanical chain fed with joint angles (the direct kinematics). Thus, the prediction errors were attributed to the different joints and to the different simplifications introduced in the model. Large (approx. 4 cm) end-point prediction errors at the level of the hand were reduced (to approx. 2 cm) if translations of the scapula were taken into account. As a whole, the 6-D mocap seems to give accurate results, except for pronosupination. The direct kinematic model could be used as a virtual mannequin for other applications, such as computer animation or clinical and ergonomical evaluations.
APA, Harvard, Vancouver, ISO, and other styles
35

Wei, Xiaopeng, Xiaoyong Fang, Qiang Zhang, and Dongsheng Zhou. "3D point pattern matching based on spatial geometric flexibility." Computer Science and Information Systems 7, no. 1 (2010): 231–46. http://dx.doi.org/10.2298/csis1001231w.

Full text
Abstract:
We propose a new method for matching two 3D point sets of identical cardinality with global similarity but local non-rigid deformations and distribution errors. This problem arises from marker-based optical motion capture (Mocap) systems for facial Mocap data. To establish one-to-one identifications, we introduce a forward 3D point pattern matching (PPM) method based on spatial geometric flexibility, which considers a non-rigid deformation between the two point sets. First, a model normalization algorithm based on simple rules is presented to normalize the two point sets into a fixed space. Second, a facial topological structure model is constructed, which is used to preserve spatial information for each FP. Finally, we introduce a Local Deformation Matrix (LDM) to rectify the local searching vector to meet the local deformation. Experimental results confirm that this method is applicable for robust 3D point pattern matching of sparse point sets with underlying non-rigid deformation and similar distribution.
APA, Harvard, Vancouver, ISO, and other styles
36

Partarakis, Nikolaos, Xenophon Zabulis, Antonis Chatziantoniou, Nikolaos Patsiouras, and Ilia Adami. "An Approach to the Creation and Presentation of Reference Gesture Datasets, for the Preservation of Traditional Crafts." Applied Sciences 10, no. 20 (October 19, 2020): 7325. http://dx.doi.org/10.3390/app10207325.

Full text
Abstract:
A wide spectrum of digital data are becoming available to researchers and industries interested in the recording, documentation, recognition, and reproduction of human activities. In this work, we propose an approach for understanding and articulating human motion recordings into multimodal datasets and VR demonstrations of actions and activities relevant to traditional crafts. To implement the proposed approach, we introduce Animation Studio (AnimIO) that enables visualisation, editing, and semantic annotation of pertinent data. AnimIO is compatible with recordings acquired by Motion Capture (MoCap) and Computer Vision. Using AnimIO, the operator can isolate segments from multiple synchronous recordings and export them in multimodal animation files. AnimIO can be used to isolate motion segments that refer to individual craft actions, as described by practitioners. The proposed approach has been iteratively designed for use by non-experts in the domain of 3D motion digitisation.
APA, Harvard, Vancouver, ISO, and other styles
37

Tsai, Chi Ming, Jian Ji Huang, Tsair Rong Chen, Siang Jhih Cin, and Ching Mu Chen. "Analysis and Implementation for the RPG Boxing Game." Applied Mechanics and Materials 851 (August 2016): 595–98. http://dx.doi.org/10.4028/www.scientific.net/amm.851.595.

Full text
Abstract:
This paper analyzes and implements a real boxing game machine. The content of the game machine consists of three parts: UNITY, programming, and hardware development. To bring an RPG role-play boxing game into a real game boxing machine, this paper first analyzes user requirements and then implements them in a traditional large boxing machine, so that players must consider how to fight through the content of the role-play boxing game. Moreover, to save time when producing 3D animations, motion capture equipment (MOCAP) is used to generate the characters: hero, monsters, and girls. All body movements of the characters are captured by MOCAP markers tied to the head, limbs, and torso. These movements are sent from PhaseSpace to our servers via the network, and the signals are processed in MotionBuilder software for recording and output. The output file can be transformed into a skeleton and an action graphic model, and the same can be done using MAYA software. All procedures in this RPG role-play boxing game design successfully integrate the machine side and the player side, bringing fun and new life to the game.
APA, Harvard, Vancouver, ISO, and other styles
38

JOÃO, FILIPA, ANTÓNIO VELOSO, SANDRA AMADO, PAULO ARMADA-DA-SILVA, and ANA C. MAURÍCIO. "CAN GLOBAL OPTIMIZATION TECHNIQUE COMPENSATE FOR MARKER SKIN MOVEMENT IN RAT KINEMATICS?" Journal of Mechanics in Medicine and Biology 14, no. 05 (August 2014): 1450065. http://dx.doi.org/10.1142/s0219519414500651.

Full text
Abstract:
The motion of the skeleton estimated from skin-attached marker-based motion capture (MOCAP) systems is known to be affected by significant bias caused by anatomical landmark mislocation, but especially by soft tissue artifacts (such as skin deformation and sliding, inertial effects and muscle contraction). As a consequence, the error associated with this bias can propagate to joint kinematics and kinetics data, particularly in small rodents. The purpose of this study was to perform a segmental kinematic analysis of the rat hindlimb during locomotion, using both global optimization and segmental optimization methods. Eight rats were evaluated during natural overground walking, and the motion of the right hindlimb was captured with an optoelectronic system while the animals walked on the track. Three-dimensional (3D) biomechanical analyses were carried out and hip, knee and ankle joint angular displacements and velocities were calculated. Comparison between the two methods demonstrated that the magnitude of the kinematic error due to skin movement increases in segmental optimization compared with global optimization. The kinematic results assessed with the global optimization method match more closely the joint angles and ranges of motion calculated from bone-derived kinematics, with the knee and hip joints showing the most significant differences.
APA, Harvard, Vancouver, ISO, and other styles
39

Mustaffa, Norsimaa, and Muhammad Zaffwan Idris. "Analysing Step Patterns on the Malaysian Folk Dance Zapin Lenga." Journal of Computational and Theoretical Nanoscience 17, no. 2 (February 1, 2020): 1503–10. http://dx.doi.org/10.1166/jctn.2020.8832.

Full text
Abstract:
Zapin Lenga is a Malaysian folk dance within Zapin Melayu, specifically known as Zapin Johor. Zapin Lenga originated in Muar, a district in the state of Johor, and is the oldest Zapin Melayu repertoire found in Johor. Zapin Lenga, which is usually performed for religious occasions, comprises six step patterns, known as ‘langkah’, that were performed only by males in the olden days. For preservation, we have used Motion Capture (MoCap) technology to record and digitise human motions. In this paper, we present primary work on preserving the dance movements of Zapin Lenga through Laban Movement Analysis (LMA), including Body, Effort, Shape and Space, in order to identify qualities of the dance movements. The dance movements were analysed with respect to musical rhythm using a segmentation method, and the distances between markers were measured to define speed.
APA, Harvard, Vancouver, ISO, and other styles
40

Behboodi, Ahad, Nicole Zahradka, Henry Wright, James Alesi, and Samuel C. K. Lee. "Real-Time Detection of Seven Phases of Gait in Children with Cerebral Palsy Using Two Gyroscopes." Sensors 19, no. 11 (June 1, 2019): 2517. http://dx.doi.org/10.3390/s19112517.

Full text
Abstract:
A recently designed gait phase detection (GPD) system, with the ability to detect all seven phases of gait in healthy adults, was modified for GPD in children with cerebral palsy (CP). A shank-attached gyroscope sent angular velocity to a rule-based algorithm in LabVIEW to identify the distinct characteristics of the signal. Seven typically developing (TD) children and five children with CP were asked to walk on a treadmill at their self-selected speed while using this system. Using only shank angular velocity, all seven phases of gait (Loading Response, Mid-Stance, Terminal Stance, Pre-Swing, Initial Swing, Mid-Swing and Terminal Swing) were reliably detected in real time. System performance was validated against two established GPD methods: (1) force-sensing resistors (GPD-FSR), for typically developing children, and (2) motion capture (GPD-MoCap), for both typically developing children and children with CP. The system detected over 99% of the phases identified by GPD-FSR and GPD-MoCap. Absolute values of average gait phase onset detection deviations relative to GPD-MoCap were less than 100 ms for both TD children and children with CP. The newly designed system, with its minimized sensor setup and low processing burden, is cosmetically acceptable and economical, making it a viable solution for real-time stand-alone and portable applications such as triggering functional electrical stimulation (FES) in rehabilitation systems. This paper verifies the applicability of the GPD system for identifying specific gait events to trigger FES and enhance gait in children with CP.
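The idea of rule-based phase detection from a single shank gyroscope can be sketched in miniature. The actual LabVIEW rules for all seven phases are not given in the abstract, so the thresholds and the reduced three-phase labeling below are purely illustrative:

```python
# Illustrative rule-based gait-phase labeling driven by shank angular
# velocity (sagittal plane). The seven-phase rules used in the study are
# not published in the abstract; the threshold and the coarse phase names
# here are hypothetical, for illustration only.

def detect_phases(omega, swing_thresh=1.0):
    """Label each angular-velocity sample (rad/s) with a coarse phase:
    'swing' during fast forward shank rotation, otherwise splitting
    stance at the sign change of the angular velocity."""
    phases = []
    for w in omega:
        if w > swing_thresh:
            phases.append("swing")
        elif w < 0:
            phases.append("early-stance")
        else:
            phases.append("late-stance")
    return phases
```

A real detector would also enforce the legal ordering of phases (a state machine), which is what makes single-sensor detection robust in practice.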
41

Cortés, Camilo, Luis Unzueta, Ana de los Reyes-Guzmán, Oscar E. Ruiz, and Julián Flórez. "Optical Enhancement of Exoskeleton-Based Estimation of Glenohumeral Angles." Applied Bionics and Biomechanics 2016 (2016): 1–20. http://dx.doi.org/10.1155/2016/5058171.

Full text
Abstract:
In Robot-Assisted Rehabilitation (RAR), the accurate estimation of the patient's limb joint angles is critical for assessing therapy efficacy. In RAR, the use of classic motion capture systems (MOCAPs) (e.g., optical and electromagnetic) to estimate the Glenohumeral (GH) joint angles is hindered by the exoskeleton body, which causes occlusions and magnetic disturbances. Moreover, the exoskeleton posture does not accurately reflect limb posture, as their kinematic models differ. To address these limitations in posture estimation, we propose installing the cameras of an optical marker-based MOCAP on the rehabilitation exoskeleton. The GH joint angles are then estimated by combining the estimated marker poses and the exoskeleton Forward Kinematics. Such a hybrid system prevents problems related to marker occlusions, reduced camera detection volume, and imprecise joint angle estimation due to the kinematic mismatch between the patient and exoskeleton models. This paper presents the formulation, simulation, and accuracy quantification of the proposed method with simulated human movements. In addition, a sensitivity analysis of the method's accuracy to marker position estimation errors, due to system calibration errors and marker drifts, has been carried out. The results show that, even with significant errors in the marker position estimation, the method's accuracy is adequate for RAR.
42

Millán, Michelle, and Hiram Cantú. "Wearable device for automatic detection and monitoring of freezing in Parkinson’s disease." SHS Web of Conferences 77 (2020): 05001. http://dx.doi.org/10.1051/shsconf/20207705001.

Full text
Abstract:
Freezing of gait (FOG) in Parkinson’s disease (PD) is described as a short-term episode of absence or considerable decrease of movement despite the intention to move forward. FOG is related to risk of falls and low quality of life for individuals with PD. FOG has been studied and analyzed through different techniques, including inertial measurement units (IMUs) and motion capture systems (MOCAP), both along with robust algorithms. Still, there is no standardized methodology to identify or quantify freezing episodes (FEs). In previous work from our group, a new methodology was developed to differentiate FEs from normal movement using position data obtained from a motion capture system. The purpose of this study is to determine whether this methodology is equally effective at identifying FEs when using IMUs. Twenty subjects with PD will perform two different gait-related tasks. Trials will be tracked by IMUs and filmed by a video camera; data from the IMUs will be compared to the time occurrence of FEs obtained from the videos. We expect this methodology to successfully detect FEs from the IMU data. The results would allow the development of a wearable device able to detect and monitor FOG. It is expected that the use of this type of device would allow clinicians to better understand FOG and improve patient care.
43

Yasin, Hashim, Mazhar Hussain, and Andreas Weber. "Keys for Action: An Efficient Keyframe-Based Approach for 3D Action Recognition Using a Deep Neural Network." Sensors 20, no. 8 (April 15, 2020): 2226. http://dx.doi.org/10.3390/s20082226.

Full text
Abstract:
In this paper, we propose a novel and efficient framework for 3D action recognition using a deep learning architecture. First, we develop a 3D normalized pose space that consists of only 3D normalized poses, which are generated by discarding translation and orientation information. From these poses, we extract joint features and employ them further in a Deep Neural Network (DNN) in order to learn the action model. The architecture of our DNN consists of two hidden layers with the sigmoid activation function and an output layer with the softmax function. Furthermore, we propose a keyframe extraction methodology through which, from a motion sequence of 3D frames, we efficiently extract the keyframes that contribute substantially to the performance of the action. In this way, we eliminate redundant frames and reduce the length of the motion. More precisely, we ultimately summarize the motion sequence, while preserving the original motion semantics. We only consider the remaining essential informative frames in the process of action recognition, and the proposed pipeline is sufficiently fast and robust as a result. Finally, we evaluate our proposed framework extensively on the publicly available benchmark Motion Capture (MoCap) datasets HDM05 and CMU. Our experiments show that the proposed scheme significantly outperforms other state-of-the-art approaches.
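The classifier architecture stated in the abstract (two sigmoid hidden layers followed by a softmax output) can be written down as a plain NumPy forward pass. The layer sizes and random weights below are arbitrary placeholders, not the trained model:

```python
import numpy as np

# Forward pass matching the stated architecture: two hidden layers with
# sigmoid activations and a softmax output layer. Dimensions (12 input
# joint features, 8 and 6 hidden units, 5 action classes) are made up.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - np.max(x))   # shift for numerical stability
    return e / e.sum()

def forward(features, W1, b1, W2, b2, W3, b3):
    h1 = sigmoid(W1 @ features + b1)   # first hidden layer
    h2 = sigmoid(W2 @ h1 + b2)         # second hidden layer
    return softmax(W3 @ h2 + b3)       # class probabilities

rng = np.random.default_rng(0)
x = rng.standard_normal(12)                      # joint-feature vector
W1, b1 = rng.standard_normal((8, 12)), np.zeros(8)
W2, b2 = rng.standard_normal((6, 8)), np.zeros(6)
W3, b3 = rng.standard_normal((5, 6)), np.zeros(5)
p = forward(x, W1, b1, W2, b2, W3, b3)           # sums to 1 over classes
```

The softmax output is a probability distribution over action classes; the predicted action is simply its argmax.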
44

Cortés, Camilo, Ana de los Reyes-Guzmán, Davide Scorza, Álvaro Bertelsen, Eduardo Carrasco, Ángel Gil-Agudo, Oscar Ruiz-Salguero, and Julián Flórez. "Inverse Kinematics for Upper Limb Compound Movement Estimation in Exoskeleton-Assisted Rehabilitation." BioMed Research International 2016 (2016): 1–14. http://dx.doi.org/10.1155/2016/2581924.

Full text
Abstract:
Robot-Assisted Rehabilitation (RAR) is relevant for treating patients affected by nervous system injuries (e.g., stroke and spinal cord injury). The accurate estimation of the joint angles of the patient's limbs in RAR is critical to assess patient improvement. The prevalent, economical method to estimate patient posture in Exoskeleton-based RAR is to approximate the limb joint angles with those of the Exoskeleton. This approximation is rough, since their kinematic structures differ. Motion capture systems (MOCAPs) can improve the estimates, at the expense of considerably overloading the therapy setup. Alternatively, the Extended Inverse Kinematics Posture Estimation (EIKPE) computational method models the limb and Exoskeleton as differing parallel kinematic chains. EIKPE has been tested with single-DOF movements of the wrist and elbow joints. This paper presents the assessment of EIKPE with elbow-shoulder compound movements (i.e., object prehension). Ground truth for the assessment is obtained from an optical MOCAP (not intended for the treatment stage). The assessment shows EIKPE rendering a good numerical approximation of the actual posture during compound movement execution, especially for the shoulder joint angles. This work opens the horizon for clinical studies with patient groups, Exoskeleton models, and movement types.
45

Potter, Michael V., Stephen M. Cain, Lauro V. Ojeda, Reed D. Gurchiek, Ryan S. McGinnis, and Noel C. Perkins. "Error-state Kalman filter for lower-limb kinematic estimation: Evaluation on a 3-body model." PLOS ONE 16, no. 4 (April 20, 2021): e0249577. http://dx.doi.org/10.1371/journal.pone.0249577.

Full text
Abstract:
Human lower-limb kinematic measurements are critical for many applications including gait analysis, enhancing athletic performance, reducing or monitoring injury risk, augmenting warfighter performance, and monitoring elderly fall risk, among others. We present a new method to estimate lower-limb kinematics using an error-state Kalman filter (ErKF) that utilizes an array of body-worn inertial measurement units (IMUs) and four kinematic constraints. We evaluate the method on a simplified 3-body model of the lower limbs (pelvis and two legs) during walking using data from simulation and experiment. Evaluation on this 3-body model permits direct evaluation of the ErKF method without several confounding error sources from human subjects (e.g., soft tissue artefacts and determination of anatomical frames). RMS differences for the three estimated hip joint angles all remain below 0.2 degrees compared to simulation and 1.4 degrees compared to experimental optical motion capture (MOCAP). RMS differences for stride length and step width remain within 1% and 4%, respectively, compared to simulation, and within 7% and 5%, respectively, compared to experiment (MOCAP). The results are particularly important because they foretell future success in advancing this approach to more complex models for human movement. In particular, our future work aims to extend this approach to a 7-body model of the human lower limbs composed of the pelvis, thighs, shanks, and feet.
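The error-state idea behind such a filter can be illustrated in one dimension: propagate a nominal angle by integrating the gyro, track the covariance of its error, and inject the estimated error back into the nominal state when a constraint-like measurement arrives. This is only a toy analogue of the paper's multi-body formulation, with made-up values:

```python
# Toy 1D error-state Kalman step. A real implementation propagates a
# full multi-body state with four kinematic constraints; here the state
# is a single angle and the "measurement" stands in for a constraint.

def erkf_step(theta_nom, gyro, dt, P, Q, R, meas=None):
    theta_nom = theta_nom + gyro * dt   # nominal-state propagation
    P = P + Q                           # error-covariance propagation
    if meas is not None:
        K = P / (P + R)                 # Kalman gain
        delta = K * (meas - theta_nom)  # estimated error state
        theta_nom += delta              # inject error into nominal state
        P = (1 - K) * P                 # covariance update
    return theta_nom, P

# One step with illustrative numbers: gyro 1 rad/s over 0.1 s, then a
# measurement of 0.2 rad pulls the estimate between 0.1 and 0.2.
theta, P_est = erkf_step(0.0, gyro=1.0, dt=0.1, P=1.0, Q=0.01, R=1.0, meas=0.2)
```

Keeping the filter on the small error state (rather than the full state) is what makes the linearization accurate even when the nominal trajectory moves through large rotations.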
46

Yasin, Hashim, and Björn Krüger. "An Efficient 3D Human Pose Retrieval and Reconstruction from 2D Image-Based Landmarks." Sensors 21, no. 7 (April 1, 2021): 2415. http://dx.doi.org/10.3390/s21072415.

Full text
Abstract:
We propose an efficient and novel architecture for 3D articulated human pose retrieval and reconstruction from 2D landmarks extracted from a 2D synthetic image, an annotated 2D image, an in-the-wild real RGB image or even a hand-drawn sketch. Given 2D joint positions in a single image, we devise a data-driven framework to infer the corresponding 3D human pose. To this end, we first normalize 3D human poses from Motion Capture (MoCap) dataset by eliminating translation, orientation, and the skeleton size discrepancies from the poses and then build a knowledge-base by projecting a subset of joints of the normalized 3D poses onto 2D image-planes by fully exploiting a variety of virtual cameras. With this approach, we not only transform 3D pose space to the normalized 2D pose space but also resolve the 2D-3D cross-domain retrieval task efficiently. The proposed architecture searches for poses from a MoCap dataset that are near to a given 2D query pose in a definite feature space made up of specific joint sets. These retrieved poses are then used to construct a weak perspective camera and a final 3D posture under the camera model that minimizes the reconstruction error. To estimate unknown camera parameters, we introduce a nonlinear, two-fold method. We exploit the retrieved similar poses and the viewing directions at which the MoCap dataset was sampled to minimize the projection error. Finally, we evaluate our approach thoroughly on a large number of heterogeneous 2D examples generated synthetically, 2D images with ground-truth, a variety of real in-the-wild internet images, and a proof of concept using 2D hand-drawn sketches of human poses. We conduct a pool of experiments to perform a quantitative study on PARSE dataset. We also show that the proposed system yields competitive, convincing results in comparison to other state-of-the-art methods.
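The retrieval step (normalizing poses, then searching a knowledge base for the nearest neighbours of a query in a joint-feature space) can be sketched as below. The normalization here removes only translation and scale, and the Euclidean distance is an illustrative choice, not necessarily the authors' feature space:

```python
import numpy as np

# Sketch of nearest-neighbour pose retrieval. Poses are (J, 2) arrays of
# 2D joint positions; normalization removes translation and scale
# (orientation removal is omitted for brevity).

def normalize(pose):
    p = np.asarray(pose, float)
    p = p - p.mean(axis=0)          # remove translation
    return p / np.linalg.norm(p)    # remove scale

def retrieve(query2d, knowledge_base, k=3):
    """Return indices of the k knowledge-base poses nearest the query."""
    q = normalize(query2d).ravel()
    dists = [np.linalg.norm(q - normalize(p).ravel()) for p in knowledge_base]
    return np.argsort(dists)[:k]

# A translated, scaled copy of a pose retrieves the original pose.
pose0 = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])
pose1 = np.array([[0.0, 0.0], [0.0, 1.0], [2.0, 0.0]])
query = 3 * pose0 + 7
nearest = retrieve(query, [pose0, pose1], k=1)
```

In the paper, the knowledge base holds 2D projections of MoCap poses from many virtual cameras, so this 2D search effectively resolves the 2D-3D cross-domain retrieval.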
47

Zaltieri, Martina, Carlo Massaroni, Daniela Lo Presti, Marco Bravi, Riccardo Sabbadini, Sandra Miccinilli, Silvia Sterzi, Domenico Formica, and Emiliano Schena. "A Wearable Device Based on a Fiber Bragg Grating Sensor for Low Back Movements Monitoring." Sensors 20, no. 14 (July 9, 2020): 3825. http://dx.doi.org/10.3390/s20143825.

Full text
Abstract:
Low back pain (LBP) is one of the musculoskeletal disorders that most affects workers. Among the working categories that mainly experience this disorder are video terminal workers. Because it burdens the National Health Service and causes absenteeism in workplaces, LBP constitutes a relevant socio-economic burden. In such a scenario, prompt detection of incorrect seated postures can help prevent the occurrence of this disorder. To date, many tools capable of monitoring spinal ranges of motion (ROMs) are marketed, but most of them are unusable in working environments due to their bulkiness, discomfort and invasiveness. In recent decades, fiber optic sensors have made their mark, enabling the creation of light and compact wearable systems. In this study, a novel wearable device embedding a Fiber Bragg Grating sensor for the detection of lumbar flexion-extensions (F/E) in seated subjects is proposed. First, the manufacturing process of the sensing element is presented together with its mechanical characterization, which shows a linear response to strain with a high correlation coefficient (R2 > 0.99) and a sensitivity (Sε) of 0.20 nm∙mε−1. Then, the capability of the wearable device to measure F/E in the sagittal body plane was experimentally assessed on a small population of volunteers, using a Motion Capture (MoCap) system as the gold standard, showing a good ability of the system to track the lumbar F/E trend over time. Additionally, the lumbar ROMs were evaluated in terms of intervertebral lumbar distances (ΔdL3−L1) and angles, exhibiting moderate to good agreement with the MoCap outputs (the maximum Mean Absolute Error obtained is ~16% in detecting ΔdL3−L1). The proposed wearable device is a first step in the development of FBG-based wearable systems for monitoring workers' safety.
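Given the linear response reported above, converting a measured Bragg wavelength shift into strain is a one-line inversion of the sensitivity; the example shift value below is made up for illustration:

```python
# Invert the FBG characterization reported in the abstract:
# wavelength shift = S * strain, with S = 0.20 nm per millistrain
# (linear, R^2 > 0.99). The 0.05 nm example shift is illustrative.

S_EPS = 0.20  # sensitivity, nm per millistrain

def strain_from_shift(delta_lambda_nm):
    """Return strain in millistrain for a Bragg wavelength shift in nm."""
    return delta_lambda_nm / S_EPS

strain = strain_from_shift(0.05)  # a 0.05 nm shift is 0.25 millistrain
```

Mapping that strain to a lumbar flexion-extension angle then requires the device's own calibration against the MoCap gold standard, which the paper performs experimentally.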
48

He, Yuan, Xinyu Li, Runlong Li, Jianping Wang, and Xiaojun Jing. "A Deep-Learning Method for Radar Micro-Doppler Spectrogram Restoration." Sensors 20, no. 17 (September 3, 2020): 5007. http://dx.doi.org/10.3390/s20175007.

Full text
Abstract:
Radio frequency interference, which makes it difficult to produce high-quality radar spectrograms, is a major issue for micro-Doppler-based human activity recognition (HAR). In this paper, we propose a deep-learning-based method to detect and cut out the interference in spectrograms. Then, we restore the spectrograms in the cut-out region. First, a fully convolutional neural network (FCN) is employed to detect and remove the interference. Then, a coarse-to-fine generative adversarial network (GAN) is proposed to restore the part of the spectrogram that is affected by the interferences. The simulated motion capture (MOCAP) spectrograms and the measured radar spectrograms with interference are used to verify the proposed method. Experimental results from both qualitative and quantitative perspectives show that the proposed method can mitigate the interference and restore high-quality radar spectrograms. Furthermore, the comparison experiments also demonstrate the efficiency of the proposed approach.
49

Freire, Sérgio, Geise Santos, Augusto Armondes, Eduardo A. L. Meneses, and Marcelo M. Wanderley. "Evaluation of Inertial Sensor Data by a Comparison with Optical Motion Capture Data of Guitar Strumming Gestures." Sensors 20, no. 19 (October 8, 2020): 5722. http://dx.doi.org/10.3390/s20195722.

Full text
Abstract:
Computing technologies have opened up a myriad of possibilities for expanding the sonic capabilities of acoustic musical instruments. Musicians nowadays employ a variety of rather inexpensive, wireless sensor-based systems to obtain refined control of interactive musical performances in actual musical situations like live concerts. It is essential, though, to clearly understand the capabilities and limitations of such acquisition systems and their potential influence on high-level control of musical processes. In this study, we evaluate one such system, composed of an inertial sensor (MetaMotionR) and a hexaphonic nylon guitar, for capturing strumming gestures. To characterize this system, we compared it with a high-end commercial motion capture system (Qualisys), typically used in the controlled environments of research laboratories, in two complementary tasks: comparisons of rotational and translational data. For the rotations, we were able to compare our results with those found in the literature, obtaining RMSE below 10° for 88% of the curves. The translations were compared in two ways: by double derivation of positional data from the mocap and by double integration of IMU acceleration data. For the task of estimating displacements from acceleration data, we developed a compensative-integration method to deal with the oscillatory character of the strumming, whose approximate results are highly dependent on the type of gesture and segmentation; a value of 0.77 was obtained for the average of the normalized covariance coefficients of the displacement magnitudes. Although not in the ideal range, these results point to a clearly acceptable trade-off between the flexibility, portability and low cost of the proposed system and the limited use and cost of the high-end motion capture standard in interactive music setups.
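The double-integration pipeline for displacement can be sketched as follows; the mean-removal step between integrations is only a crude stand-in for the authors' compensative-integration method, exploiting the fact that a cyclic strumming gesture should have zero net velocity:

```python
import numpy as np

# Sketch: displacement from IMU acceleration by double integration, with
# simple drift compensation (remove the mean velocity) between the two
# integrations. Real pipelines segment the gesture and compensate more
# carefully; this only illustrates the basic idea.

def displacement_from_accel(acc, dt):
    acc = np.asarray(acc, dtype=float)
    vel = np.cumsum(acc) * dt
    vel -= vel.mean()            # cyclic gesture: net velocity ~ 0
    pos = np.cumsum(vel) * dt
    return pos
```

Without the compensation step, any constant bias in the accelerometer grows quadratically in the position estimate, which is why raw double integration is unusable over more than a fraction of a second.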
50

Radin Umar, Radin Zaid, Muhammad Naqiuddin Khafiz, Nazreen Abdullasim, Fatin Ayuni Mohd Azli Lee, and Nadiah Ahmad. "THE EFFECT OF TRANSFER DISTANCE TO LOWER BACK TWISTING AND BENDING PATTERNS IN MANUAL TRANSFER TASK." Jurnal Teknologi 83, no. 2 (February 2, 2021): 125–33. http://dx.doi.org/10.11113/jurnalteknologi.v83.14559.

Full text
Abstract:
Manual material handling (MMH) activities rely on human effort with minimal aid from mechanical devices. MMH is typically associated with poor lower back posture, which can lead to lower back injury. The likelihood of developing musculoskeletal disorders (MSDs) increases when poor working posture exists in combination with repetition and/or forceful exertion. In manual transfer activity, the distance between the lifting origin and destination can affect workers' exposure to poor lower back working posture. An experimental study was conducted to investigate the effect of transfer distance on lower back twisting and bending patterns in a manual transfer activity. Positional body joint data of 26 male subjects were captured using a motion capture (MOCAP) system with MVN Studio software. Calculated data were plotted against time to track subjects' lower back twisting and bending behavior. In general, a longer transfer distance resulted in a smaller twisting angle but a larger bending angle. Statistical analysis in this study suggests 0.75 m to 1.00 m as the optimum transfer distance to balance workers' lower back twisting and bending exposure. This study is envisioned to provide insights for practitioners to consider space requirements for MMH activities to minimize lower back twisting and bending, and consequently the development of MSDs.
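A lower back twist angle of the kind tracked here can be computed from positional joint data as the horizontal-plane angle between the shoulder axis and the pelvis axis. The abstract does not specify the study's angle definitions, so this convention and the marker names are assumptions:

```python
import numpy as np

# Illustrative "twist" angle from positional joint data: angle between
# the shoulder axis and the hip (pelvis) axis, projected onto the
# horizontal plane. Coordinates are (x, y, z) with z vertical. This is a
# common convention, not necessarily the one used in the study.

def twist_angle_deg(l_shoulder, r_shoulder, l_hip, r_hip):
    s = np.asarray(r_shoulder, float) - np.asarray(l_shoulder, float)
    h = np.asarray(r_hip, float) - np.asarray(l_hip, float)
    s[2] = h[2] = 0.0  # drop the vertical component (horizontal plane)
    cosang = s.dot(h) / (np.linalg.norm(s) * np.linalg.norm(h))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
```

Plotting this angle against time for each transfer distance is what lets the analysis track the twisting behavior described above.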