
Journal articles on the topic 'DeepLabCut'


Consult the top 48 journal articles for your research on the topic 'DeepLabCut.'


1

Laurence-Chasen, J. D., Armita R. Manafzadeh, Nicholas G. Hatsopoulos, Callum F. Ross, and Fritzie I. Arce-McShane. "Integrating XMALab and DeepLabCut for high-throughput XROMM." Journal of Experimental Biology 223, no. 17 (2020): jeb226720. http://dx.doi.org/10.1242/jeb.226720.

Abstract:
Marker tracking is a major bottleneck in studies involving X-ray reconstruction of moving morphology (XROMM). Here, we tested whether DeepLabCut, a new deep learning package built for markerless tracking, could be applied to videoradiographic data to improve data processing throughput. Our novel workflow integrates XMALab, the existing XROMM marker tracking software, and DeepLabCut while retaining each program's utility. XMALab is used for generating training datasets, error correction and 3D reconstruction, whereas the majority of marker tracking is transferred to DeepLabCut for automatic batch processing. In the two case studies that involved an in vivo behavior, our workflow achieved a 6- to 13-fold increase in data throughput. In the third case study, which involved an acyclic, post-mortem manipulation, DeepLabCut struggled to generalize to the range of novel poses and did not surpass the throughput of XMALab alone. Deployed in the proper context, this new workflow facilitates large-scale XROMM studies that were previously precluded by software constraints.
2

Lauer, Jessy, Mu Zhou, Shaokai Ye, et al. "Multi-animal pose estimation, identification and tracking with DeepLabCut." Nature Methods 19, no. 4 (2022): 496–504. http://dx.doi.org/10.1038/s41592-022-01443-0.

Abstract:
Estimating the pose of multiple animals is a challenging computer vision problem: frequent interactions cause occlusions and complicate the association of detected keypoints to the correct individuals, and the animals themselves often look highly similar and interact more closely than in typical multi-human scenarios. To take up this challenge, we build on DeepLabCut, an open-source pose estimation toolbox, and provide high-performance animal assembly and tracking, features required for multi-animal scenarios. Furthermore, we integrate the ability to predict an animal's identity to assist tracking (in case of occlusions). We illustrate the power of this framework with four datasets varying in complexity, which we release to serve as a benchmark for future algorithm development.
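The keypoint-to-individual association problem this abstract describes can be caricatured with a toy greedy nearest-neighbour tracker. This sketch is purely illustrative (names and logic are my own); DeepLabCut's actual multi-animal assembly relies on learned grouping and tracking costs, not this heuristic:

```python
import math

def greedy_associate(prev_positions, detections):
    """Assign each new detection to the closest previously tracked animal.

    prev_positions: dict animal_id -> (x, y) from the last frame
    detections: list of (x, y) keypoints detected in the current frame
    Returns dict animal_id -> (x, y). Toy stand-in for a real tracker.
    """
    assignments = {}
    remaining = list(detections)
    for animal_id, (px, py) in prev_positions.items():
        if not remaining:
            break
        # claim the unassigned detection nearest to this animal's last position
        best = min(remaining, key=lambda d: math.hypot(d[0] - px, d[1] - py))
        assignments[animal_id] = best
        remaining.remove(best)
    return assignments
```

A heuristic like this fails exactly where the paper says the real problem is hard: when similar-looking animals cross or occlude one another, which is why identity prediction is integrated into the full framework.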
3

Ferres, Kim, Timo Schloesser, and Peter A. Gloor. "Predicting Dog Emotions Based on Posture Analysis Using DeepLabCut." Future Internet 14, no. 4 (2022): 97. http://dx.doi.org/10.3390/fi14040097.

Abstract:
This paper describes an emotion recognition system for dogs automatically identifying the emotions anger, fear, happiness, and relaxation. It is based on a previously trained machine learning model, which uses automatic pose estimation to differentiate emotional states of canines. Towards that goal, we have compiled a picture library with full body dog pictures featuring 400 images with 100 samples each for the states “Anger”, “Fear”, “Happiness” and “Relaxation”. A new dog keypoint detection model was built using the framework DeepLabCut for animal keypoint detector training. The newly trained detector learned from a total of 13,809 annotated dog images and possesses the capability to estimate the coordinates of 24 different dog body part keypoints. Our application is able to determine a dog’s emotional state visually with an accuracy between 60% and 70%, exceeding human capability to recognize dog emotions.
4

Sehara, Keisuke, Paul Zimmer-Harwood, Matthew E. Larkum, and Robert N. S. Sachdev. "Real-Time Closed-Loop Feedback in Behavioral Time Scales Using DeepLabCut." eNeuro 8, no. 2 (2021): ENEURO.0415-20.2021. http://dx.doi.org/10.1523/eneuro.0415-20.2021.

5

Nath, Tanmay, Alexander Mathis, An Chi Chen, Amir Patel, Matthias Bethge, and Mackenzie Weygandt Mathis. "Using DeepLabCut for 3D markerless pose estimation across species and behaviors." Nature Protocols 14, no. 7 (2019): 2152–76. http://dx.doi.org/10.1038/s41596-019-0176-0.

6

Habe, Hitoshi, Yoshiki Takeuchi, Kei Terayama, and Masa-aki Sakagami. "Pose Estimation of Swimming Fish Using NACA Airfoil Model for Collective Behavior Analysis." Journal of Robotics and Mechatronics 33, no. 3 (2021): 547–55. http://dx.doi.org/10.20965/jrm.2021.p0547.

Abstract:
We propose a pose estimation method using a National Advisory Committee for Aeronautics (NACA) airfoil model for fish schools. This method allows one to understand the state in which fish are swimming based on their posture and dynamic variations. Moreover, their collective behavior can be understood based on their posture changes. Therefore, fish pose is a crucial indicator for collective behavior analysis. We use the NACA model to represent the fish posture; this enables more accurate tracking and movement prediction owing to the capability of the model in describing posture dynamics. To fit the model to video data, we first adopt the DeepLabCut toolbox to detect body parts (i.e., head, center, and tail fin) in an image sequence. Subsequently, we apply a particle filter to fit a set of parameters from the NACA model. The results from DeepLabCut, i.e., three points on a fish body, are used to adjust the components of the state vector. This enables more reliable estimation results to be obtained when the speed and direction of the fish change abruptly. Experimental results using both simulation data and real video data demonstrate that the proposed method provides good results, including when rapid changes occur in the swimming direction.
7

Wrench, Alan, and Jonathan Balch-Tomes. "Beyond the Edge: Markerless Pose Estimation of Speech Articulators from Ultrasound and Camera Images Using DeepLabCut." Sensors 22, no. 3 (2022): 1133. http://dx.doi.org/10.3390/s22031133.

Abstract:
Automatic feature extraction from images of speech articulators is currently achieved by detecting edges. Here, we investigate the use of pose estimation deep neural nets with transfer learning to perform markerless estimation of speech articulator keypoints using only a few hundred hand-labelled images as training input. Midsagittal ultrasound images of the tongue, jaw, and hyoid and camera images of the lips were hand-labelled with keypoints, trained using DeepLabCut and evaluated on unseen speakers and systems. Tongue surface contours interpolated from estimated and hand-labelled keypoints produced an average mean sum of distances (MSD) of 0.93, s.d. 0.46 mm, compared with 0.96, s.d. 0.39 mm, for two human labellers, and 2.3, s.d. 1.5 mm, for the best performing edge detection algorithm. A pilot set of simultaneous electromagnetic articulography (EMA) and ultrasound recordings demonstrated partial correlation among three physical sensor positions and the corresponding estimated keypoints and requires further investigation. The accuracy of estimating lip aperture from camera video was high, with a mean MSD of 0.70, s.d. 0.56 mm compared with 0.57, s.d. 0.48 mm for two human labellers. DeepLabCut was found to be a fast, accurate and fully automatic method of providing unique kinematic data for tongue, hyoid, jaw, and lips.
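The contour agreement reported here can be illustrated with a simple symmetric nearest-point distance between two keypoint sets. This is a generic sketch of such a metric, not necessarily the exact MSD definition the paper uses:

```python
import math

def mean_min_distance(contour_a, contour_b):
    """Symmetrised mean nearest-point distance between two contours,
    each a list of (x, y) points. Illustrative stand-in for an MSD-style
    agreement metric between estimated and hand-labelled contours."""
    def one_way(src, dst):
        # average, over points of src, of the distance to the closest dst point
        return sum(
            min(math.hypot(x - u, y - v) for (u, v) in dst)
            for (x, y) in src
        ) / len(src)
    return 0.5 * (one_way(contour_a, contour_b) + one_way(contour_b, contour_a))
```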
8

Mathis, Alexander, Pranav Mamidanna, Kevin M. Cury, et al. "DeepLabCut: markerless pose estimation of user-defined body parts with deep learning." Nature Neuroscience 21, no. 9 (2018): 1281–89. http://dx.doi.org/10.1038/s41593-018-0209-y.

9

Liu, Ruiqing, Juncai Zhu, and Xiaoping Rao. "Murine Motion Behavior Recognition Based on DeepLabCut and Convolutional Long Short-Term Memory Network." Symmetry 14, no. 7 (2022): 1340. http://dx.doi.org/10.3390/sym14071340.

Abstract:
Murine behavior recognition is widely used in biology, neuroscience, pharmacology, and other aspects of research, and provides a basis for judging the psychological and physiological state of mice. To solve the problem whereby traditional behavior recognition methods only model behavioral changes in mice over time or space, we propose a symmetrical algorithm that can capture spatiotemporal information based on behavioral changes. The algorithm first uses the improved DeepLabCut keypoint detection algorithm to locate the nose, left ear, right ear, and tail root of the mouse, and then uses the ConvLSTM network to extract spatiotemporal information from the keypoint feature map sequence to classify five behaviors of mice: walking straight, resting, grooming, standing upright, and turning. We developed a murine keypoint detection and behavior recognition dataset, and experiments showed that the method achieved a percentage of correct keypoints (PCK) of 87±1% at three scales and against four backgrounds, while the classification accuracy for the five kinds of behaviors reached 93±1%. The proposed method is thus accurate for keypoint detection and behavior recognition, and is a useful tool for murine motion behavior recognition.
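The PCK figure quoted above is a standard pose-estimation metric: a predicted keypoint counts as correct when it falls within a distance threshold of its ground-truth location. A minimal sketch, with the threshold semantics assumed:

```python
import math

def pck(predicted, ground_truth, threshold):
    """Percentage of correct keypoints: fraction of predictions lying
    within `threshold` pixels of the corresponding ground-truth point."""
    correct = sum(
        1
        for (px, py), (gx, gy) in zip(predicted, ground_truth)
        if math.hypot(px - gx, py - gy) <= threshold
    )
    return correct / len(ground_truth)
```

In practice the threshold is often normalised by subject size (e.g. a fraction of body or head length) so that scores are comparable across the scales and backgrounds the abstract mentions.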
10

Zhan, Wei, Yafeng Zou, Zhangzhang He, and Zhiliang Zhang. "Key Points Tracking and Grooming Behavior Recognition of Bactrocera minax (Diptera: Trypetidae) via DeepLabCut." Mathematical Problems in Engineering 2021 (August 2, 2021): 1–15. http://dx.doi.org/10.1155/2021/1392362.

Abstract:
Statistical analysis of Bactrocera grooming behavior is important for pest control and human health. Based on DeepLabCut, this study proposes a noninvasive and effective method to track the key points of Bactrocera minax and to detect and analyze its grooming behavior. The results are analyzed and calculated automatically by a computer program. Traditional movement tracking methods are invasive; for instance, the use of artificial pheromone may affect the behavior of Bactrocera minax, thus directly affecting the accuracy and reliability of experimental results. Traditional research studies mainly rely on manual work for behavior analysis and statistics: researchers need to play the video frame by frame and record the time interval of each grooming behavior manually, which is time-consuming, laborious, and inaccurate. So the advantages of automated analysis are obvious. Using the method proposed in this paper, the image data of 94,538 frames from 5 adult Bactrocera were analyzed and 14 key points were tracked. The overall tracking accuracy was as high as 96.7%. In the behavior analysis and statistics, the accuracy rates for five of the grooming behaviors were all above 96%, and the accuracy rates for the remaining two grooming behaviors were over 87%. The experimental results show that the automatic noninvasive method designed in this paper can track many key points of Bactrocera minax with high accuracy and ensure the accuracy of insect behavior recognition and analysis, which greatly reduces the manual observation time and provides a new method for key point tracking and behavior recognition of related insects.
11

Li, Jia, Feilong Kang, Yongan Zhang, Yanqiu Liu, and Xia Yu. "Research on Tracking and Identification of Typical Protective Behavior of Cows Based on DeepLabCut." Applied Sciences 13, no. 2 (2023): 1141. http://dx.doi.org/10.3390/app13021141.

Abstract:
In recent years, traditional farming methods have been increasingly replaced by more modern, intelligent farming techniques; this shift towards information-driven, intelligent farming is becoming a trend. When bitten by insects, cows display stress behaviors, including tail wagging, head tossing, leg kicking, ear flapping, and skin fluttering. The study of cow protective behavior can indirectly reveal the health status of cows and their living patterns under different environmental conditions, allowing for the evaluation of the breeding environment and animal welfare status. In this study, we generated key point feature marker information using the DeepLabCut detection algorithm and constructed the spatial relationships of the cow feature marker points to detect protective behavior based on changes in the cow's head swinging and walking performance. The algorithm can detect the protective behavior of cows, with detection accuracy reaching the level of manual detection. The next step in this research focuses on analyzing differences in the protective behaviors of cows in different environments, which can help in cow breed selection. It is an important guide for diagnosing the health status of cows and improving milk production in a practical setting.
12

Kosourikhina, Veronika, Diarmuid Kavanagh, Michael J. Richardson, and David M. Kaplan. "Validation of deep learning-based markerless 3D pose estimation." PLOS ONE 17, no. 10 (2022): e0276258. http://dx.doi.org/10.1371/journal.pone.0276258.

Abstract:
Deep learning-based approaches to markerless 3D pose estimation are being adopted by researchers in psychology and neuroscience at an unprecedented rate. Yet many of these tools remain unvalidated. Here, we report on the validation of one increasingly popular tool (DeepLabCut) against simultaneous measurements obtained from a reference measurement system (Fastrak) with well-known performance characteristics. Our results confirm close (mm range) agreement between the two, indicating that under specific circumstances deep learning-based approaches can match more traditional motion tracking methods. Although more work needs to be done to determine their specific performance characteristics and limitations, this study should help build confidence within the research community using these new tools.
13

Suryanto, Michael Edbert, Ferry Saputra, Kevin Adi Kurnia, et al. "Using DeepLabCut as a Real-Time and Markerless Tool for Cardiac Physiology Assessment in Zebrafish." Biology 11, no. 8 (2022): 1243. http://dx.doi.org/10.3390/biology11081243.

Abstract:
DeepLabCut (DLC) is a deep learning-based tool initially invented for markerless pose estimation in mammals. In this study, we explored the possibility of adopting this tool for conducting markerless cardiac physiology assessment in an important aquatic toxicology model, the zebrafish (Danio rerio). Initially, high-definition videography was applied to capture heartbeat information at a frame rate of 30 frames per second (fps). Next, 20 videos from different individuals were used to perform convolutional neural network training by labeling the heart chamber (ventricle) with eight landmarks. Using a 152-layer Residual Network (ResNet-152) trained for 500,000 iterations, we obtained a model that can track the heart chamber in real time. Later, we validated DLC performance against the previously published ImageJ Time Series Analysis (TSA) and Kymograph (KYM) methods. We also evaluated DLC performance by challenging experimental animals with ethanol and ponatinib to induce cardiac abnormality and heartbeat irregularity. The results showed that DLC is more accurate than the TSA method in several parameters tested. The DLC-trained model also detected the ventricle of zebrafish embryos even in the occurrence of heart abnormalities, such as pericardial edema. We believe that this tool is beneficial for research studies, especially for cardiac physiology assessment in zebrafish embryos.
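Converting a tracked ventricle trace into a heart rate, as this study does, ultimately amounts to counting beats in a 1-D signal sampled at a known frame rate. A toy sketch under that simplification (a real analysis would first smooth and threshold the trace rather than count raw local maxima):

```python
def beats_per_minute(signal, fps):
    """Count simple local maxima in a 1-D ventricle-position trace and
    convert the beat count to beats per minute.

    signal: sequence of per-frame position/size values
    fps: video frame rate in frames per second
    """
    # a frame is a "beat peak" if it exceeds both neighbours (toy detector)
    peaks = [
        i
        for i in range(1, len(signal) - 1)
        if signal[i] > signal[i - 1] and signal[i] > signal[i + 1]
    ]
    duration_s = len(signal) / fps
    return 60.0 * len(peaks) / duration_s
```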
14

Yu, Rim, and Yongsoon Choi. "OkeyDoggy3D: A Mobile Application for Recognizing Stress-Related Behaviors in Companion Dogs Based on Three-Dimensional Pose Estimation through Deep Learning." Applied Sciences 12, no. 16 (2022): 8057. http://dx.doi.org/10.3390/app12168057.

Abstract:
Dogs often express their stress through physical motions that can be recognized by their owners. We propose a mobile application that analyzes a companion dog's behavior and three-dimensional poses via deep learning. As existing research on pose estimation has focused on humans, obtaining a large dataset comprising images showing animal joint locations is a challenge. Nevertheless, we generated such a dataset and used it to train an AI model. Furthermore, we analyzed circling behavior, which is associated with stress in companion dogs. To this end, we used the VideoPose3D model to estimate the 3D poses of companion dogs from the 2D pose estimates produced by the DeepLabCut model and developed a mobile app that provides analytical information on the stress-related behaviors, as well as the walking and isolation times, of companion dogs. Finally, we interviewed five certified experts to evaluate the validity and applicability of the app.
15

Whiteway, Matthew R., Dan Biderman, Yoni Friedman, et al. "Partitioning variability in animal behavioral videos using semi-supervised variational autoencoders." PLOS Computational Biology 17, no. 9 (2021): e1009439. http://dx.doi.org/10.1371/journal.pcbi.1009439.

Abstract:
Recent neuroscience studies demonstrate that a deeper understanding of brain function requires a deeper understanding of behavior. Detailed behavioral measurements are now often collected using video cameras, resulting in an increased need for computer vision algorithms that extract useful information from video data. Here we introduce a new video analysis tool that combines the output of supervised pose estimation algorithms (e.g. DeepLabCut) with unsupervised dimensionality reduction methods to produce interpretable, low-dimensional representations of behavioral videos that extract more information than pose estimates alone. We demonstrate this tool by extracting interpretable behavioral features from videos of three different head-fixed mouse preparations, as well as a freely moving mouse in an open field arena, and show how these interpretable features can facilitate downstream behavioral and neural analyses. We also show how the behavioral features produced by our model improve the precision and interpretation of these downstream analyses compared to using the outputs of either fully supervised or fully unsupervised methods alone.
16

Liao, Pu, and Liu Guixiong. "Pressure vessel-oriented visual inspection method based on deep learning." PLOS ONE 17, no. 5 (2022): e0267743. http://dx.doi.org/10.1371/journal.pone.0267743.

Abstract:
The detection of surface parameters of pressure vessel welds guarantees safe operation. To address the problems of low efficiency and poor accuracy of traditional manual inspection methods, a method for measuring weld morphological parameters combining vision and structured light is proposed in this study. First, a feature point extraction algorithm for weld parameters based on deep convolution was proposed. An accurate extraction method of weld image feature point coordinates was designed based on the combination of the loss function via seam undercut feature recognition and weld feature point extraction network structure. Second, a training data enhancement method based on the third-order non-uniform rational B-spline (NURBS) curve was proposed to reduce the amount of data collection for training. Finally, a pressure vessel measurement device was designed, and the feature point extraction performance of the deep network proposed in this study was compared with that of common feature point extraction networks, DeepLabCut and HRNet, to analyze the theoretical accuracy of the surface parameter measurement. The results indicated that the theoretical accuracy of the parameter measurements was within 0.065 mm.
17

Liu, Chengqi, Han Zhou, Jing Cao, et al. "Behavior Trajectory Tracking of Piglets Based on DLC-KPCA." Agriculture 11, no. 9 (2021): 843. http://dx.doi.org/10.3390/agriculture11090843.

Abstract:
Tracking the behavior trajectories of pigs in groups is becoming increasingly important for welfare feeding. A novel method was proposed in this study to accurately track the individual trajectories of pigs in groups and analyze their behavior characteristics. First, a multi-pig trajectory tracking model was established based on DeepLabCut (DLC) to realize the daily trajectory tracking of piglets. Second, a high-dimensional spatiotemporal feature model was established based on kernel principal component analysis (KPCA) to achieve nonlinear trajectory optimal clustering. At the same time, an abnormal trajectory correction model was established from five dimensions (semantics, space, angle, time, and velocity) to avoid trajectory loss and drift. Finally, a thermal map of the track distribution was established to analyze the four activity areas of the piggery (resting, drinking, excretion, and feeding areas). Experimental results show that the trajectory tracking accuracy of our method reaches 96.88%, the tracking speed is 350 fps, and the loss value is 0.002. Thus, the method based on DLC-KPCA can meet the requirements of identifying piggery areas and tracking piglets' behavior. This study is helpful for automatic monitoring of animal behavior and provides data support for breeding.
18

Mundorf, Annakarina, Hiroshi Matsui, Sebastian Ocklenburg, and Nadja Freund. "Analyzing Turning Behavior after Repeated Lithium, Ketamine, or NaCl Injection and Chronic Stress Exposure in Mice." Symmetry 14, no. 11 (2022): 2352. http://dx.doi.org/10.3390/sym14112352.

Abstract:
A single chronic stress is often considered a potential reinforcer in psychiatric disorders. Lithium and ketamine both seem to ameliorate the consequences of stress. Here, male mice were either injected with lithium carbonate (LiCl), ketamine hydrochloride (KET), or sodium chloride (NaCl; controls) over nine consecutive days. Treatment was followed by 2 h of restraint stress over the first seven days. On the 9th day, 2 h after injection, all animals were tested in the open field and novel object tests, and behavior was analyzed using the toolbox 'DeepLabCut'. To exclude an effect of generally altered locomotion activity on turning behavior, further parameters were assessed. Treatment before chronic stress exposure did not influence the total number of turns, nor the direction of turning behavior in the open field and the novel object test. Additionally, general locomotion did not differ. However, mice treated with LiCl showed a stronger turning bias (i.e., larger absolute lateralization quotients) in the novel object test when compared to mice treated with KET. This study underlines the potential of investigating turning behavior as a sensitive and reliable marker of stress reaction. Additionally, analyzing behavioral asymmetries in the context of psychopharmacological treatment can render new insights.
19

Vonstad, Elise Klæbo, Xiaomeng Su, Beatrix Vereijken, Kerstin Bach, and Jan Harald Nilsen. "Comparison of a Deep Learning-Based Pose Estimation System to Marker-Based and Kinect Systems in Exergaming for Balance Training." Sensors 20, no. 23 (2020): 6940. http://dx.doi.org/10.3390/s20236940.

Abstract:
Using standard digital cameras in combination with deep learning (DL) for pose estimation is promising for the in-home and independent use of exercise games (exergames). We need to investigate to what extent such DL-based systems can provide satisfying accuracy on exergame relevant measures. Our study assesses temporal variation (i.e., variability) in body segment lengths, while using a Deep Learning image processing tool (DeepLabCut, DLC) on two-dimensional (2D) video. This variability is then compared with a gold-standard, marker-based three-dimensional Motion Capturing system (3DMoCap, Qualisys AB), and a 3D RGB-depth camera system (Kinect V2, Microsoft Inc). Simultaneous data were collected from all three systems, while participants (N = 12) played a custom balance training exergame. The pose estimation DLC-model is pre-trained on a large-scale dataset (ImageNet) and optimized with context-specific pose annotated images. Wilcoxon’s signed-rank test was performed in order to assess the statistical significance of the differences in variability between systems. The results showed that the DLC method performs comparably to the Kinect and, in some segments, even to the 3DMoCap gold standard system with regard to variability. These results are promising for making exergames more accessible and easier to use, thereby increasing their availability for in-home exercise.
20

Johnson, Caleb D., Jereme Outerleys, and Irene S. Davis. "Agreement Between Sagittal Foot and Tibia Angles During Running Derived From an Open-Source Markerless Motion Capture Platform and Manual Digitization." Journal of Applied Biomechanics 38, no. 2 (2022): 111–16. http://dx.doi.org/10.1123/jab.2021-0323.

Abstract:
Several open-source platforms for markerless motion capture offer the ability to track 2-dimensional (2D) kinematics using simple digital video cameras. We sought to establish the performance of one of these platforms, DeepLabCut. Eighty-four runners who had sagittal plane videos recorded of their left lower leg were included in the study. Data from 50 participants were used to train a deep neural network for 2D pose estimation of the foot and tibia segments. The trained model was used to process novel videos from 34 participants for continuous 2D coordinate data. Overall network accuracy was assessed using the train/test errors. Foot and tibia angles were calculated for 7 strides using manual digitization and markerless methods. Agreement was assessed with mean absolute differences and intraclass correlation coefficients. Bland–Altman plots and paired t tests were used to assess systematic bias. The train/test errors for the trained network were 2.87/7.79 pixels, respectively (0.5/1.2 cm). Compared to manual digitization, the markerless method was found to systematically overestimate foot angles and underestimate tibial angles (P < .01, d = 0.06–0.26). However, excellent agreement was found between the segment calculation methods, with mean differences ≤1° and intraclass correlation coefficients ≥.90. Overall, these results demonstrate that open-source, markerless methods are a promising new tool for analyzing human motion.
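The segment angles this study compares are derived from pairs of tracked 2-D keypoints. A minimal sketch of that calculation and of a mean-absolute-difference agreement measure follows; the angle convention (measured from the horizontal image axis) is an assumption for illustration:

```python
import math

def segment_angle_deg(proximal, distal):
    """Angle of the segment from the proximal to the distal keypoint,
    in degrees, measured from the horizontal image axis."""
    dx = distal[0] - proximal[0]
    dy = distal[1] - proximal[1]
    return math.degrees(math.atan2(dy, dx))

def mean_absolute_difference(angles_a, angles_b):
    """Mean absolute difference between two methods' angle series,
    e.g. markerless versus manually digitized segment angles."""
    return sum(abs(a - b) for a, b in zip(angles_a, angles_b)) / len(angles_a)
```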
21

Sturman, Oliver, Lukas von Ziegler, Christa Schläppi, et al. "Deep learning-based behavioral analysis reaches human accuracy and is capable of outperforming commercial solutions." Neuropsychopharmacology 45, no. 11 (2020): 1942–52. http://dx.doi.org/10.1038/s41386-020-0776-y.

Abstract:
To study brain function, preclinical research heavily relies on animal monitoring and the subsequent analyses of behavior. Commercial platforms have enabled semi high-throughput behavioral analyses by automating animal tracking, yet they poorly recognize ethologically relevant behaviors and lack the flexibility to be employed in variable testing environments. Critical advances based on deep-learning and machine vision over the last couple of years now enable markerless tracking of individual body parts of freely moving rodents with high precision. Here, we compare the performance of commercially available platforms (EthoVision XT14, Noldus; TSE Multi-Conditioning System, TSE Systems) to cross-verified human annotation. We provide a set of videos—carefully annotated by several human raters—of three widely used behavioral tests (open field test, elevated plus maze, forced swim test). Using these data, we then deployed the pose estimation software DeepLabCut to extract skeletal mouse representations. Using simple post-analyses, we were able to track animals based on their skeletal representation in a range of classic behavioral tests at similar or greater accuracy than commercial behavioral tracking systems. We then developed supervised machine learning classifiers that integrate the skeletal representation with the manual annotations. This new combined approach allows us to score ethologically relevant behaviors with similar accuracy to humans, the current gold standard, while outperforming commercial solutions. Finally, we show that the resulting machine learning approach eliminates variation both within and between human annotators. In summary, our approach helps to improve the quality and accuracy of behavioral data, while outperforming commercial systems at a fraction of the cost.
22

Gillette, Ross, Michelle Dias, Michael P. Reilly, et al. "Two Hits of EDCs Three Generations Apart: Effects on Social Behaviors in Rats, and Analysis by Machine Learning." Toxics 10, no. 1 (2022): 30. http://dx.doi.org/10.3390/toxics10010030.

Abstract:
All individuals are directly exposed to extant environmental endocrine-disrupting chemicals (EDCs), and indirectly exposed through transgenerational inheritance from our ancestors. Although direct and ancestral exposures can each lead to deficits in behaviors, their interactions are not known. Here we focused on social behaviors based on evidence of their vulnerability to direct or ancestral exposures, together with their importance in reproduction and survival of a species. Using a novel “two hits, three generations apart” experimental rat model, we investigated interactions of two classes of EDCs across six generations. PCBs (a weakly estrogenic mixture Aroclor 1221, 1 mg/kg), Vinclozolin (antiandrogenic, 1 mg/kg) or vehicle (6% DMSO in sesame oil) were administered to pregnant rat dams (F0) to directly expose the F1 generation, with subsequent breeding through paternal or maternal lines. A second EDC hit was given to F3 dams, thereby exposing the F4 generation, with breeding through the F6 generation. Approximately 1200 male and female rats from F1, F3, F4 and F6 generations were run through tests of sociability and social novelty as indices of social preference. We leveraged machine learning using DeepLabCut to analyze nuanced social behaviors such as nose touching with accuracy similar to a human scorer. Surprisingly, social behaviors were affected in ancestrally exposed but not directly exposed individuals, particularly females from a paternally exposed breeding lineage. Effects varied by EDC: Vinclozolin affected aspects of behavior in the F3 generation while PCBs affected both the F3 and F6 generations. Taken together, our data suggest that specific aspects of behavior are particularly vulnerable to heritable ancestral exposure of EDC contamination, that there are sex differences, and that lineage is a key factor in transgenerational outcomes.
APA, Harvard, Vancouver, ISO, and other styles
23

Baker, Sunderland, Anand Tekriwal, Gidon Felsen, et al. "Automatic extraction of upper-limb kinematic activity using deep learning-based markerless tracking during deep brain stimulation implantation for Parkinson’s disease: A proof of concept study." PLOS ONE 17, no. 10 (2022): e0275490. http://dx.doi.org/10.1371/journal.pone.0275490.

Full text
Abstract:
Optimal placement of deep brain stimulation (DBS) therapy for treating movement disorders routinely relies on intraoperative motor testing for target determination. However, in current practice, motor testing relies on subjective interpretation and correlation of motor and neural information. Recent advances in computer vision could improve assessment accuracy. We describe our application of deep learning-based computer vision to conduct markerless tracking for measuring motor behaviors of patients undergoing DBS surgery for the treatment of Parkinson’s disease. Video recordings were acquired during intraoperative kinematic testing (N = 5 patients), as part of standard of care for accurate implantation of the DBS electrode. Kinematic data were extracted from videos post hoc using the Python-based computer vision suite DeepLabCut. Both manual and automated (80.00% accuracy) approaches were used to extract kinematic episodes from threshold-derived kinematic fluctuations. Active motor epochs were compressed by modeling upper-limb deflections with a parabolic fit. A semi-supervised classification model, a support vector machine (SVM), trained on the parameters defined by the parabolic fit reliably predicted movement type. Across all cases, tracking was well calibrated (i.e., reprojection pixel errors 0.016–0.041; accuracies >95%). SVM-predicted classification demonstrated high accuracy (85.70%), including for two common upper-limb movements, arm chain pulls (92.30%) and hand clenches (76.20%), with accuracy validated using a leave-one-out process for each patient. These results demonstrate successful capture and categorization of motor behaviors critical for assessing the optimal brain target for DBS surgery. Conventional motor testing procedures have proven informative and contributory to targeting but have largely remained subjective and inaccessible to non-Western and rural DBS centers with limited resources. This approach could automate the process and improve accuracy for neuro-motor mapping, to improve surgical targeting, optimize DBS therapy, provide accessible avenues for neuro-motor mapping and DBS implantation, and advance our understanding of the function of different brain areas.
APA, Harvard, Vancouver, ISO, and other styles
24

Hardin, Abigail, and Ingo Schlupp. "Using machine learning and DeepLabCut in animal behavior." acta ethologica, July 16, 2022. http://dx.doi.org/10.1007/s10211-022-00397-y.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Kirkpatrick, Nathan J., Robert J. Butera, and Young-Hui Chang. "DeepLabCut increases markerless tracking efficiency in X-Ray video analysis of rodent locomotion." Journal of Experimental Biology, August 11, 2022. http://dx.doi.org/10.1242/jeb.244540.

Full text
Abstract:
Despite the prevalence of rat models to study human disease and injury, existing methods for quantifying behavior through skeletal movements are problematic due to skin-movement inaccuracies associated with optical video analysis, or require invasive implanted markers or time-consuming manual rotoscoping for x-ray video approaches. We examined the use of a machine learning tool, DeepLabCut, to perform automated, markerless tracking in bi-planar x-ray videos of locomoting rats. Models were trained on 590 pairs of video frames to identify 19 unique skeletal landmarks of the pelvic limb. Accuracy, precision, and time savings were assessed. Machine-identified landmarks deviated from manually labeled counterparts by 2.4±0.2 mm (n=1,710 landmarks). DeepLabCut decreased analysis time by over three orders of magnitude (1,627x) compared to manual labeling. Distribution of these models may enable the processing of a large volume of accurate x-ray kinematic locomotion data in a fraction of the time without requiring surgically implanted markers.
APA, Harvard, Vancouver, ISO, and other styles
26

Josserand, Mathilde, Orsola Rosa-Salva, Elisabetta Versace, and Bastien S. Lemaire. "Visual Field Analysis: A reliable method to score left and right eye use using automated tracking." Behavior Research Methods, October 8, 2021. http://dx.doi.org/10.3758/s13428-021-01702-6.

Full text
Abstract:
Brain and behavioural asymmetries have been documented in various taxa. Many of these asymmetries involve preferential left and right eye use. However, measuring eye use through manual frame-by-frame analyses from video recordings is laborious and may lead to biases. Recent progress in technology has allowed the development of accurate tracking techniques for measuring animal behaviour. Amongst these techniques, DeepLabCut, a Python-based tracking toolbox using transfer learning with deep neural networks, offers the possibility to track different body parts with unprecedented accuracy. Exploiting the potentialities of DeepLabCut, we developed Visual Field Analysis, an additional open-source application for extracting eye use data. To our knowledge, this is the first application that can automatically quantify left–right preferences in eye use. Here we test the performance of our application in measuring preferential eye use in young domestic chicks. The comparison with manual scoring methods revealed a near perfect correlation in the measures of eye use obtained by Visual Field Analysis. With our application, eye use can be analysed reliably, objectively and at a fine scale in different experimental paradigms.
APA, Harvard, Vancouver, ISO, and other styles
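The geometric core of scoring left versus right eye use from tracked keypoints can be sketched in a few lines. This is an illustrative reconstruction, not the published Visual Field Analysis code: the function name, the choice of keypoints (head centre and beak tip), and the coordinate convention are assumptions.

```python
import numpy as np

def eye_in_use(head_center, beak_tip, stimulus):
    """Classify whether a stimulus falls in the left or right visual
    hemifield, given 2D tracked keypoints (e.g., from DeepLabCut).

    Sketch only: uses the sign of the 2D cross product between the
    head-direction vector and the head-to-stimulus vector. Which sign
    maps to "left" depends on the coordinate convention (here, y-up).
    """
    heading = np.asarray(beak_tip, float) - np.asarray(head_center, float)
    to_stim = np.asarray(stimulus, float) - np.asarray(head_center, float)
    cross = heading[0] * to_stim[1] - heading[1] * to_stim[0]
    if cross > 0:
        return "left"    # stimulus counter-clockwise of heading
    elif cross < 0:
        return "right"   # stimulus clockwise of heading
    return "binocular"   # stimulus on the midline

# Bird facing +x, stimulus above the midline -> left hemifield
print(eye_in_use((0, 0), (1, 0), (0.5, 1.0)))  # left
```

Accumulating these per-frame labels over a trial would give the left/right eye-use proportions that the application reports.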
27

Clemensson, Erik K. H., Morteza Abbaszadeh, Silvia Fanni, Elena Espa, and M. Angela Cenci. "Tracking Rats in Operant Conditioning Chambers Using a Versatile Homemade Video Camera and DeepLabCut." Journal of Visualized Experiments, no. 160 (June 15, 2020). http://dx.doi.org/10.3791/61409.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Solby, Hannah, Mia Radovanovic, and Jessica A. Sommerville. "A New Look at Infant Problem-Solving: Using DeepLabCut to Investigate Exploratory Problem-Solving Approaches." Frontiers in Psychology 12 (November 8, 2021). http://dx.doi.org/10.3389/fpsyg.2021.705108.

Full text
Abstract:
When confronted with novel problems, problem-solvers must decide whether to copy a modeled solution or to explore their own unique solutions. While past work has established that infants can learn to solve problems both through their own exploration and through imitation, little work has explored the factors that influence which of these approaches infants select to solve a given problem. Moreover, past work has treated imitation and exploration as qualitatively distinct, although these two possibilities may exist along a continuum. Here, we apply a program novel to developmental psychology (DeepLabCut) to archival data (Lucca et al., 2020) to investigate the influence of the effort and success of an adult’s modeled solution, and infants’ firsthand experience with failure, on infants’ imitative versus exploratory problem-solving approaches. Our results reveal that tendencies toward exploration are relatively immune to the information from the adult model, but that exploration generally increased in response to firsthand experience with failure. In addition, we found that increases in maximum force and decreases in trying time were associated with greater exploration, and that exploration subsequently predicted problem-solving success on a new iteration of the task. Thus, our results demonstrate that infants increase exploration in response to failure and that exploration may operate in a larger motivational framework with force, trying time, and expectations of task success.
APA, Harvard, Vancouver, ISO, and other styles
29

Miyama, Kazuki, Ryoma Bise, Satoshi Ikemura, et al. "Deep learning-based automatic-bone-destruction-evaluation system using contextual information from other joints." Arthritis Research & Therapy 24, no. 1 (2022). http://dx.doi.org/10.1186/s13075-022-02914-7.

Full text
Abstract:
Background X-ray images are commonly used to assess the bone destruction of rheumatoid arthritis. The purpose of this study is to propose an automatic-bone-destruction-evaluation system fully utilizing deep neural networks (DNN). This system detects all target joints of the modified Sharp/van der Heijde score (SHS) from a hand X-ray image. It then classifies every target joint as intact (SHS = 0) or non-intact (SHS ≥ 1). Methods We used 226 hand X-ray images of 40 rheumatoid arthritis patients. As for detection, we used a DNN model called DeepLabCut. As for classification, we built four classification models that classify the detected joint as intact or non-intact. The first model classifies each joint independently, whereas the second model does it while comparing the same contralateral joint. The third model compares the same joint group (e.g., the proximal interphalangeal joints) of one hand and the fourth model compares the same joint group of both hands. We evaluated DeepLabCut’s detection performance and classification models’ performances. The classification models’ performances were compared to three orthopedic surgeons. Results Detection rates for all the target joints were 98.0% and 97.3% for erosion and joint space narrowing (JSN). Among the four classification models, the model that compares the same contralateral joint showed the best F-measure (0.70, 0.81) and area under the curve of the precision-recall curve (PR-AUC) (0.73, 0.85) regarding erosion and JSN. As for erosion, the F-measure and PR-AUC of this model were better than the best of the orthopedic surgeons. Conclusions The proposed system was useful. All the target joints were detected with high accuracy. The classification model that compared the same contralateral joint showed better performance than the orthopedic surgeons regarding erosion.
APA, Harvard, Vancouver, ISO, and other styles
30

Wang, Jinxin, Paniz Karbasi, Liqiang Wang, and Julian P. Meeks. "A layered, hybrid machine learning analytic workflow for mouse risk assessment behavior." eneuro, December 23, 2022, ENEURO.0335–22.2022. http://dx.doi.org/10.1523/eneuro.0335-22.2022.

Full text
Abstract:
Accurate and efficient quantification of animal behavior facilitates the understanding of the brain. An emerging approach within the machine learning (ML) field is to combine multiple ML-based algorithms to quantify animal behavior. These so-called hybrid models have emerged because of limitations associated with supervised (e.g., random forest, RF) and unsupervised (e.g., hidden Markov model, HMM) ML models. For example, RF models lack temporal information across video frames, and HMM latent states are often difficult to interpret. We sought to develop a hybrid model, and did so in the context of a study of mouse risk assessment behavior. We utilized DeepLabCut to estimate the positions of mouse body parts. Positional features were calculated using DeepLabCut outputs and were used to train RF and HMM models with an equal number of states, separately. The per-frame predictions from RF and HMM models were then passed to a second HMM model layer ("reHMM"). The outputs of the reHMM layer showed improved interpretability over the initial HMM output. Finally, we combined predictions from RF and HMM models with selected positional features to train a third HMM model ("reHMM+"). This reHMM+ layered hybrid model unveiled distinctive temporal and human-interpretable behavioral patterns. We applied this workflow to investigate risk assessment to trimethylthiazoline and snake feces odor, finding unique behavioral patterns to each that were separable from attractive and neutral stimuli. We conclude that this layered, hybrid ML workflow represents a balanced approach for improving the depth and reliability of ML classifiers in chemosensory and other behavioral contexts. Significance Statement: In this study, we integrate two widely adopted machine learning (ML) models, the random forest and hidden Markov model, to develop a layered, hybrid ML-based workflow. Our workflow not only overcomes the intrinsic limitations of each model alone, but also improves the depth and reliability of ML models. Implementing this analytic workflow unveils distinctive and dynamic mouse behavioral patterns to chemosensory cues in the context of mouse risk assessment behavioral experiments. This study provides an efficient and interpretable analytic strategy for the quantification of animal behavior in diverse experimental settings.
APA, Harvard, Vancouver, ISO, and other styles
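The layered idea described above, feeding noisy per-frame classifier posteriors into a temporal re-decoding layer, can be sketched with a minimal "sticky" HMM Viterbi pass. This is a hedged illustration of the concept, not the authors' reHMM code: the uniform transition structure, the self-transition parameter, and the function name are assumptions.

```python
import numpy as np

def viterbi_smooth(frame_posteriors, self_transition=0.9):
    """Re-decode noisy per-frame class posteriors (e.g., from a random
    forest run on DeepLabCut positional features) with a sticky HMM.

    Assumptions: uniform initial state, a symmetric transition matrix
    with `self_transition` on the diagonal, and the per-frame
    posteriors treated as emission likelihoods.
    """
    T, K = frame_posteriors.shape
    trans = np.full((K, K), (1.0 - self_transition) / (K - 1))
    np.fill_diagonal(trans, self_transition)
    log_p = np.log(frame_posteriors + 1e-12)
    log_t = np.log(trans)
    score = np.full((T, K), -np.inf)
    back = np.zeros((T, K), dtype=int)
    score[0] = log_p[0] - np.log(K)
    for t in range(1, T):
        cand = score[t - 1][:, None] + log_t          # K x K candidates
        back[t] = np.argmax(cand, axis=0)             # best predecessor
        score[t] = cand[back[t], np.arange(K)] + log_p[t]
    states = np.zeros(T, dtype=int)
    states[-1] = int(np.argmax(score[-1]))
    for t in range(T - 2, -1, -1):                    # backtrack
        states[t] = back[t + 1, states[t + 1]]
    return states

# A single-frame 'blip' in the raw argmax path gets smoothed away:
post = np.array([[0.9, 0.1], [0.9, 0.1], [0.4, 0.6], [0.9, 0.1]])
print(viterbi_smooth(post))  # [0 0 0 0]
```

The sticky diagonal penalizes frame-to-frame state switches, which is why the isolated low-confidence frame is re-assigned to the surrounding behavior.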
31

Zdarsky, Niklas, Stefan Treue, and Moein Esghaei. "A Deep Learning-Based Approach to Video-Based Eye Tracking for Human Psychophysics." Frontiers in Human Neuroscience 15 (July 21, 2021). http://dx.doi.org/10.3389/fnhum.2021.685830.

Full text
Abstract:
Real-time gaze tracking provides crucial input to psychophysics studies and neuromarketing applications. Many of the modern eye-tracking solutions are expensive mainly due to the high-end processing hardware specialized for processing infrared-camera pictures. Here, we introduce a deep learning-based approach which uses the video frames of low-cost web cameras. Using DeepLabCut (DLC), an open-source toolbox for extracting points of interest from videos, we obtained facial landmarks critical to gaze location and estimated the point of gaze on a computer screen via a shallow neural network. Tested for three extreme poses, this architecture reached a median error of about one degree of visual angle. Our results contribute to the growing field of deep-learning approaches to eye-tracking, laying the foundation for further investigation by researchers in psychophysics or neuromarketing.
APA, Harvard, Vancouver, ISO, and other styles
32

Kim, Woo Seok, M. Ibrahim Khot, Hyun-Myung Woo, et al. "AI-enabled, implantable, multichannel wireless telemetry for photodynamic therapy." Nature Communications 13, no. 1 (2022). http://dx.doi.org/10.1038/s41467-022-29878-1.

Full text
Abstract:
Photodynamic therapy (PDT) offers several advantages for treating cancers, but its efficacy is highly dependent on light delivery to activate a photosensitizer. Advances in wireless technologies enable remote delivery of light to tumors, but suffer from key limitations, including low levels of tissue penetration and photosensitizer activation. Here, we introduce DeepLabCut (DLC)-informed low-power wireless telemetry with an integrated thermal/light simulation platform that overcomes the above constraints. The simulator produces an optimized combination of wavelengths and light sources, and DLC-assisted wireless telemetry uses the parameters from the simulator to enable adequate illumination of tumors through high-throughput (<20 mice) and multi-wavelength operation. Together, they establish a range of guidelines for effective PDT regimen design. In vivo Hypericin and Foscan mediated PDT, using cancer xenograft models, demonstrates substantial suppression of tumor growth, warranting further investigation in research and/or clinical settings.
APA, Harvard, Vancouver, ISO, and other styles
33

Moore, Dalton D., Jeffrey D. Walker, Jason N. MacLean, and Nicholas G. Hatsopoulos. "Validating marker-less pose estimation with 3D x-ray radiography." Journal of Experimental Biology, April 25, 2022. http://dx.doi.org/10.1242/jeb.243998.

Full text
Abstract:
To reveal the neurophysiological underpinnings of natural movement, neural recordings must be paired with accurate tracking of limbs and postures. Here we evaluate the accuracy of DeepLabCut (DLC), a deep learning marker-less motion capture approach, by comparing it to a 3D x-ray video radiography system that tracks markers placed under the skin (XROMM). We record behavioral data simultaneously with XROMM and RGB video as marmosets forage and reconstruct three-dimensional kinematics in a common coordinate system. We use Anipose to filter and triangulate DLC trajectories of 11 markers on the forelimb and torso and find a low median error (0.228 cm) between the two modalities corresponding to 2.0% of the range of motion. For studies allowing this relatively small error, DLC and similar marker-less pose estimation tools enable the study of increasingly naturalistic behaviors in many fields including non-human primate motor control.
APA, Harvard, Vancouver, ISO, and other styles
34

Winters, Carmen, Wim Gorssen, Victoria A. Ossorio-Salazar, Simon Nilsson, Sam Golden, and Rudi D’Hooge. "Automated procedure to assess pup retrieval in laboratory mice." Scientific Reports 12, no. 1 (2022). http://dx.doi.org/10.1038/s41598-022-05641-w.

Full text
Abstract:
All mammalian mothers form some sort of caring bond with their infants that is crucial to the development of their offspring. The Pup Retrieval Test (PRT) is the leading procedure to assess pup-directed maternal care in laboratory rodents, used in a wide range of basic and preclinical research applications. Most PRT protocols require manual scoring, which is prone to bias and spatial and temporal inaccuracies. This study proposes a novel procedure using machine learning algorithms to enable reliable assessment of PRT performance. Automated tracking of a dam and one pup was established in DeepLabCut and was combined with automated behavioral classification of “maternal approach”, “carrying” and “digging” in Simple Behavioral Analysis (SimBA). Our automated procedure estimated retrieval success with an accuracy of 86.7%, whereas accuracies of “approach”, “carry” and “digging” were estimated at respectively 99.3%, 98.6% and 85.0%. We provide an open-source, step-by-step protocol for automated PRT assessment, which aims to increase reproducibility and reliability, and can be easily shared and distributed.
APA, Harvard, Vancouver, ISO, and other styles
35

Newton, Kyle C., Dovi Kacev, Simon R. O. Nilsson, Allison L. Saettele, Sam A. Golden, and Lavinia Sheets. "Lateral line ablation by ototoxic compounds results in distinct rheotaxis profiles in larval zebrafish." Communications Biology 6, no. 1 (2023). http://dx.doi.org/10.1038/s42003-023-04449-2.

Full text
Abstract:
The zebrafish lateral line is an established model for hair cell organ damage, yet few studies link mechanistic disruptions to changes in biologically relevant behavior. We used larval zebrafish to determine how damage via ototoxic compounds impacts rheotaxis. Larvae were treated with CuSO4 or neomycin to disrupt lateral line function, then exposed to water flow stimuli. Their swimming behavior was recorded on video, then DeepLabCut and SimBA software were used to track movements and classify rheotaxis behavior, respectively. Lateral line-disrupted fish performed rheotaxis, but they swam greater distances, for shorter durations, and with greater angular variance than controls. Furthermore, spectral decomposition analyses confirmed that lesioned fish exhibited ototoxic compound-specific behavioral profiles with distinct changes in the magnitude, frequency, and cross-correlation between fluctuations in linear and angular movements. Our observations demonstrate that lateral line input is needed for fish to hold their station in flow efficiently and reveal that commonly used lesion methods have unique effects on rheotaxis behavior.
APA, Harvard, Vancouver, ISO, and other styles
36

Arvin, Simon, Rune Nguyen Rasmussen, and Keisuke Yonehara. "EyeLoop: An Open-Source System for High-Speed, Closed-Loop Eye-Tracking." Frontiers in Cellular Neuroscience 15 (December 9, 2021). http://dx.doi.org/10.3389/fncel.2021.779628.

Full text
Abstract:
Eye-trackers are widely used to study nervous system dynamics and neuropathology. Despite this broad utility, eye-tracking remains expensive, hardware-intensive, and proprietary, limiting its use to high-resource facilities. It also does not easily allow for real-time analysis and closed-loop design to link eye movements to neural activity. To address these issues, we developed an open-source eye-tracker – EyeLoop – that uses a highly efficient vectorized pupil detection method to provide uninterrupted tracking and fast online analysis with high accuracy on par with popular eye tracking modules, such as DeepLabCut. This Python-based software easily integrates custom functions using code modules, tracks a multitude of eyes, including in rodents, humans, and non-human primates, and operates at more than 1,000 frames per second on consumer-grade hardware. In this paper, we demonstrate EyeLoop’s utility in an open-loop experiment and in biomedical disease identification, two common applications of eye-tracking. With a remarkably low cost and minimum setup steps, EyeLoop makes high-speed eye-tracking widely accessible.
APA, Harvard, Vancouver, ISO, and other styles
37

Kane, Gary A., Gonçalo Lopes, Jonny L. Saunders, Alexander Mathis, and Mackenzie W. Mathis. "Real-time, low-latency closed-loop feedback using markerless posture tracking." eLife 9 (December 8, 2020). http://dx.doi.org/10.7554/elife.61909.

Full text
Abstract:
The ability to control a behavioral task or stimulate neural activity based on animal behavior in real time is an important tool for experimental neuroscientists. Ideally, such tools are noninvasive, low-latency, and provide interfaces to trigger external hardware based on posture. Recent advances in pose estimation with deep learning allow researchers to train deep neural networks to accurately quantify a wide variety of animal behaviors. Here, we provide a new DeepLabCut-Live! package that achieves low-latency real-time pose estimation (within 15 ms, >100 FPS), with an additional forward-prediction module that achieves zero-latency feedback, and a dynamic-cropping mode that allows for higher inference speeds. We also provide three options for using this tool with ease: (1) a stand-alone GUI (called DLC-Live! GUI), and integration into (2) Bonsai, and (3) AutoPilot. Lastly, we benchmarked performance on a wide range of systems so that experimentalists can easily decide what hardware is required for their needs.
APA, Harvard, Vancouver, ISO, and other styles
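The forward-prediction idea, extrapolating the pose slightly into the future so that feedback is triggered with effectively zero latency, can be illustrated with a constant-velocity sketch. This is not the package's actual module (whose prediction scheme may differ); the function name and array layout are assumptions.

```python
import numpy as np

def forward_predict(poses, timestamps, t_future):
    """Extrapolate keypoint positions to a future time to offset
    processing latency.

    Minimal constant-velocity sketch: `poses` is an
    (n_frames, n_keypoints, 2) array of recent pose estimates and
    `timestamps` the matching capture times in seconds.
    """
    poses = np.asarray(poses, float)
    # Velocity from the two most recent frames, per keypoint
    v = (poses[-1] - poses[-2]) / (timestamps[-1] - timestamps[-2])
    # Project the latest pose forward to the requested time
    return poses[-1] + v * (t_future - timestamps[-1])

# One keypoint moving +2 px/frame in x, predicted one frame ahead
poses = np.array([[[2.0, 0.0]], [[4.0, 0.0]]])
print(forward_predict(poses, [0.0, 1.0], 2.0))  # [[6. 0.]]
```

In a closed-loop setting, `t_future` would be the current time plus the measured inference latency, so the triggered stimulus aligns with where the animal is, not where it was.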
38

Goncharow, Paul N., and Shawn M. Beaudette. "Assessing Time-Varying Lumbar Flexion–Extension Kinematics Using Automated Pose Estimation." Journal of Applied Biomechanics, 2022, 1–6. http://dx.doi.org/10.1123/jab.2022-0041.

Full text
Abstract:
The purpose of this research was to evaluate the algorithm DeepLabCut (DLC) against a 3D motion capture system (Vicon Motion Systems Ltd) in the analysis of lumbar and elbow flexion–extension movements. Data were acquired concurrently and tracked using DLC and Vicon. A novel DLC model was trained using video data derived from a subset of participants (training group). Accuracy and precision were assessed using data derived from the training group as well as in a new set of participants (testing group). Two-way analyses of variance were used to detect significant differences between the training and testing sets, capture methods (Vicon vs DLC), as well as potential higher order interaction effects between these independent variables in the estimation of flexion–extension angles and variability. No significant differences were observed in any planar angles, nor were any higher order interactions observed between each motion capture modality with the training versus testing data sets. Bland–Altman plots were used to depict the mean bias and level of agreement between DLC and Vicon for both training and testing data sets. This research suggests that DLC-derived planar kinematics of both the elbow and lumbar spine are of acceptable accuracy and precision when compared with conventional laboratory gold standards (Vicon).
APA, Harvard, Vancouver, ISO, and other styles
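The Bland–Altman quantities used to compare DLC- with Vicon-derived angles are standard and simple to compute: the mean bias and the 95% limits of agreement of the paired differences. A minimal sketch with illustrative data (not the study's values):

```python
import numpy as np

def bland_altman(a, b):
    """Mean bias and 95% limits of agreement between two measurement
    methods (e.g., DLC- vs Vicon-derived flexion-extension angles).

    Standard Bland-Altman quantities; the sample data below are
    made up for illustration.
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)            # sample SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

dlc   = [10.2, 25.1, 40.3, 55.0]    # hypothetical angles, degrees
vicon = [10.0, 25.0, 40.0, 55.4]
bias, lo, hi = bland_altman(dlc, vicon)
print(round(bias, 3))  # 0.05
```

A bias near zero with narrow limits of agreement is what supports the paper's conclusion that the two modalities are interchangeable for planar angles.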
39

Needham, Laurie, Murray Evans, Darren P. Cosker, et al. "The accuracy of several pose estimation methods for 3D joint centre localisation." Scientific Reports 11, no. 1 (2021). http://dx.doi.org/10.1038/s41598-021-00212-x.

Full text
Abstract:
Human movement researchers are often restricted to laboratory environments and data capture techniques that are time and/or resource intensive. Markerless pose estimation algorithms show great potential to facilitate large scale movement studies ‘in the wild’, i.e., outside of the constraints imposed by marker-based motion capture. However, the accuracy of such algorithms has not yet been fully evaluated. We computed 3D joint centre locations using several pre-trained deep-learning based pose estimation methods (OpenPose, AlphaPose, DeepLabCut) and compared to marker-based motion capture. Participants performed walking, running and jumping activities while marker-based motion capture data and multi-camera high speed images (200 Hz) were captured. The pose estimation algorithms were applied to 2D image data and 3D joint centre locations were reconstructed. Pose estimation derived joint centres demonstrated systematic differences at the hip and knee (~ 30–50 mm), most likely due to mislabeling of ground truth data in the training datasets. Where systematic differences were lower, e.g., the ankle, differences of 1–15 mm were observed depending on the activity. Markerless motion capture represents a highly promising emerging technology that could free movement scientists from laboratory environments but 3D joint centre locations are not yet consistently comparable to marker-based motion capture.
APA, Harvard, Vancouver, ISO, and other styles
40

Weber, Rebecca Z., Geertje Mulders, Julia Kaiser, Christian Tackenberg, and Ruslan Rust. "Deep learning-based behavioral profiling of rodent stroke recovery." BMC Biology 20, no. 1 (2022). http://dx.doi.org/10.1186/s12915-022-01434-9.

Full text
Abstract:
Background Stroke research heavily relies on rodent behavior when assessing underlying disease mechanisms and treatment efficacy. Although functional motor recovery is considered the primary targeted outcome, tests in rodents are still poorly reproducible and often unsuitable for unraveling the complex behavior after injury. Results Here, we provide a comprehensive 3D gait analysis of mice after focal cerebral ischemia based on the new deep learning-based software (DeepLabCut, DLC) that only requires basic behavioral equipment. We demonstrate high-precision 3D tracking of 10 body parts (including all relevant joints and reference landmarks) in several mouse strains. Building on this rigorous motion tracking, a comprehensive post-analysis (with >100 parameters) unveils biologically relevant differences in locomotor profiles after a stroke over a time course of 3 weeks. We further refine the widely used ladder rung test using deep learning and compare its performance to human annotators. The generated DLC-assisted tests were then benchmarked against five widely used conventional behavioral set-ups (neurological scoring, rotarod, ladder rung walk, cylinder test, and single-pellet grasping) regarding sensitivity, accuracy, time use, and costs. Conclusions We conclude that deep learning-based motion tracking with comprehensive post-analysis provides accurate and sensitive data to describe the complex recovery of rodents following a stroke. The experimental set-up and analysis can also benefit a range of other neurological injuries that affect locomotion.
APA, Harvard, Vancouver, ISO, and other styles
41

Gorssen, Wim, Carmen Winters, Roel Meyermans, Rudi D’Hooge, Steven Janssens, and Nadine Buys. "Estimating genetics of body dimensions and activity levels in pigs using automated pose estimation." Scientific Reports 12, no. 1 (2022). http://dx.doi.org/10.1038/s41598-022-19721-4.

Full text
Abstract:
Pig breeding is changing rapidly due to technological progress and socio-ecological factors. New precision livestock farming technologies such as computer vision systems are crucial for automated phenotyping on a large scale for novel traits, as pigs’ robustness and behavior are gaining importance in breeding goals. However, individual identification, data processing and the availability of adequate (open source) software currently pose the main hurdles. The overall goal of this study was to expand pig weighing with automated measurements of body dimensions and activity levels using an automated video-analytic system: DeepLabCut. Furthermore, these data were coupled with pedigree information to estimate genetic parameters for breeding programs. We analyzed 7428 recordings over the fattening period of 1556 finishing pigs (Piétrain sire × crossbred dam) with two-week intervals between recordings on the same pig. We were able to accurately estimate relevant body parts with an average tracking error of 3.3 cm. Body metrics extracted from video images were highly heritable (61–74%) and significantly genetically correlated with average daily gain (rg = 0.81–0.92). Activity traits were low to moderately heritable (22–35%) and showed low genetic correlations with production traits and physical abnormalities. We demonstrated a simple and cost-efficient method to extract body dimension parameters and activity traits. These traits were estimated to be heritable, and hence, can be selected on. These findings are valuable for (pig) breeding organizations, as they offer a method to automatically phenotype new production and behavioral traits on an individual level.
APA, Harvard, Vancouver, ISO, and other styles
42

Skovgård, Katrine, Sebastian A. Barrientos, Per Petersson, Pär Halje, and M. Angela Cenci. "Distinctive Effects of D1 and D2 Receptor Agonists on Cortico-Basal Ganglia Oscillations in a Rodent Model of L-DOPA-Induced Dyskinesia." Neurotherapeutics, November 7, 2022. http://dx.doi.org/10.1007/s13311-022-01309-5.

Full text
Abstract:
L-DOPA-induced dyskinesia (LID) in Parkinson’s disease has been linked to oscillatory neuronal activities in the cortico-basal ganglia network. We set out to examine the pattern of cortico-basal ganglia oscillations induced by selective agonists of D1 and D2 receptors in a rat model of LID. Local field potentials were recorded in freely moving rats using large-scale electrodes targeting three motor cortical regions, dorsomedial and dorsolateral striatum, external globus pallidus, and substantia nigra pars reticulata. Abnormal involuntary movements were elicited by the D1 agonist SKF82958 or the D2 agonist sumanirole, while overall motor activity was quantified using video analysis (DeepLabCut). Both SKF82958 and sumanirole induced dyskinesia, although with significant differences in temporal course, overall severity, and body distribution. The D1 agonist induced prominent narrowband oscillations in the high gamma range (70–110 Hz) in all recorded structures except for the nigra reticulata. Additionally, the D1 agonist induced strong functional connectivity between the recorded structures, and the phase analysis revealed that the primary motor cortex (forelimb area) was leading the supplementary motor area and striatum. Following treatment with the D2 agonist, narrowband gamma oscillations were detected only in the forelimb motor cortex and dorsolateral striatum, while prominent oscillations in the theta band occurred in the globus pallidus and nigra reticulata. Our results reveal that the dyskinetic effects of D1 and D2 receptor agonists are associated with distinct patterns of cortico-basal ganglia oscillations, suggesting a recruitment of partially distinct networks.
APA, Harvard, Vancouver, ISO, and other styles
43

Wittek, Neslihan, Kevin Wittek, Christopher Keibel, and Onur Güntürkün. "Supervised machine learning aided behavior classification in pigeons." Behavior Research Methods, June 14, 2022. http://dx.doi.org/10.3758/s13428-022-01881-w.

Full text
Abstract:
Manual behavioral observations have been applied in both field and laboratory experiments in order to analyze and quantify animal movement and behavior. Although these observations contributed tremendously to ecological and neuroscientific disciplines, there have been challenges and disadvantages following in their footsteps. They are not only time-consuming, labor-intensive, and error-prone but they can also be subjective, which induces further difficulties in reproducing the results. Therefore, there is an ongoing endeavor towards automated behavioral analysis, which has also paved the way for open-source software approaches. Even though these approaches theoretically can be applied to different animal groups, the current applications are mostly focused on mammals, especially rodents. However, extending those applications to other vertebrates, such as birds, is advisable not only for extending species-specific knowledge but also for contributing to the larger evolutionary picture and the role of behavior within. Here we present an open-source software package as a possible initiation of bird behavior classification. It can analyze pose-estimation data generated by established deep-learning-based pose-estimation tools such as DeepLabCut for building supervised machine learning predictive classifiers for pigeon behaviors, which can be broadened to support other bird species as well. We show that by training different machine learning and deep learning architectures using multivariate time series data as input, an F1 score of 0.874 can be achieved for a set of seven distinct behaviors. In addition, an algorithm for further tuning the bias of the predictions towards either precision or recall is introduced, which allows tailoring the classifier to specific needs.
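The pipeline described above, pose-estimation output turned into windowed multivariate time series and fed to a supervised classifier scored with F1, can be sketched as follows. This is an illustrative sketch, not the authors' code: the synthetic data, window length, seven-class labeling, and the random-forest classifier are all assumptions.

```python
# Sketch: classify behaviors from pose time series (synthetic stand-in data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for DeepLabCut output: (frames, keypoints * 2 coordinates)
n_frames, n_features, window = 2000, 8, 10
poses = rng.normal(size=(n_frames, n_features))
labels = rng.integers(0, 7, size=n_frames)  # seven behavior classes

# Sliding windows turn each frame's temporal context into one feature vector
X = np.stack([poses[i:i + window].ravel()
              for i in range(n_frames - window)])
y = labels[window // 2: n_frames - window + window // 2]  # center-frame label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)
score = f1_score(y_te, clf.predict(X_te), average="macro")
print(f"macro F1: {score:.3f}")
```

With real pose data and behaviors the paper reports an F1 of 0.874; on the random data above the score is near chance, which is the expected sanity check.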
APA, Harvard, Vancouver, ISO, and other styles
44

Tien, Rex N., Anand Tekriwal, Dylan J. Calame, et al. "Deep learning based markerless motion tracking as a clinical tool for movement disorders: Utility, feasibility and early experience." Frontiers in Signal Processing 2 (September 29, 2022). http://dx.doi.org/10.3389/frsip.2022.884384.

Full text
Abstract:
Clinical assessments of movement disorders currently rely on the administration of rating scales, which, while clinimetrically validated and reliable, rely on clinicians’ subjective analyses, resulting in interrater differences. Intraoperative microelectrode recording for deep brain stimulation targeting similarly relies on clinicians’ subjective evaluations of movement-related neural activity. Digital motion tracking can improve the diagnosis, assessment, and treatment of movement disorders by generating objective, standardized measures of patients’ kinematics. Motion tracking with concurrent neural recording also enables motor neuroscience studies to elucidate the neurophysiology underlying movements. Despite these promises, motion tracking has seen limited adoption in clinical settings due to the drawbacks of conventional motion tracking systems and practical limitations associated with clinical settings. However, recent advances in deep learning based computer vision algorithms have made accurate, robust markerless motion tracking viable in any setting where digital video can be captured. Here, we review and discuss the potential clinical applications and technical limitations of deep learning based markerless motion tracking methods with a focus on DeepLabCut (DLC), an open-source software package that has been extensively applied in animal neuroscience research. We first provide a general overview of DLC, discuss its present usage, and describe the advantages that DLC confers over other motion tracking methods for clinical use. We then present our preliminary results from three ongoing studies that demonstrate the use of DLC for 1) movement disorder patient assessment and diagnosis, 2) intraoperative motor mapping for deep brain stimulation targeting and 3) intraoperative neural and kinematic recording for basic human motor neuroscience.
APA, Harvard, Vancouver, ISO, and other styles
45

Zanon, Mirko, Bastien S. Lemaire, and Giorgio Vallortigara. "Steps towards a computational ethology: an automatized, interactive setup to investigate filial imprinting and biological predispositions." Biological Cybernetics, July 17, 2021. http://dx.doi.org/10.1007/s00422-021-00886-6.

Full text
Abstract:
Soon after hatching, the young of precocial species, such as domestic chicks or ducklings, learn to recognize their social partner by simply being exposed to it (imprinting process). Even artificial objects or stimuli displayed on monitor screens can effectively trigger filial imprinting, though learning is canalized by spontaneous preferences for animacy signals, such as certain kinds of motion or a face-like appearance. Imprinting is used as a behavioural paradigm for studies on memory formation, early learning and predispositions, as well as number and space cognition, and brain asymmetries. Here, we present an automatized setup to expose and/or test animals for a variety of imprinting experiments. The setup consists of a cage with two high-frequency screens at the opposite ends where stimuli are shown. Provided with a camera covering the whole space of the cage, the behaviour of the animal is recorded continuously. A graphical user interface implemented in Matlab allows a custom configuration of the experimental protocol, which, together with Psychtoolbox, drives the presentation of images on the screens, with accurate time scheduling and a highly precise framerate. The setup can be implemented into a complete workflow to analyse behaviour in a fully automatized way by combining Matlab (and Psychtoolbox) to control the monitor screens and stimuli, DeepLabCut to track animals’ behaviour, and Python (and R) to extract data and perform statistical analyses. The automated setup allows neuro-behavioural scientists to perform standardized protocols during their experiments, with faster data collection and analyses, and reproducible results.
APA, Harvard, Vancouver, ISO, and other styles
46

Lonini, Luca, Yaejin Moon, Kyle Embry, et al. "Video-Based Pose Estimation for Gait Analysis in Stroke Survivors during Clinical Assessments: A Proof-of-Concept Study." Digital Biomarkers, January 13, 2022, 9–18. http://dx.doi.org/10.1159/000520732.

Full text
Abstract:
Recent advancements in deep learning have produced significant progress in markerless human pose estimation, making it possible to estimate human kinematics from single camera videos without the need for reflective markers and specialized labs equipped with motion capture systems. Such algorithms have the potential to enable the quantification of clinical metrics from videos recorded with a handheld camera. Here we used DeepLabCut, an open-source framework for markerless pose estimation, to fine-tune a deep network to track 5 body keypoints (hip, knee, ankle, heel, and toe) in 82 below-waist videos of 8 patients with stroke performing overground walking during clinical assessments. We trained the pose estimation model by labeling the keypoints in 2 frames per video and then trained a convolutional neural network to estimate 5 clinically relevant gait parameters (cadence, double support time, swing time, stance time, and walking speed) from the trajectory of these keypoints. These results were then compared to those obtained from a clinical system for gait analysis (GAITRite®, CIR Systems). Absolute accuracy (mean error) and precision (standard deviation of error) for swing, stance, and double support time were within 0.04 ± 0.11 s; Pearson’s correlation with the reference system was moderate for swing times (r = 0.4–0.66), but stronger for stance and double support time (r = 0.93–0.95). Cadence mean error was −0.25 steps/min ± 3.9 steps/min (r = 0.97), while walking speed mean error was −0.02 ± 0.11 m/s (r = 0.92). These preliminary results suggest that single camera videos and pose estimation models based on deep networks could be used to quantify clinically relevant gait metrics in individuals poststroke, even while using assistive devices in uncontrolled environments. Such development opens the door to applications for gait analysis both inside and outside of clinical settings, without the need of sophisticated equipment.
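The gait parameters listed above are derived from keypoint trajectories; the sketch below shows one such derivation, cadence from the heel keypoint, under assumptions not taken from the paper (a synthetic sinusoidal heel signal, a 30 fps frame rate, and heel strikes detected as local minima of heel height).

```python
# Sketch: estimate cadence (steps/min) from a heel-keypoint trajectory.
import numpy as np
from scipy.signal import find_peaks

fps = 30.0
t = np.arange(0, 10, 1 / fps)           # 10 s of video at 30 fps
heel_y = np.sin(2 * np.pi * 1.0 * t)    # fake heel height, 1 step per second

# Heel strikes approximated as local minima of the heel's vertical position;
# `distance` enforces a minimum 0.4 s gap between detected strikes.
strikes, _ = find_peaks(-heel_y, distance=int(0.4 * fps))
cadence = len(strikes) / (t[-1] - t[0]) * 60.0
print(f"cadence: {cadence:.1f} steps/min")
```

Stance, swing, and double-support times follow the same pattern, using toe-off events from the toe keypoint alongside heel strikes.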
APA, Harvard, Vancouver, ISO, and other styles
47

Doornweerd, Jan Erik, Gert Kootstra, Roel F. Veerkamp, et al. "Across-Species Pose Estimation in Poultry Based on Images Using Deep Learning." Frontiers in Animal Science 2 (December 15, 2021). http://dx.doi.org/10.3389/fanim.2021.791290.

Full text
Abstract:
Animal pose-estimation networks enable automated estimation of key body points in images or videos. This enables animal breeders to collect pose information repeatedly on a large number of animals. However, the success of pose-estimation networks depends in part on the availability of data to learn the representation of key body points. Especially with animals, data collection is not always easy, and data annotation is laborious and time-consuming. The available data is therefore often limited, but data from other species might be useful, either by itself or in combination with the target species. In this study, the across-species performance of animal pose-estimation networks and the performance of an animal pose-estimation network trained on multi-species data (turkeys and broilers) were investigated. Broilers and turkeys were video recorded during a walkway test representative of the situation in practice. Two single-species and one multi-species model were trained by using DeepLabCut and tested on two single-species test sets. Overall, the within-species models outperformed the multi-species model and the models applied across species, as shown by a lower raw pixel error, normalized pixel error, and higher percentage of keypoints remaining (PKR). The multi-species model had slightly higher errors with a lower PKR than the within-species models but had less than half the number of annotated frames available from each species. Compared to the single-species broiler model, the multi-species model achieved lower errors for the head, left foot, and right knee keypoints, although with a lower PKR. Across species, keypoint predictions resulted in high errors and low to moderate PKRs and are unlikely to be of direct use for pose and gait assessments. A multi-species model may reduce annotation needs without a large impact on performance for pose assessment, however, with the recommendation to only be used if the species are comparable. If a single-species model exists, it could be used as a pre-trained model for training a new model, possibly requiring only a limited amount of new data. Future studies should investigate the accuracy needed for pose and gait assessments and estimate genetic parameters for the new phenotypes before pose-estimation networks can be applied in practice.
APA, Harvard, Vancouver, ISO, and other styles
48

Lecomte, Charly G., Johannie Audet, Jonathan Harnie, and Alain Frigon. "A Validation of Supervised Deep Learning for Gait Analysis in the Cat." Frontiers in Neuroinformatics 15 (August 19, 2021). http://dx.doi.org/10.3389/fninf.2021.712623.

Full text
Abstract:
Gait analysis in cats and other animals is generally performed with custom-made or commercially developed software to track reflective markers placed on bony landmarks. This often involves costly motion tracking systems. However, deep learning, and in particular DeepLabCut™ (DLC), allows motion tracking without requiring placement of reflective markers or an expensive system. The purpose of this study was to validate the accuracy of DLC for gait analysis in the adult cat by comparing results obtained with DLC and a custom-made software (Expresso) that has been used in several cat studies. Four intact adult cats performed tied-belt (both belts at the same speed) and split-belt (belts operating at different speeds) locomotion at different speeds and left-right speed differences on a split-belt treadmill. We calculated several kinematic variables, such as step/stride lengths and joint angles, from the estimates made by the two software packages and assessed the agreement between the two measurements using the intraclass correlation coefficient or Lin’s concordance correlation coefficient, as well as Pearson’s correlation coefficient. The results showed that DLC is at least as precise as Expresso, with good to excellent agreement for all variables. Indeed, all 12 variables showed an agreement above 0.75, considered good, while nine showed an agreement above 0.9, considered excellent. Therefore, deep learning, specifically DLC, is valid for measuring kinematic variables during locomotion in cats, without requiring reflective markers and using a relatively low-cost system.
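One of the agreement measures used in this validation, Lin's concordance correlation coefficient, can be computed from first principles. The sketch below is not the authors' implementation, and the sample values are invented for illustration.

```python
# Sketch: Lin's concordance correlation coefficient (CCC) between two
# measurements of the same kinematic variable.
import numpy as np

def lins_ccc(x, y):
    """CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    cov = np.mean((x - x.mean()) * (y - y.mean()))
    return 2 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

# Hypothetical step lengths (cm) from the two software packages
dlc = np.array([10.1, 12.3, 9.8, 11.5, 10.9])
ref = np.array([10.0, 12.5, 9.9, 11.2, 11.0])
print(f"CCC: {lins_ccc(dlc, ref):.3f}")
```

Unlike Pearson's r, the CCC penalizes systematic bias between the two methods (the mean-difference term in the denominator), which is why it is preferred for agreement rather than mere correlation.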
APA, Harvard, Vancouver, ISO, and other styles