Journal articles on the topic 'Condat algorithm'

Consult the top 20 journal articles for your research on the topic 'Condat algorithm.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of each publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

El Gueddari, Loubna, Chaithya Giliyar Radhakrishna, Emilie Chouzenoux, and Philippe Ciuciu. "Calibration-Less Multi-Coil Compressed Sensing Magnetic Resonance Image Reconstruction Based on OSCAR Regularization." Journal of Imaging 7, no. 3 (2021): 58. http://dx.doi.org/10.3390/jimaging7030058.

Abstract:
Over the last decade, the combination of compressed sensing (CS) with acquisition over multiple receiver coils in magnetic resonance imaging (MRI) has allowed the emergence of faster scans while maintaining a good signal-to-noise ratio (SNR). Self-calibrating techniques, such as ESPIRiT, have become the standard approach to estimating the coil sensitivity maps prior to the reconstruction stage. In this work, we proceed differently and introduce a new calibration-less multi-coil CS reconstruction method. Calibration-less techniques no longer require the prior extraction of sensitivity maps to perform multi-coil image reconstruction but usually alternate between sensitivity map estimation and image reconstruction. Here, to get rid of the nonconvexity of the latter approach, we reconstruct as many MR images as there are coils. To compensate for the ill-posedness of this inverse problem, we leverage the structured sparsity of the multi-coil images in a wavelet transform domain while adapting to variations in SNR across coils thanks to OSCAR (octagonal shrinkage and clustering algorithm for regression) regularization. Coil-specific complex-valued MR images are thus obtained by minimizing a convex but nonsmooth objective function using the proximal primal-dual Condat-Vù algorithm. Comparison and validation on retrospective Cartesian and non-Cartesian studies based on the Brain fastMRI data set demonstrate that the proposed reconstruction method significantly outperforms the state of the art (ℓ1-ESPIRiT, calibration-less AC-LORAKS, and CaLM methods) on magnitude images for the T1 and FLAIR contrasts. Additionally, further validation performed on 8- to 20-fold prospectively accelerated high-resolution ex vivo human brain MRI data collected at 7 Tesla confirms the retrospective results. Overall, OSCAR-based regularization preserves phase information more accurately (both visually and quantitatively) than other approaches, an asset that can only be assessed in real prospective experiments.
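
For readers unfamiliar with the algorithm named in this abstract: Condat-Vù is a primal-dual splitting scheme for minimizing objectives of the form f(x) + g(x) + h(Lx), where f is smooth with a β-Lipschitz gradient, g and h admit cheap proximal operators, and L is a linear operator. Below is a minimal sketch of the generic iteration; all callables are caller-supplied placeholders, and this illustrates the textbook scheme, not the authors' reconstruction code.

```python
# Generic Condat-Vu iteration for min_x f(x) + g(x) + h(Lx).
# Convergence requires 1/tau - sigma * ||L||^2 >= beta / 2, where beta is
# the Lipschitz constant of grad f. prox_h_conj is the proximal operator of
# the convex conjugate h*, obtainable from prox_h via Moreau's identity.
def condat_vu(x0, y0, grad_f, prox_g, prox_h_conj, L, L_adj, tau, sigma,
              n_iter=200):
    x, y = x0.copy(), y0.copy()
    for _ in range(n_iter):
        # primal update: gradient step on f, proximal step on g
        x_new = prox_g(x - tau * (grad_f(x) + L_adj(y)), tau)
        # dual update: proximal step on h* at an extrapolated primal point
        y = prox_h_conj(y + sigma * L(2.0 * x_new - x), sigma)
        x = x_new
    return x
```

In the paper's setting, f would be the multi-coil data-fidelity term and h∘L the OSCAR penalty composed with the wavelet transform.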
2

Lo, Chihsiung, and P. Y. Papalambros. "A Convex Cutting Plane Algorithm for Global Solution of Generalized Polynomial Optimal Design Models." Journal of Mechanical Design 118, no. 1 (1996): 82–88. http://dx.doi.org/10.1115/1.2826861.

Abstract:
Global optimization algorithms for generalized polynomial design models using a global feasible search approach were discussed in a previous article. A new convex cutting plane algorithm (CONCUT), based on global feasible search and with improved performance, is presented in this sequel article. Computational results for the CONCUT algorithm compared to one using linear cuts (LINCUT) are given for various test problems. A speed reducer design example illustrates the application of the algorithms.
3

Cheng, J., and M. J. Druzdzel. "AIS-BN: An Adaptive Importance Sampling Algorithm for Evidential Reasoning in Large Bayesian Networks." Journal of Artificial Intelligence Research 13 (October 1, 2000): 155–88. http://dx.doi.org/10.1613/jair.764.

Abstract:
Stochastic sampling algorithms, while an attractive alternative to exact algorithms in very large Bayesian network models, have been observed to perform poorly in evidential reasoning with extremely unlikely evidence. To address this problem, we propose an adaptive importance sampling algorithm, AIS-BN, that shows promising convergence rates even under extreme conditions and seems to outperform the existing sampling algorithms consistently. Three sources of this performance improvement are (1) two heuristics for initialization of the importance function that are based on the theoretical properties of importance sampling in finite-dimensional integrals and the structural advantages of Bayesian networks, (2) a smooth learning method for the importance function, and (3) a dynamic weighting function for combining samples from different stages of the algorithm. We tested the performance of the AIS-BN algorithm along with two state-of-the-art, general-purpose sampling algorithms: likelihood weighting (Fung & Chang, 1989; Shachter & Peot, 1989) and self-importance sampling (Shachter & Peot, 1989). We used in our tests three large real Bayesian network models available to the scientific community: the CPCS network (Pradhan et al., 1994), the PathFinder network (Heckerman, Horvitz, & Nathwani, 1990), and the ANDES network (Conati, Gertner, VanLehn, & Druzdzel, 1997), with evidence as unlikely as 10^-41. While the AIS-BN algorithm always performed better than the other two algorithms, in the majority of the test cases it achieved orders of magnitude improvement in precision of the results. Improvement in speed given a desired precision is even more dramatic, although we are unable to report numerical results here, as the other algorithms almost never achieved the precision reached even by the first few iterations of the AIS-BN algorithm.
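
All three samplers compared above share the same basic importance-sampling estimator: draw samples of the non-evidence variables from an importance function Q and average the weights P(x, E)/Q(x). The sketch below shows only this shared estimator, with hypothetical callables; the adaptive learning of Q, which is AIS-BN's actual contribution, is not reproduced here.

```python
# Importance-sampling estimate of the evidence probability
# P(E) = E_Q[ P(X, E) / Q(X) ]. The three callables are hypothetical
# stand-ins for a Bayesian-network implementation.
def estimate_evidence_probability(sample_from_q, joint_with_evidence,
                                  q_density, n_samples=10_000):
    total = 0.0
    for _ in range(n_samples):
        x = sample_from_q()                             # instantiate non-evidence nodes
        total += joint_with_evidence(x) / q_density(x)  # importance weight
    return total / n_samples
```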
4

Guo, Chungu, Liangwei Yang, Xiao Chen, Duanbing Chen, Hui Gao, and Jing Ma. "Influential Nodes Identification in Complex Networks via Information Entropy." Entropy 22, no. 2 (2020): 242. http://dx.doi.org/10.3390/e22020242.

Abstract:
Identifying a set of influential nodes is an important topic in complex networks and plays a crucial role in many applications, such as market advertising, rumor control, and predicting valuable scientific publications. In this regard, researchers have developed algorithms ranging from simple degree methods to all kinds of sophisticated approaches. However, a more robust and practical algorithm is required for the task. In this paper, we propose the EnRenew algorithm, which aims to identify a set of influential nodes via information entropy. Firstly, the information entropy of each node is calculated as its initial spreading ability. Then, the node with the largest information entropy is selected, the spreading ability of its l-length reachable nodes is renewed by an attenuation factor, and this process is repeated until a specified number of influential nodes has been selected. Compared with the best state-of-the-art benchmark methods, the performance of the proposed algorithm improved by 21.1%, 7.0%, 30.0%, 5.0%, 2.5%, and 9.0% in final affected scale on the CEnew, Email, Hamster, Router, Condmat, and Amazon networks, respectively, under the Susceptible-Infected-Recovered (SIR) simulation model. The proposed algorithm measures the importance of nodes based on information entropy and selects a group of important nodes through a dynamic update strategy. The impressive results under the SIR simulation model shed light on a new method of node mining in complex networks for information spreading and epidemic prevention.
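
A compact sketch of the select-and-renew loop the abstract describes, under assumed formulas: the initial spreading ability is taken as the entropy of a node's neighbor-degree distribution, and renewal multiplies by the attenuation factor once per hop. The paper's exact definitions may differ; networkx is used for graph handling.

```python
# Greedy entropy-based seed selection with attenuation of nearby nodes,
# as a rough illustration of the EnRenew idea (not the authors' code).
import math
import networkx as nx

def select_influential(G, k, l=2, attenuation=0.5):
    # initial "spreading ability": entropy over each node's neighbor degrees
    deg_sum = sum(d for _, d in G.degree())
    ability = {}
    for u in G:
        ps = [G.degree(v) / deg_sum for v in G[u]]
        ability[u] = -sum(p * math.log(p) for p in ps if p > 0)
    seeds = []
    for _ in range(k):
        u = max(ability, key=ability.get)   # node with largest entropy
        seeds.append(u)
        ability[u] = float("-inf")          # never pick it again
        # renew (attenuate) the ability of nodes reachable within l hops
        hops = nx.single_source_shortest_path_length(G, u, cutoff=l)
        for v, dist in hops.items():
            if v != u and ability[v] != float("-inf"):
                ability[v] *= attenuation ** dist
    return seeds
```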
5

Lan, Divon, Raymond Tobler, Yassine Souilmi, and Bastien Llamas. "genozip: a fast and efficient compression tool for VCF files." Bioinformatics 36, no. 13 (2020): 4091–92. http://dx.doi.org/10.1093/bioinformatics/btaa290.

Abstract:
Motivation: genozip is a new lossless compression tool for Variant Call Format (VCF) files. By applying field-specific algorithms and fully utilizing the available computational hardware, genozip achieves the highest compression ratios amongst existing lossless compression tools known to the authors, at speeds comparable with the fastest multi-threaded compressors. Availability and implementation: genozip is freely available to non-commercial users. It can be installed via conda-forge, Docker Hub, or downloaded from github.com/divonlan/genozip. Supplementary information: Supplementary data are available at Bioinformatics online.
6

Cha, Ho-Young, Han Sik Ryu, Juhwan Choi, and Jin Hwan Choi. "58326 A Study on the Stick and Slip Algorithm in Contact Problems of Multibody System Dynamics (Contact, Impact, and Friction)." Proceedings of the Asian Conference on Multibody Dynamics 2010.5 (2010): 58326-1–58326-8. http://dx.doi.org/10.1299/jsmeacmd.2010.5._58326-1_.

7

Hofer, Birgit, and Franz Wotawa. "Combining Slicing and Constraint Solving for Better Debugging: The CONBAS Approach." Advances in Software Engineering 2012 (December 31, 2012): 1–18. http://dx.doi.org/10.1155/2012/628571.

Abstract:
Although slices provide a good basis for analyzing programs during debugging, they lack the capability to provide precise information regarding the most likely root causes of faults. Hence, a lot of work is left to the programmer during fault localization. In this paper, we present an approach that combines an advanced dynamic slicing method with constraint solving in order to reduce the number of delivered fault candidates. The approach is called Constraints Based Slicing (CONBAS). The idea behind CONBAS is to convert an execution trace of a failing test case into its constraint representation and to check whether it is possible to find values for all variables in the execution trace so that there is no contradiction with the test case. For doing so, we make use of the correctness and incorrectness assumptions behind a diagnosis and the given failing test case. Besides the theoretical foundations and the algorithm, we present empirical results and discuss future research. The obtained empirical results indicate an improvement of about 28% for the single-fault case and 50% for the double-fault case compared to dynamic slicing approaches.
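
The consistency check at the heart of this idea can be illustrated with an off-the-shelf constraint solver: encode each statement of the failing execution trace as a constraint, add the test's expected output, and unsatisfiability proves that the assumed-correct statements cannot all be right. The toy program and the use of the z3 solver below are illustrative stand-ins, not the paper's implementation.

```python
# Toy trace: x = 2; y = x * 2; z = y + 2, with the test expecting z == 8.
# If the constraint set is unsatisfiable, at least one statement assumed
# correct must be faulty, so those statements stay in the candidate set.
from z3 import Ints, Solver, unsat

x, y, z = Ints("x y z")
s = Solver()
s.add(x == 2)        # failing test's input
s.add(y == x * 2)    # statement 1, assumed correct
s.add(z == y + 2)    # statement 2, assumed correct
s.add(z == 8)        # expected output from the failing test case
print("contradiction: a fault lies among the assumed statements"
      if s.check() == unsat else "trace is consistent with the test")
```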
8

Lee, Jongil, Su Woong Lee, Bo Ram Cho, Ki Hoon Kwon, Hyun Min Oh, and Min Young Kim. "Contact Position Estimation Algorithm using Image-based Areal Touch Sensor based on Artificial Neural Network Prediction." Journal of the Institute of Industrial Applications Engineers 6, no. 2 (2018): 100–106. http://dx.doi.org/10.12792/jiiae.6.100.

9

Kubrak, Anatoliy, Lesya Ladieva, Anatoliy Burban, and Roman Dubik. "Experimental method of contact membrane distillation process research." Chemistry & Chemical Technology 2, no. 2 (2008): 153–56. http://dx.doi.org/10.23939/chcht02.02.153.

Abstract:
A laboratory setup for research on the contact membrane distillation (CMD) process is described. A non-standard algorithm is offered for determining the dynamic characteristics of a channel that are not directly accessible to measurement from its transient characteristics; the channel can be represented as a chain of series-connected elements, as well as by the separate elements of this chain.
10

Peterjohn, Bruce G. "Some Considerations on the Use of Ecological Models to Predict Species' Geographic Distributions." Condor 103, no. 3 (2001): 661–63. http://dx.doi.org/10.1093/condor/103.3.661.

Abstract:
Peterson (2001) used Genetic Algorithm for Rule-set Prediction (GARP) models to predict distribution patterns from Breeding Bird Survey (BBS) data. Evaluations of these models should consider inherent limitations of BBS data: (1) BBS methods may not sample species and habitats equally; (2) using BBS data for both model development and testing may overlook poor fit of some models; and (3) BBS data may not provide the desired spatial resolution or capture temporal changes in species distributions. The predictive value of GARP models requires additional study, especially comparisons with distribution patterns from independent data sets. When employed at appropriate temporal and geographic scales, GARP models show considerable promise for conservation biology applications but provide limited inferences concerning processes responsible for the observed patterns.
11

Lan, Divon, Ray Tobler, Yassine Souilmi, and Bastien Llamas. "Genozip: a universal extensible genomic data compressor." Bioinformatics 37, no. 16 (2021): 2225–30. http://dx.doi.org/10.1093/bioinformatics/btab102.

Abstract:
We present Genozip, a universal and fully featured compression software for genomic data. Genozip is designed to be a general-purpose software and a development framework for genomic compression, providing five core capabilities: universality (support for all common genomic file formats), high compression ratios, speed, feature-richness, and extensibility. Genozip delivers high-performance compression for widely used genomic data formats in genomics research, namely FASTQ, SAM/BAM/CRAM, VCF, GVF, FASTA, PHYLIP, and 23andMe formats. Our test results show that Genozip is fast and achieves greatly improved compression ratios, even when the files are already compressed. Further, Genozip is architected with a separation of the Genozip Framework from file-format-specific Segmenters and data-type-specific Codecs. With this, we intend for Genozip to be a general-purpose compression platform where researchers can implement compression for additional file formats, as well as new codecs for data types or fields within files, in the future. We anticipate that this will ultimately increase the visibility and adoption of these algorithms by the user community, thereby accelerating further innovation in this space. Availability and implementation: Genozip is written in C. The code is open-source and available at http://www.genozip.com. The package is free for non-commercial use. It is distributed through the Conda package manager, GitHub, and as a Docker container on Docker Hub. Genozip is tested on Linux, Mac, and Windows. Supplementary information: Supplementary data are available at Bioinformatics online.
12

Peterson, A. Townsend. "Predicting Species' Geographic Distributions Based on Ecological Niche Modeling." Condor 103, no. 3 (2001): 599–605. http://dx.doi.org/10.1093/condor/103.3.599.

Abstract:
Recent developments in geographic information systems and their application to conservation biology open doors to exciting new synthetic analyses. Exploration of these possibilities, however, is limited by the quality of information available: most biodiversity data are incomplete and characterized by biased sampling. Inferential procedures that provide robust and reliable predictions of species' geographic distributions thus become critical to biodiversity analyses. In this contribution, models of species' ecological niches are developed using an artificial-intelligence algorithm, and projected onto geography to predict species' distributions. To test the validity of this approach, I used North American Breeding Bird Survey data, with large sample sizes for many species. I omitted randomly selected states from model building, and tested models using the omitted states. For the 34 species tested, all predictions were highly statistically significant (all P < 0.001), indicating excellent predictive ability. This inferential capacity opens doors to many synthetic analyses based on primary point occurrence data.
13

Rakočević, Stevan, Martin Ćalasan, and Tatjana Konjić. "Analysis of the voltage profile and power losses in distribution network with renewable energy sources using CONOPT solver." Tehnika 75, no. 6 (2020): 749–55. http://dx.doi.org/10.5937/tehnika2006749r.

Abstract:
In this paper, the CONOPT solver, embedded in the program GAMS, is proposed for optimal power flow analysis in a distribution network with renewable energy sources. The CONOPT solver's capabilities have been tested on the IEEE 33-bus test system by solving the problem of minimizing active power losses in the network. Locations and sizes of renewable energy sources were taken from the available literature. The results obtained using the CONOPT solver have been compared with results obtained using metaheuristic and hybrid algorithms. It is shown that the CONOPT solver gives better results in terms of the minimum values of active power losses.
14

Wolff, Joachim, Rolf Backofen, and Björn Grüning. "Robust and efficient single-cell Hi-C clustering with approximate k-nearest neighbor graphs." Bioinformatics, May 22, 2021. http://dx.doi.org/10.1093/bioinformatics/btab394.

Abstract:
Motivation: Hi-C technology provides insights into the 3D organization of chromatin, and the single-cell Hi-C method enables researchers to gain knowledge about the chromatin state at the individual-cell level. Single-cell Hi-C interaction matrices are high dimensional and very sparse. To cluster thousands of single-cell Hi-C interaction matrices, they are flattened and compiled into one matrix. Depending on the resolution, this matrix can have a few million or even billions of features; therefore, computations can be memory intensive. We present a single-cell Hi-C clustering approach using an approximate nearest neighbors method based on locality-sensitive hashing to reduce the dimensions and the computational resources. Results: The presented method can process a 10 kb single-cell Hi-C dataset with 2600 cells and needs 40 GB of memory, while competitive approaches are not computable even with 1 TB of memory. It can be shown that the differentiation of the cells by their chromatin folding properties and, therefore, the quality of the clustering of single-cell Hi-C data is advantageous compared to competitive algorithms. Availability and implementation: The presented clustering algorithm is part of the scHiCExplorer, available on GitHub at https://github.com/joachimwolff/scHiCExplorer and as a conda package via the bioconda channel. The approximate nearest neighbors implementation is available via https://github.com/joachimwolff/sparse-neighbors-search and as a conda package via the bioconda channel. Supplementary information: Supplementary data are available at Bioinformatics online.
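
The pipeline described above can be sketched with scikit-learn stand-ins: flatten the sparse per-cell matrices, build a k-nearest-neighbor graph, and cluster it. Note that the kNN step below is exact; the paper's contribution is replacing it with a locality-sensitive-hashing approximation to cut memory and compute. The data here are random toy values, not Hi-C matrices.

```python
# Sparse cells -> kNN graph -> spectral clustering (toy illustration).
from scipy.sparse import random as sparse_random
from sklearn.neighbors import kneighbors_graph
from sklearn.cluster import SpectralClustering

# 260 flattened "cells" with 100,000 features and very few nonzeros each
cells = sparse_random(260, 100_000, density=1e-4, format="csr", random_state=0)

# exact kNN distance graph, the step the paper approximates with LSH
knn = kneighbors_graph(cells, n_neighbors=15, mode="distance")

labels = SpectralClustering(n_clusters=4,
                            affinity="precomputed_nearest_neighbors",
                            n_neighbors=15,
                            random_state=0).fit_predict(knn)
print(labels[:10])
```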
15

"Identification of Contact Lens Types for Refractive Errors Using Iterative Dicotomiser3 (ID3) Algorithm." International Journal of Science and Research (IJSR) 5, no. 4 (2016): 257–61. http://dx.doi.org/10.21275/v5i4.nov162540.

16

Liber, Julian A., Gregory Bonito, and Gian Maria Niccolò Benucci. "CONSTAX2: improved taxonomic classification of environmental DNA markers." Bioinformatics, May 7, 2021. http://dx.doi.org/10.1093/bioinformatics/btab347.

Abstract:
Summary: CONSTAX—the CONSensus TAXonomy classifier—was developed for accurate and reproducible taxonomic annotation of fungal rDNA amplicon sequences and is based upon a consensus approach of the RDP, SINTAX, and UTAX algorithms. CONSTAX2 extends these features to classify prokaryotes as well as eukaryotes and incorporates BLAST-based classifiers to reduce classification errors. Additionally, CONSTAX2 implements a conda-installable command-line tool with improved classification metrics, faster training, multithreading support, the capacity to incorporate external taxonomic databases, and new isolate-matching and high-level taxonomy tools, replete with documentation and example tutorials. Availability and implementation: CONSTAX2 is available at https://github.com/liberjul/CONSTAXv2, and is packaged for Linux and MacOS from Bioconda for use under the MIT License. A tutorial and documentation are available at https://constax.readthedocs.io/en/latest/. Data and scripts associated with the manuscript are available at https://github.com/liberjul/CONSTAXv2_ms_code. Supplementary information: Supplementary data are available at Bioinformatics online.
17

Schmitz, D., S. Nooij, T. Janssens, et al. "A43 Translational research: NGS metagenomics into clinical diagnostics." Virus Evolution 5, Supplement_1 (2019). http://dx.doi.org/10.1093/ve/vez002.042.

Abstract:
As research next-generation sequencing (NGS) metagenomic pipelines transition to clinical diagnostics, the user base changes from bioinformaticians to biologists, medical doctors, and lab technicians. Besides the obvious need for benchmarking and assessment of the diagnostic outcomes of the pipelines and tools, other focus points remain: reproducibility, data immutability, user-friendliness, portability/scalability, privacy, and a clear audit trail. We have a research metagenomics pipeline that takes raw fastq files and produces annotated contigs, but it is too complicated for non-bioinformaticians. Here, we present preliminary findings in adapting this pipeline for clinical diagnostics. We used information available on relevant fora (www.bioinfo-core.org) and experiences and publications from colleague bioinformaticians in other institutes (COMPARE, UBC, and LUMC). From this information, a robust and user-friendly storage and analysis workflow was designed for non-bioinformaticians in a clinical setting. Via Conda [https://conda.io] and Docker containers [http://www.docker.com], we made our disparate pipeline processes self-contained and reproducible. Furthermore, we moved all pipeline settings into a separate JSON file. After every analysis, the pipeline settings and virtual-environment recipes will be archived (immutably) under a persistent unique identifier. This allows long-term precise reproducibility. Likewise, after every run the raw data and final products will be automatically archived, complying with data retention laws/guidelines. All the disparate processes in the pipeline are parallelized and automated via Snakemake (i.e. end-users need no coding skills). In addition, interactive web-reports such as MultiQC [http://multiqc.info] and Krona are generated automatically. By combining Snakemake, Conda, and containers, our pipeline is highly portable and easily scaled up for outbreak situations, or scaled down to reduce costs. Since patient privacy is a concern, our pipeline automatically removes human genetic data. Moreover, all source code will be stored on an internal GitLab server, which, combined with the archived data, ensures a clear audit trail. Nevertheless, challenges remain: (1) reproducible reference databases, e.g. being able to revert to an older version to reproduce old analyses; (2) a user-friendly GUI; (3) connecting the pipeline and NGS data to in-house LIMS; and (4) efficient long-term storage, e.g. lossless compression algorithms. Still, this work represents a step forward in making user-friendly clinical diagnostic workflows.
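
As a concrete illustration of the Snakemake-plus-Conda pattern described above, here is a minimal hypothetical rule (Snakemake rules are a Python-based DSL): each rule declares a pinned Conda environment, so the workflow runs identically everywhere and end-users never touch code. The file paths, environment file, and FastQC step are invented for the example.

```python
# Snakefile sketch: run with `snakemake --use-conda` so the env is honored.
rule fastqc:
    input:
        "raw/{sample}.fastq.gz"
    output:
        "qc/{sample}_fastqc.html"
    conda:
        "envs/qc.yaml"      # pinned tool versions; archive this with each run
    shell:
        "fastqc {input} --outdir qc"
```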
18

Weißbecker, Christina, Beatrix Schnabel, and Anna Heintz-Buschart. "Dadasnake, a Snakemake implementation of DADA2 to process amplicon sequencing data for microbial ecology." GigaScience 9, no. 12 (2020). http://dx.doi.org/10.1093/gigascience/giaa135.

Abstract:
Background: Amplicon sequencing of phylogenetic marker genes, e.g., 16S, 18S, or ITS ribosomal RNA sequences, is still the most commonly used method to determine the composition of microbial communities. Microbial ecologists often have expert knowledge on their biological question and data analysis in general, and most research institutes have computational infrastructures to use the bioinformatics command line tools and workflows for amplicon sequencing analysis, but requirements of bioinformatics skills often limit the efficient and up-to-date use of computational resources. Results: We present dadasnake, a user-friendly, 1-command Snakemake pipeline that wraps the preprocessing of sequencing reads and the delineation of exact sequence variants by using the favorably benchmarked and widely used DADA2 algorithm with a taxonomic classification and the post-processing of the resultant tables, including hand-off in standard formats. The suitability of the provided default configurations is demonstrated using mock community data from bacteria and archaea, as well as fungi. Conclusions: By use of Snakemake, dadasnake makes efficient use of high-performance computing infrastructures. Easy user configuration guarantees flexibility of all steps, including the processing of data from multiple sequencing platforms. It is easy to install dadasnake via conda environments. dadasnake is available at https://github.com/a-h-b/dadasnake.
19

Keyel, Edward R., Matthew A. Etterson, Gerald J. Niemi, et al. "Feather mercury increases with feeding at higher trophic levels in two species of migrant raptors, Merlin (Falco columbarius) and Sharp-shinned Hawk (Accipiter striatus)." Condor 122, no. 2 (2020). http://dx.doi.org/10.1093/condor/duz069.

Abstract:
Mercury (Hg) is a toxic heavy metal that, when methylated to form methylmercury (MeHg), bioaccumulates in exposed animals and biomagnifies through food webs. The purpose of this study was to assess Hg concentrations in raptors migrating through the upper midwestern USA. From 2009 to 2012, 966 raptors of 11 species were captured at Hawk Ridge, Duluth, Minnesota, USA. Breast feathers were sampled to determine the concentration of total Hg. Mean Hg concentrations ranged from 0.11 to 3.46 μg g−1 fresh weight across species and were generally higher in raptors that feed on birds in comparison with those that feed on mammals. To evaluate the effect of dietary sources on Hg biomagnification, carbon and nitrogen stable isotope ratios were measured in feathers of the 2 species with the highest Hg concentrations, Merlin (Falco columbarius) and Sharp-shinned Hawk (Accipiter striatus). Measured δ13C values were similar in both species and indicated a primarily terrestrial-derived diet, whereas δ15N values suggested that individual Merlin and Sharp-shinned Hawk feeding at higher trophic levels accumulated higher concentrations of Hg. The risk to birds associated with measured levels of feather Hg was evaluated by calculating blood-equivalent values using an established algorithm. Predicted blood values were then compared to heuristic risk categories synthesized across avian orders. This analysis suggested that while some Merlin and Sharp-shinned Hawk were at moderate risk to adverse effects of MeHg, most of the sampled birds were at negligible or low risk.
20

Labbé, Geneviève, Peter Kruczkiewicz, James Robertson, et al. "Rapid and accurate SNP genotyping of clonal bacterial pathogens with BioHansel." Microbial Genomics 7, no. 9 (2021). http://dx.doi.org/10.1099/mgen.0.000651.

Abstract:
Hierarchical genotyping approaches can provide insights into the source, geography and temporal distribution of bacterial pathogens. Multiple hierarchical SNP genotyping schemes have previously been developed so that new isolates can rapidly be placed within pre-computed population structures, without the need to rebuild phylogenetic trees for the entire dataset. This classification approach has, however, seen limited uptake in routine public health settings due to analytical complexity and the lack of standardized tools that provide clear and easy ways to interpret results. The BioHansel tool was developed to provide an organism-agnostic tool for hierarchical SNP-based genotyping. The tool identifies split k-mers that distinguish predefined lineages in whole genome sequencing (WGS) data using SNP-based genotyping schemes. BioHansel uses the Aho-Corasick algorithm to type isolates from assembled genomes or raw read sequence data in a matter of seconds, with limited computational resources. This makes BioHansel ideal for use by public health agencies that rely on WGS methods for surveillance of bacterial pathogens. Genotyping results are evaluated using a quality assurance module which identifies problematic samples, such as low-quality or contaminated datasets. Using existing hierarchical SNP schemes for Mycobacterium tuberculosis and Salmonella Typhi, we compare the genotyping results obtained with the k-mer-based tools BioHansel and SKA, with those of the organism-specific tools TBProfiler and genotyphi, which use gold-standard reference-mapping approaches. We show that the genotyping results are fully concordant across these different methods, and that the k-mer-based tools are significantly faster. We also test the ability of the BioHansel quality assurance module to detect intra-lineage contamination and demonstrate that it is effective, even in populations with low genetic diversity. We demonstrate the scalability of the tool using a dataset of ~8100 S. Typhi public genomes and provide the aggregated results of geographical distributions as part of the tool’s output. BioHansel is an open source Python 3 application available on PyPI and Conda repositories and as a Galaxy tool from the public Galaxy Toolshed. In a public health context, BioHansel enables rapid and high-resolution classification of bacterial pathogens with low genetic diversity.
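
The matching step the abstract attributes to BioHansel — scanning sequences for scheme-defining k-mers with the Aho-Corasick algorithm — can be sketched with the pyahocorasick library. The scheme k-mers, genotype labels, and contig below are invented for illustration; this is not BioHansel's own code.

```python
# Aho-Corasick multi-pattern search: build one automaton over all scheme
# k-mers, then scan each sequence in a single pass.
import ahocorasick  # pip install pyahocorasick

scheme = {"ACGTACGTACGTACGT": "lineage-1",     # hypothetical scheme k-mers
          "TTGCATTGCATTGCAT": "lineage-2.1"}
automaton = ahocorasick.Automaton()
for kmer, genotype in scheme.items():
    automaton.add_word(kmer, (kmer, genotype))
automaton.make_automaton()

contig = "NNNACGTACGTACGTACGTNNN"              # toy input sequence
for end, (kmer, genotype) in automaton.iter(contig):
    start = end - len(kmer) + 1
    print(f"hit {kmer} at position {start}: supports {genotype}")
```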