To see other types of publications on this topic, follow the link: Interface alignment tool.

Journal articles on the topic "Interface alignment tool"

Create a correct reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic "Interface alignment tool".

Next to every source in the list of references there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication as a PDF and read its abstract online whenever this information is included in the metadata.

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Hall, Nicholas J., David Miguel Susano Pinto, and Ian M. Dobbie. "BeamDelta: simple alignment tool for optical systems." Wellcome Open Research 4 (December 6, 2019): 194. http://dx.doi.org/10.12688/wellcomeopenres.15576.1.

Full text
Abstract:
BeamDelta is a tool to help align optical systems. It greatly assists in assembling bespoke optical systems by providing a live view of the current laser beam position and a reference position. Even a simple optical setup has multiple degrees of freedom that affect the alignment of beam paths. These degrees of freedom rise exponentially with the complexity of the system. The process of aligning all the optical components for a specific system is often esoteric and poorly documented, if it is documented at all. Alignment methods often rely on visual inspection of beams impinging on pinholes in the beam path, typically requiring an experienced operator to stare at diffuse reflections for extended periods of time. This can lead to a decline in accuracy due to eye strain and flash blindness, as well as symptoms such as headaches and, possibly, more serious retinal damage. Here we present BeamDelta, a simple alignment tool and accompanying software interface which allows users to obtain accurate alignment while removing the need to stare at diffuse laser reflections. BeamDelta is a robust alignment tool as it doesn't require any precise alignment itself.
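The core computation behind such a live view is small: an intensity-weighted centroid of the camera frame is compared against a stored reference position. A minimal NumPy sketch of that idea (function names are ours, not BeamDelta's):

    import numpy as np

    def beam_centroid(frame):
        """Intensity-weighted centroid (x, y) of a camera frame."""
        frame = frame.astype(float)
        ys, xs = np.indices(frame.shape)
        total = frame.sum()
        return (xs * frame).sum() / total, (ys * frame).sum() / total

    def beam_delta(frame, reference_xy):
        """Offset of the current beam from the stored reference position."""
        cx, cy = beam_centroid(frame)
        return cx - reference_xy[0], cy - reference_xy[1]

Driving an adjustment screw until both components of the delta approach zero then becomes a purely mechanical task, with no staring at reflections required.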
APA, Harvard, Vancouver, ISO, and other styles
2

Tu, Shin-Lin, Jeannette Staheli, Colum McClay, Kathleen McLeod, Timothy Rose, and Chris Upton. "Base-By-Base Version 3: New Comparative Tools for Large Virus Genomes." Viruses 10, no. 11 (November 15, 2018): 637. http://dx.doi.org/10.3390/v10110637.

Full text
Abstract:
Base-By-Base is a comprehensive tool for the creation and editing of multiple sequence alignments that is coded in Java and runs on multiple platforms. It can be used with gene and protein sequences as well as with large viral genomes, which themselves can contain gene annotations. This report describes new features added to Base-By-Base over the last 7 years. The two most significant additions are: (1) the recoding and inclusion of "consensus-degenerate hybrid oligonucleotide primers" (CODEHOP), a popular tool for the design of degenerate primers from a multiple sequence alignment of proteins; and (2) the ability to perform fuzzy searches within the columns of sequence data in multiple sequence alignments to determine the distribution of sequence variants among the sequences. The intuitive interface focuses on presenting results in easily understood visualizations and on providing the ability to annotate the sequences in a multiple alignment with analytic and user data.
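At its simplest, the "distribution of sequence variants" query over alignment columns reduces to tallying the strings observed in a column slice. A toy sketch of that step (our own illustration, not Base-By-Base code):

    from collections import Counter

    def column_variants(msa, start, end):
        """Distribution of variants over a column slice of an alignment."""
        return Counter(row[start:end] for row in msa)

    msa = ["ATGGCA--T", "ATGACA--T", "ATGGCAGGT"]
    print(column_variants(msa, 3, 6))  # Counter({'GCA': 2, 'ACA': 1})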
APA, Harvard, Vancouver, ISO, and other styles
3

Finney, Richard P., Qing-Rong Chen, Cu V. Nguyen, Chih Hao Hsu, Chunhua Yan, Ying Hu, Massih Abawi, Xiaopeng Bian, and Daoud M. Meerzaman. "Alview: Portable Software for Viewing Sequence Reads in BAM Formatted Files." Cancer Informatics 14 (January 2015): CIN.S26470. http://dx.doi.org/10.4137/cin.s26470.

Full text
Abstract:
The name Alview is a contraction of the term Alignment Viewer. Alview is a software tool, compiled to native code, for visualizing the alignment of sequencing data. Inputs are files of short-read sequences aligned to a reference genome in the SAM/BAM format and files containing reference genome data. Outputs are visualizations of these aligned short reads. Alview is written in portable C with optional graphical user interface (GUI) code written in C, C++, and Objective-C. The application can run in three different ways: as a web server, as a command line tool, or as a native GUI program. Alview is compatible with Microsoft Windows, Linux, and Apple OS X. It is available as a web demo at https://cgwb.nci.nih.gov/cgi-bin/alview. The source code and Windows/Mac/Linux executables are available via https://github.com/NCIP/alview.
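Alview itself is written in C, but the task it performs — walking the reads a BAM file aligns to a region — is easy to reproduce for scripting purposes with the third-party pysam library. A sketch with a placeholder file name and coordinates:

    import pysam

    # Open a coordinate-sorted, indexed BAM and iterate reads over a region.
    with pysam.AlignmentFile("sample.bam", "rb") as bam:
        for read in bam.fetch("chr1", 100_000, 100_200):
            if read.is_unmapped:
                continue
            print(read.query_name, read.reference_start, read.cigarstring)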
APA, Harvard, Vancouver, ISO, and other styles
4

West, Ruth, Jeff Burke, Cheryl Kerfeld, Eitan Mendelowitz, Thomas Holton, J. P. Lewis, Ethan Drucker, and Weihong Yan. "Both and Neither: in silico v1.0, Ecce Homology." Leonardo 38, no. 4 (August 2005): 286–93. http://dx.doi.org/10.1162/0024094054762089.

Full text
Abstract:
Ecce Homology, a physically interactive new-media work, visualizes genetic data as calligraphic forms. A novel computer-vision user interface allows multiple participants, through their movement in the installation space, to select genes from the human genome for visualizing the Basic Local Alignment Search Tool (BLAST), a primary algorithm in comparative genomics. Ecce Homology was successfully installed in the UCLA Fowler Museum, 6 November 2003–4 January 2004.
APA, Harvard, Vancouver, ISO, and other styles
5

Randhawa, Gurjit S., Kathleen A. Hill, and Lila Kari. "MLDSP-GUI: an alignment-free standalone tool with an interactive graphical user interface for DNA sequence comparison and analysis." Bioinformatics 36, no. 7 (December 13, 2019): 2258–59. http://dx.doi.org/10.1093/bioinformatics/btz918.

Full text
Abstract:
Summary: Machine Learning with Digital Signal Processing and Graphical User Interface (MLDSP-GUI) is an open-source, alignment-free, ultrafast, computationally lightweight, and standalone software tool with an interactive GUI for comparison and analysis of DNA sequences. MLDSP-GUI is a general-purpose tool that can be used for a variety of applications such as taxonomic classification, disease classification, virus subtype classification, and evolutionary analyses, among others.
Availability and implementation: MLDSP-GUI is open-source, cross-platform compatible, and is available under the terms of the Creative Commons Attribution 4.0 International license (http://creativecommons.org/licenses/by/4.0/). The executable and dataset files are available at https://sourceforge.net/projects/mldsp-gui/.
Supplementary information: Supplementary data are available at Bioinformatics online.
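The "digital signal processing" step this family of tools relies on can be sketched in a few lines: map bases to numbers, take the magnitude spectrum of the resulting signal, and compare spectra with a correlation distance. This is our own illustration of the general approach, not MLDSP-GUI code, and the purine/pyrimidine mapping is just one of several numeric representations such tools support:

    import numpy as np

    MAPPING = {"A": 1.0, "G": 1.0, "C": -1.0, "T": -1.0}  # purines vs. pyrimidines

    def magnitude_spectrum(seq, length):
        """Magnitude spectrum of a DNA signal zero-padded/truncated to `length`."""
        signal = np.zeros(length)
        vals = [MAPPING.get(b, 0.0) for b in seq.upper()][:length]
        signal[:len(vals)] = vals
        return np.abs(np.fft.fft(signal))

    def spectral_distance(s1, s2):
        """1 - Pearson correlation between the two magnitude spectra."""
        n = max(len(s1), len(s2))
        a, b = magnitude_spectrum(s1, n), magnitude_spectrum(s2, n)
        return 1.0 - np.corrcoef(a, b)[0, 1]

    print(spectral_distance("ACGTACGTGG", "ACGTTCGTGA"))

Because the spectra have a fixed length regardless of sequence content, the pairwise distances can feed directly into standard classifiers, which is what makes the approach alignment-free.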
APA, Harvard, Vancouver, ISO, and other styles
6

Shirshikov, Fedor V., Yuri A. Pekov, and Konstantin A. Miroshnikov. "MorphoCatcher: a multiple-alignment based web tool for target selection and designing taxon-specific primers in the loop-mediated isothermal amplification method." PeerJ 7 (April 26, 2019): e6801. http://dx.doi.org/10.7717/peerj.6801.

Full text
Abstract:
Background: The advantages of loop-mediated isothermal amplification in molecular diagnostics make the method a promising technology for nucleic acid detection in agriculture and medicine. A bioinformatics tool that provides rapid screening and selection of target nucleotide sequences, with subsequent taxon-specific primer design toward polymorphic orthologous genes rather than only unique or conserved common regions of a genome, would contribute to the development of more specific and sensitive diagnostic assays. However, given the features of the original primer-selection software, PrimerExplorer (Eiken Chemical Co. LTD, Tokyo, Japan), taxon-specific primer design using multiple sequence alignments of orthologs, or even of viral genomes with conserved architecture, remains complicated.
Findings: Here, MorphoCatcher is introduced as a fast and simple web plugin for PrimerExplorer with a clear interface. It enables a multiple-alignment based search for taxon-specific mutations, visual screening and selection of target sequences, and easy-to-start specific primer design using the PrimerExplorer software. The combination of MorphoCatcher and PrimerExplorer allows multiple alignments of orthologs to be processed into an informative sliding-window plot, which is used to identify the sequence regions with a high density of taxon-specific mutations and cover them with the primer ends for better specificity of amplification.
Conclusions: We hope that this new bioinformatics tool for target selection and taxon-specific primer design, called MorphoCatcher, will bring the loop-mediated isothermal amplification method greater popularity in the molecular diagnostics community. MorphoCatcher is a simple web plugin tool for the PrimerExplorer software and is freely available, for non-commercial and academic users only, at http://morphocatcher.ru.
APA, Harvard, Vancouver, ISO, and other styles
7

Singh, Anil Kumar. "A Set of Annotation Interfaces for Alignment of Parallel Corpora." Prague Bulletin of Mathematical Linguistics 102, no. 1 (September 11, 2014): 57–68. http://dx.doi.org/10.2478/pralin-2014-0014.

Full text
Abstract:
Annotation interfaces for parallel corpora which fit in well with other tools can be very useful. We describe a set of annotation interfaces which fulfill this criterion. This set includes a sentence alignment interface, two different word or word-group alignment interfaces and an initial version of a parallel syntactic annotation alignment interface. These tools can be used for manual alignment, or they can be used to correct automatic alignments. Manual alignment can be performed in combination with certain kinds of linguistic annotation. Most of these interfaces use a representation called the Shakti Standard Format that has been found to be very robust and has been used for large and successful projects. It ties together the different interfaces, so that the data created by them is portable across all tools which support this representation. The existence of a query language for data stored in this representation makes it possible to build tools that allow easy search and modification of annotated parallel data.
APA, Harvard, Vancouver, ISO, and other styles
8

Cheng, De Wen, Yong Tian Wang, M. M. Talha, and Jun Chang. "Modeling and Tolerancing for Complex Aperture Imaging Systems." Key Engineering Materials 364-366 (December 2007): 1268–73. http://dx.doi.org/10.4028/www.scientific.net/kem.364-366.1268.

Full text
Abstract:
To analyze the effects of fabrication and alignment errors on optical systems with complex apertures, such as a segmented telescope, a Visual Basic routine (VBR) was developed using CODE V's COM interface. One of the salient features of this VBR is its graphical user interface (GUI), which lets the user model different types of segmentation quickly and precisely. The input parameters describe the basic shapes of the segments (polygon or sector), the chamfers at their corners, and the gaps between them. Fabrication errors (i.e., surface errors) and alignment errors (such as decenters and tilts for each segment) can also be introduced easily and effectively through the GUI. Geometrical and diffraction-based analyses can then be performed to study the effects of these errors on the imaging quality of the optical system. Results of tolerance analysis are presented for a test system, which requires a very stringent tolerance to be placed on the discrepancy among the radii of curvature of the different segments. In short, the VBR is a user-friendly and flexible tool with application scope in the design of complex-aperture optical systems.
APA, Harvard, Vancouver, ISO, and other styles
9

Pabo, Eric F., Christoph Floetgen, Bernhard Rebhan, and Razek Nasser. "Advances in Aligned Wafer Bonding Enabled by High Vacuum Processing." Additional Conferences (Device Packaging, HiTEC, HiTEN, and CICMT) 2016, DPC (January 1, 2016): 000488–541. http://dx.doi.org/10.4071/2016dpc-ta33.

Full text
Abstract:
High-volume aligned wafer bonding processes typically separate the wafer-to-wafer alignment process from the wafer bonding process, and this wafer-to-wafer alignment is normally done in an ambient atmosphere. While this process flow has worked well and enabled the proliferation of MEMS devices in the last decade, it does have limitations. The primary issue is the exposure to water vapor and ambient atmosphere, which limits the preprocessing that can be done and maintained on the wafers to be bonded. Performing the wafer-to-wafer alignment, handling, and wafer bonding in a high vacuum environment allows specialized preprocessing of the wafers prior to alignment and bonding. The most basic preprocessing enabled by this high vacuum environment is the open-face dehydration bake of wafers prior to alignment and bonding. When done in a cluster tool, a chamber can be dedicated to baking out the wafers to minimize the effect of outgassing on the final vacuum level in the MEMS device. If one wafer needs a high-temperature bakeout and getter activation while the other wafer is limited to a low-temperature bakeout, this is possible by using two chambers in the cluster tool: one for the high-temperature bakeout and one for the low-temperature bakeout. Microbolometers that use vanadium oxide as the sensor layer are an example of a device needing high- and low-temperature bakeouts. Another preprocessing step enabled by the high vacuum cluster tool is a surface treatment which removes oxides from the surface, increases the surface energy, and enables the formation of covalent bonds at room temperature in the case of Si-Si bonding. This low-temperature covalent bond has been shown to have an oxide-free interface with a minimized amorphous layer as well as very low metal contamination. Also, because the bonding is done at or near room temperature, it is possible to bond materials with substantially different CTEs, such as GaN to SiC. This new technology will enable improved vacuum encapsulation as well as the manufacture of new, high-performance engineered substrates. The latest process results as well as process flows and required equipment capabilities will be presented.
APA, Harvard, Vancouver, ISO, and other styles
10

Agüero-Chapin, Guillermin, Deborah Galpert, Reinaldo Molina-Ruiz, Evys Ancede-Gallardo, Gisselle Pérez-Machado, Gustavo A. De la Riva, and Agostinho Antunes. "Graph Theory-Based Sequence Descriptors as Remote Homology Predictors." Biomolecules 10, no. 1 (December 23, 2019): 26. http://dx.doi.org/10.3390/biom10010026.

Full text
Abstract:
Alignment-free (AF) methodologies have increased in popularity in the last decades as alternative tools to alignment-based (AB) algorithms for performing comparative sequence analyses. They have been especially useful for detecting remote homologs within the twilight zone of highly diverse gene/protein families and superfamilies. The most popular alignment-free methodologies, as well as their applications to classification problems, have been described in previous reviews. However, a newer set of graph theory-derived sequence/structural descriptors, despite gaining relevance in the detection of remote homology, has been omitted from the AF predictors covered when the topic is addressed. Here, we first go over the most popular AF approaches used for detecting homology signals within the twilight zone and then bring out the state-of-the-art tools encoding graph theory-derived sequence/structure descriptors and their success in identifying remote homologs. We also highlight the tendency to integrate AF features/measures with AB ones, either in the same prediction model or by assembling the predictions from different algorithms using voting/weighting strategies, to improve the detection of remote signals. Lastly, we briefly discuss the efforts made to scale up AB and AF features/measures for the comparison of multiple genomes and proteomes. Alongside the experience in remote homology detection accumulated by both the most popular AF tools and other less known ones, we provide our own, using the graphical–numerical methodologies MARCH-INSIDE, TI2BioP, and ProtDCal. We also present a new Python-based tool (SeqDivA) with a friendly graphical user interface (GUI) for delimiting the twilight zone by using several similarity criteria.
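For readers new to the area, the simplest alignment-free measures compare normalized k-mer frequency profiles; the graph-derived descriptors surveyed here are more elaborate numerical encodings built in a similar spirit. A minimal sketch of the k-mer variant (our illustration only, not from the paper):

    from collections import Counter
    from itertools import product
    import math

    def kmer_profile(seq, k=3):
        """Normalized frequency of every possible DNA k-mer in `seq`."""
        counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
        total = sum(counts.values()) or 1
        return {kmer: counts.get(kmer, 0) / total
                for kmer in ("".join(p) for p in product("ACGT", repeat=k))}

    def af_distance(s1, s2, k=3):
        """Euclidean distance between two k-mer profiles (alignment-free)."""
        p, q = kmer_profile(s1, k), kmer_profile(s2, k)
        return math.sqrt(sum((p[m] - q[m]) ** 2 for m in p))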
APA, Harvard, Vancouver, ISO, and other styles
11

Yang, Jianfeng, Xiaofan Ding, Xing Sun, Shui-Ying Tsang, and Hong Xue. "SAMSVM: A tool for misalignment filtration of SAM-format sequences with support vector machine." Journal of Bioinformatics and Computational Biology 13, no. 06 (December 2015): 1550025. http://dx.doi.org/10.1142/s0219720015500250.

Full text
Abstract:
Sequence alignment/map (SAM) formatted sequences [Li H, Handsaker B, Wysoker A et al., Bioinformatics 25(16):2078–2079, 2009.] have taken on a main role in bioinformatics since the development of massive parallel sequencing. However, because misalignment of sequences poses a significant problem in analysis of sequencing data that could lead to false positives in variant calling, the exclusion of misaligned reads is a necessity in analysis. In this regard, the multiple features of SAM-formatted sequences can be treated as vectors in a multi-dimension space to allow the application of a support vector machine (SVM). Applying the LIBSVM tools developed by Chang and Lin [Chang C-C, Lin C-J, ACM Trans Intell Syst Technol 2:1–27, 2011.] as a simple interface for support vector classification, the SAMSVM package has been developed in this study to enable misalignment filtration of SAM-formatted sequences. Cross-validation between two simulated datasets processed with SAMSVM yielded accuracies that ranged from 0.89 to 0.97 with F-scores ranging from 0.77 to 0.94 in 14 groups characterized by different mutation rates from 0.001 to 0.1, indicating that the model built using SAMSVM was accurate in misalignment detection. Application of SAMSVM to actual sequencing data resulted in filtration of misaligned reads and correction of variant calling.
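SAMSVM builds on LIBSVM; the same classification step can be sketched with scikit-learn's SVC. The feature set below (mapping quality, edit distance, clipped bases) is illustrative only — the package derives its own features from SAM fields:

    import numpy as np
    from sklearn.svm import SVC

    # Hypothetical per-read feature vectors: [MAPQ, edit distance, clipped bases].
    X_train = np.array([[60, 0, 0], [58, 1, 2], [7, 9, 40], [3, 12, 55]])
    y_train = np.array([1, 1, 0, 0])       # 1 = well aligned, 0 = misaligned

    clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_train, y_train)
    print(clf.predict([[55, 2, 4]]))       # -> [1], i.e. keep the read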
APA, Harvard, Vancouver, ISO, and other styles
12

Mathema, Vivek Bhakta, Arjen M. Dondorp, and Mallika Imwong. "OSTRFPD: Multifunctional Tool for Genome-Wide Short Tandem Repeat Analysis for DNA, Transcripts, and Amino Acid Sequences with Integrated Primer Designer." Evolutionary Bioinformatics 15 (January 2019): 117693431984313. http://dx.doi.org/10.1177/1176934319843130.

Full text
Abstract:
Microsatellite mining is a common outcome of the in silico approach to genomic studies. The resulting short tandemly repeated DNA could be used as molecular markers for studying polymorphism, genotyping and forensics. The omni short tandem repeat finder and primer designer (OSTRFPD) is among the few versatile, platform-independent open-source tools written in Python that enables researchers to identify and analyse genome-wide short tandem repeats in both nucleic acids and protein sequences. OSTRFPD is designed to run either in a user-friendly fully featured graphical interface or in a command line interface mode for advanced users. OSTRFPD can detect both perfect and imperfect repeats of low complexity with customisable scores. Moreover, the software has built-in architecture to simultaneously filter selection of flanking regions in DNA and generate microsatellite-targeted primers implementing the Primer3 platform. The software has built-in motif-sequence generator engines and an additional option to use the dictionary mode for custom motif searches. The software generates search results including general statistics containing motif categorisation, repeat frequencies, densities, coverage, guanine–cytosine (GC) content, and simple text-based imperfect alignment visualisation. Thus, OSTRFPD presents users with a quick single-step solution package to assist development of microsatellite markers and categorise tandemly repeated amino acids in proteome databases. Practical implementation of OSTRFPD was demonstrated using publicly available whole-genome sequences of selected Plasmodium species. OSTRFPD is freely available and open-sourced for improvement and user-specific adaptation.
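At its core, perfect-repeat detection of the kind OSTRFPD performs can be expressed as a backreference regular-expression scan. A compact sketch (our illustration; OSTRFPD's own engine also handles imperfect repeats and scoring):

    import re

    def find_perfect_strs(seq, min_unit=1, max_unit=6, min_repeats=3):
        """Yield (start, unit, copies) for perfect short tandem repeats."""
        pattern = re.compile(
            r"(?=((\w{%d,%d}?)\2{%d,}))" % (min_unit, max_unit, min_repeats - 1))
        for m in pattern.finditer(seq):
            unit = m.group(2)
            yield m.start(), unit, len(m.group(1)) // len(unit)

    for hit in find_perfect_strs("TTAGCAGCAGCAGCTT"):
        print(hit)  # (2, 'AGC', 4) first, then overlapping/rotated hits
                    # that a production tool would merge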
APA, Harvard, Vancouver, ISO, and other styles
13

Alotaibi, Hind M. "AEPC: Designing an Arabic/English parallel corpus." Research in Corpus Linguistics 4 (2016): 1–7. http://dx.doi.org/10.32714/ricl.04.01.

Full text
Abstract:
Parallel corpora ‒ collections of aligned translated texts in two or more languages ‒ play a significant role in translation and contrastive studies. Despite the importance of such resources for the education and training of translators, Arabic suffers from a lack of them. Although a limited number of free Arabic/English parallel corpora exist, a major drawback is that they are domain-restricted, which limits their benefits for Arabic translation education. This paper describes an ongoing project to design and construct a balanced, representative, and free-to-use Arabic/English parallel corpus (AEPC). In addition, the project involves the design and implementation of an Arabic/English concordance tool. The proposed parallel corpus and its tool can be integrated into translator-training institutions as an educational resource for translation studies and teaching, and can be used in training and testing Arabic/English machine translation systems. The first phase of this project involved compiling high-quality translated text samples; all translations were done by human translators. The corpus covers a wide range of text types and rich metadata. The target size of the corpus is at least 10 million words, with the intention of increasing that figure in the future. After compiling the texts, manual (i.e., human-aided) alignment was performed, offering better accuracy than automated alignment. The second phase of this project involved designing a web interface with a bilingual concordancer, where users can explore the content of the AEPC in both English and Arabic.
APA, Harvard, Vancouver, ISO, and other styles
14

Siah, Chun Fei, Lucas Yu Xiang Lum, Jianxiong Wang, Simon Chun Kiat Goh, Chong Wei Tan, Liangxing Hu, Philippe Coquet, Hong Li, Chuan Seng Tan, and Beng Kang Tay. "Development of a CMOS-Compatible Carbon Nanotube Array Transfer Method." Micromachines 12, no. 1 (January 18, 2021): 95. http://dx.doi.org/10.3390/mi12010095.

Full text
Abstract:
Carbon nanotubes (CNTs) have, over the years, been studied as a promising material in electronics, serving as a thermal interface material and as interconnects, among other applications. However, several issues prevent the widespread integration of CNTs into device applications, e.g., high growth temperature and interfacial resistance. To overcome these issues, a complementary metal oxide semiconductor (CMOS)-compatible CNT array transfer method that electrically connects the CNT arrays to target device substrates was developed. The method separates the CNT growth and preparation steps from the target substrate. Utilizing an alignment tool capable of thermocompression enables a highly accurate transfer of CNT arrays onto designated areas with desired patterns. With this transfer process as a starting point, pointers for further improving the quality of the transferred CNTs are also discussed in this paper.
APA, Harvard, Vancouver, ISO, and other styles
15

Lewis, Ruthan. "A Method for Measuring the Effect of Grip Surface on Torque Production during Hand/Arm Rotation." Proceedings of the Human Factors Society Annual Meeting 31, no. 8 (September 1987): 898–900. http://dx.doi.org/10.1177/154193128703100811.

Full text
Abstract:
Control of a manually handled object may be dependent on a variety of factors. Among these are frictional properties and geometry of the surfaces in contact with each other, position and alignment of the object and the operator, strength of the operator, etc. Control of the object is pertinent to properly direct the object or tool and to minimize the effort required of the operator during its use (i.e. by coordinating the mechanical advantage of the object and the operator). Evaluation of this feature may then help to improve the design and intent of the object or tool. The particular interface of interest in this presentation is the type of surface to be gripped and rotated by the space-gloved hand during a simulation of an on-orbit construction technique. An isokinetic method has been used to examine the effect of surface-type on performance measures including torque production, position of the peak torque, and angular distance rotated. The methodology supported a realistic viewing and simulation of the actual technique, yet also allowed controlled experimentation of the scenario with usable results characterizing each surface-type. The technique may be varied according to the application, and will be described.
APA, Harvard, Vancouver, ISO, and other styles
16

Petrič, Marko, Markus Frank, Frank Gaede, and André Sailer. "New Developments in DD4hep." EPJ Web of Conferences 214 (2019): 02037. http://dx.doi.org/10.1051/epjconf/201921402037.

Full text
Abstract:
For a successful experiment, it is of utmost importance to provide a consistent detector description. This is also the main motivation behind DD4hep, which addresses detector description in a broad sense including the geometry and the materials used in the device, and additional parameters describing, e.g., the detection techniques, constants required for alignment and calibration, description of the readout structures and conditions data. An integral part of DD4hep is DDG4 which is a powerful tool that converts arbitrary DD4hep detector geometries to Geant4 and provides access to all Geant4 action stages. It is equipped with a comprehensive plugins suite that includes handling of different IO formats; Monte Carlo truth linking and a large set of segmentation and sensitive detector classes, allowing the simulation of a wide variety of detector technologies. In the following, recent developments in DD4hep/DDG4 like the addition of a ROOT based persistency mechanism for the detector description and the development of framework support for DDG4 are highlighted. Through this mechanism an experiment’s data processing framework can interface its essential tools to all DDG4 actions. This allows for simple integration of DD4hep into existing experiment frameworks.
APA, Harvard, Vancouver, ISO, and other styles
17

Wolkowski, Bailey, Elisabeth Snead, Michal Wesolowski, Jaswant Singh, Murray Pettitt, Rajni Chibbar, Seyedali Melli, and James Montgomery. "Assessment of freeware programs for the reconstruction of tomography datasets obtained with a monochromatic synchrotron-based X-ray source." Journal of Synchrotron Radiation 22, no. 4 (June 24, 2015): 1130–38. http://dx.doi.org/10.1107/s1600577515008437.

Full text
Abstract:
Synchrotron-based in-line phase-contrast computed tomography (PC-CT) allows soft tissue to be imaged with sub-gross resolution and has potential to be used as a diagnostic tool. The reconstruction and processing of in-line PC-CT datasets is a computationally demanding task; thus, an efficient and user-friendly software program is desirable. Four freeware programs (NRecon, PITRE, H-PITRE and Athabasca Recon) were compared for the availability of features such as dark- and flat-field calibration, beam power normalization, ring artifact removal, and alignment tools for optimizing image quality. An in-line PC-CT projection dataset (3751 projections, 180° rotation, 10.13 mm × 0.54 mm) was collected from a formalin-fixed canine prostate at the Biomedical Imaging and Therapy Bending Magnet (BMIT-BM) beamline of the Canadian Light Source. This dataset was processed with each of the four software programs and the usability of each program was evaluated. Efficiency was assessed by how each program maximized computer processing power during computation. Athabasca Recon had the least-efficient memory usage, the least user-friendly interface, and lacked a ring artifact removal feature. NRecon, PITRE and H-PITRE produced similar quality images, but the Athabasca Recon reconstruction suffered from the lack of a native ring remover algorithm. The 64-bit version of NRecon uses GPU (graphics processor unit) memory for accelerated processing and is user-friendly, but does not provide parameters necessary for in-line PC-CT data, such as dark-field and flat-field correction and beam power normalization. PITRE has many helpful features and tools, but lacks a comprehensive user manual and help section. H-PITRE is a condensed version of PITRE and maximizes computer memory for efficiency. To conclude, NRecon has fewer image-processing tools than PITRE and H-PITRE, but is ideal for less experienced users due to its simple user interface. Based on the quality of reconstructed images, efficient use of computer memory and parameter availability, H-PITRE was the preferred of the four programs compared.
APA, Harvard, Vancouver, ISO, and other styles
18

Jonkers, Henk, Marc Lankhorst, René van Buuren, Stijn Hoppenbrouwers, Marcello Bonsangue, and Leendert van der Torre. "Concepts for Modeling Enterprise Architectures." International Journal of Cooperative Information Systems 13, no. 03 (September 2004): 257–87. http://dx.doi.org/10.1142/s0218843004000985.

Full text
Abstract:
A coherent description of enterprise architecture provides insight, enables communication among stakeholders and guides complicated change processes. Unfortunately, so far no enterprise architecture description language exists that fully enables integrated enterprise modeling, because for each architectural domain, architects use their own modeling techniques and concepts, tool support, visualization techniques, etc. In this paper, we outline such an integrated language and we identify and study concepts that relate architectural domains. In our language, concepts for describing the relationships between architecture descriptions at the business, application, and technology levels play a central role, related to the ubiquitous problem of business-ICT alignment, whereas for each architectural domain we conform to existing languages or standards such as UML. In particular, usage of services offered by one layer to another plays an important role in relating the behaviour aspects of the layers. The structural aspects of the layers are linked through the interface concept, and the information aspects through realization relations.
APA, Harvard, Vancouver, ISO, and other styles
19

Gonnella, Giorgio, Niklas Niehus, and Stefan Kurtz. "GfaViz: flexible and interactive visualization of GFA sequence graphs." Bioinformatics 35, no. 16 (December 31, 2018): 2853–55. http://dx.doi.org/10.1093/bioinformatics/bty1046.

Full text
Abstract:
Summary: The graphical fragment assembly (GFA) formats are emerging standard formats for the representation of sequence graphs. Although GFA 1 primarily targeted assembly graphs, the newer GFA 2 format introduces several features which make it suitable for representing other kinds of information, such as scaffolding graphs, variation graphs, alignment graphs and colored metagenomic graphs. Here, we present GfaViz, an interactive graphical tool for the visualization of sequence graphs in GFA format. The software supports all new features of GFA 2 and introduces conventions for their visualization. The user can choose between two different layouts and multiple styles for representing single elements or groups. All customizations can be stored in custom tags of the GFA format itself, without requiring external configuration files. Stylesheets are supported for storing standard configuration options for groups of files. The visualizations can be exported to raster and vector graphics formats. A command line interface allows for batch generation of images.
Availability and implementation: GfaViz is available at https://github.com/ggonnella/gfaviz.
Supplementary information: Supplementary data are available at Bioinformatics online.
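For readers who want to script against the same format, the two core GFA 1 record types are tab-separated and easy to read directly. A deliberately minimal parser (GFA 2 and the full tag set, which GfaViz supports, are omitted here):

    def read_gfa(path):
        """Collect segments and links from the core GFA 1 record types."""
        segments, links = {}, []
        with open(path) as fh:
            for line in fh:
                fields = line.rstrip("\n").split("\t")
                if fields[0] == "S":          # S <name> <sequence> [tags]
                    segments[fields[1]] = fields[2]
                elif fields[0] == "L":        # L <from> <ori> <to> <ori> <overlap>
                    links.append((fields[1], fields[2], fields[3], fields[4]))
        return segments, links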
APA, Harvard, Vancouver, ISO, and other styles
20

Merlin, Bruno, Jorianne Thyeska Castro Alves, Pablo Henrique Caracciolo Gomes de Sá, Mônica Silva de Oliveira, Larissa Maranhão Dias, Gislenne da Silva Moia, Victória Cardoso dos Santos, and Adonney Allan de Oliveira Veras. "CODON—Software to manual curation of prokaryotic genomes." PLOS Computational Biology 17, no. 3 (March 31, 2021): e1008797. http://dx.doi.org/10.1371/journal.pcbi.1008797.

Full text
Abstract:
Genome annotation conceptually consists of inferring and assigning biological information to gene products. Over the years, numerous pipelines and computational tools have been developed to automate this task and assist researchers in gaining knowledge about the genes under study. However, even with these technological advances, manual annotation or manual curation is still necessary, whereby the information attributed to the gene products is verified and enriched. Despite being called the gold-standard process for depositing data in a biological database, the task of manual curation requires significant time and effort from researchers, who sometimes have to parse through numerous products in various public databases. To assist with this problem, we present CODON, a tool for manual curation of genomic data that is capable of performing the prediction and annotation process. This software makes use of a finite state machine in the prediction process and automatically annotates products based on information obtained from the Uniprot database. CODON is equipped with a simple and intuitive graphical interface that assists in manual curation, enabling the user to make decisions based on information such as identity, length of the alignment, and name of the organism in which the product obtained a match. Further, visual analysis of all matches found in the database is possible, which significantly aids the curation task, considering that the user has at hand all the information available for a given product. An analysis performed on eleven organisms was used to test the efficiency of this tool by comparing the results of prediction and annotation through CODON to ones from the NCBI and RAST platforms.
APA, Harvard, Vancouver, ISO, and other styles
21

Suchomel, Matthew, Gregory Halder, and Lynn Ribaud. "Synchrotron powder diffraction simplified; management of an APS mail-in program." Acta Crystallographica Section A Foundations and Advances 70, a1 (August 5, 2014): C789. http://dx.doi.org/10.1107/s2053273314092109.

Full text
Abstract:
Synchrotrons have revolutionized powder diffraction. They enable rapid collection of data with tremendous angular resolution and exceptional statistics. High-resolution powder diffraction beamlines employing multiple single-crystal analyzer detectors routinely reveal subtle crystallographic distortions undetectable on other powder instruments, and are an important tool at most modern synchrotrons for structural studies of a diverse range of materials. Beamline 11-BM at the Advanced Photon Source (APS) is a dedicated high-resolution (ΔQ/Q ~2×10-4) powder diffraction instrument which uses vertical and horizontal beam focusing and a counting system consisting of twelve perfect-crystal analyzers paired with scintillator detectors. This APS beamline supports both traditional on-site experiments and a highly successful rapid-access mail-in program. The mail-in program has greatly simplified access for a growing user community (> 250 in 2013) to world-class synchrotron-quality powder data for their research and resulting publications (> 100 11-BM citations in 2013). The presentation will provide an overview of 11-BM's unique mail-in program, both from the viewpoint of an external remote user and by highlighting the numerous alignment, calibration, correction and merging software routines needed to efficiently and accurately reduce the numerous multi-bank detector datasets associated with a high-throughput user program. An integrated web interface has been developed to serve as a user-friendly relational database front end for tracking samples and datasets throughout all stages of the measurements, from the initial user request to sample disposal. The database and software tools critical for this high-throughput synchrotron powder diffraction program will be discussed in detail. More information about 11-BM and its mail-in program can be found on the beamline webpage: http://11bm.xray.aps.anl.gov
APA, Harvard, Vancouver, ISO, and other styles
22

Demelo, Jonathan, and Kamran Sedig. "Forming Cognitive Maps of Ontologies Using Interactive Visualizations." Multimodal Technologies and Interaction 5, no. 1 (January 11, 2021): 2. http://dx.doi.org/10.3390/mti5010002.

Full text
Abstract:
Ontology datasets, which encode the expert-defined complex objects mapping the entities, relations, and structures of a domain ontology, are increasingly being integrated into the performance of challenging knowledge-based tasks. Yet, it is hard to use ontology datasets within our tasks without first understanding the ontology which it describes. Using visual representation and interaction design, interactive visualization tools can help us learn and develop our understanding of unfamiliar ontologies. After a review of existing tools which visualize ontology datasets, we find that current design practices struggle to support learning tasks when attempting to build understanding of the ontological spaces within ontology datasets. During encounters with unfamiliar spaces, our cognitive processes align with the theoretical framework of cognitive map formation. Furthermore, designing encounters to promote cognitive map formation can improve our performance during learning tasks. In this paper, we examine related work on cognitive load, cognitive map formation, and the use of interactive visualizations during learning tasks. From these findings, we formalize a set of high-level design criteria for visualizing ontology datasets to promote cognitive map formation during learning tasks. We then perform a review of existing tools which visualize ontology datasets and assess their interface design towards their alignment with the cognitive map framework. We then present PRONTOVISE (PRogressive ONTOlogy VISualization Explorer), an interactive visualization tool which applies the high-level criteria within its design. We perform a task-based usage scenario to illustrate the design of PRONTOVISE. We conclude with a discussion of the implications of PRONTOVISE and its use of the criteria towards the design of interactive visualization tools which help us develop understanding of the ontological space within ontology datasets.
APA, Harvard, Vancouver, ISO, and other styles
23

Zerihun, Mehari B., Fabrizio Pucci, Emanuel K. Peter, and Alexander Schug. "pydca v1.0: a comprehensive software for direct coupling analysis of RNA and protein sequences." Bioinformatics 36, no. 7 (November 28, 2019): 2264–65. http://dx.doi.org/10.1093/bioinformatics/btz892.

Full text
Abstract:
Motivation: The ongoing advances in sequencing technologies have provided a massive increase in the availability of sequence data. This made it possible to study the patterns of correlated substitution between residues in families of homologous proteins or RNAs and to retrieve structural and stability information. Direct coupling analysis (DCA) infers coevolutionary couplings between pairs of residues indicating their spatial proximity, making such information a valuable input for subsequent structure prediction.
Results: Here, we present pydca, a standalone Python-based software package for the DCA of protein- and RNA-homologous families. It is based on two popular inverse statistical approaches, namely, the mean-field and the pseudo-likelihood maximization, and is equipped with a series of functionalities that range from multiple sequence alignment trimming to contact map visualization. Thanks to its efficient implementation, features and user-friendly command line interface, pydca is a modular and easy-to-use tool that can be used by researchers with a wide range of backgrounds.
Availability and implementation: pydca can be obtained from https://github.com/KIT-MBS/pydca or from the Python Package Index under the MIT License.
Supplementary information: Supplementary data are available at Bioinformatics online.
APA, Harvard, Vancouver, ISO, and other styles
24

Rivera, Gibran, and Andrew M. Cox. "An actor-network theory perspective to study the non-adoption of a collaborative technology intended to support online community participation." Academia Revista Latinoamericana de Administración 29, no. 3 (August 1, 2016): 347–65. http://dx.doi.org/10.1108/arla-02-2015-0039.

Full text
Abstract:
Purpose: The purpose of this paper is to explore the value of actor-network theory as an approach to explain the non-adoption of collaborative technology.
Design/methodology/approach: The notion of translation and related concepts pertaining to actor-network theory are used to explore the case of non-participation in an organizational online community. Semi-structured interviews were conducted with 30 HR professionals belonging to a multi-campus university system in Mexico.
Findings: The study shows that participation in the online community did not occur as expected by those promoting its use. An initial inductive analysis showed that the factors that undermine participation had to do with the interface design of the technology and the individual motivations and benefits derived from participation. A second analysis, using ANT, showed how processes of negotiation, conflict, enrolment, alignment, and betrayal that occurred during the emergence and evolution of the new network played a critical role in technology adoption, leading to the dissolution of the initiative to adopt the collaborative technology.
Originality/value: The study shows the value of ANT as a tool to better understand the adoption and use of collaborative technology. The analysis goes beyond existing explanations of participation, which tend to focus attention on matters such as the interface design or the personal motivations and benefits derived from participation. It does so by moving away from solely looking at what occurs within the boundaries of a community and understanding the context within which it is being introduced. It prompts the analysis of moments of problematization, interessement, enrolment, and mobilization to explore the adoption process, including the role of non-human actors.
APA, Harvard, Vancouver, ISO, and other styles
25

Lin, Yu-Shiang, Chun-Yuan Lin, Hsiao-Chieh Chi, and Yeh-Ching Chung. "Multiple Sequence Alignments with Regular Expression Constraints on a Cloud Service System." International Journal of Grid and High Performance Computing 5, no. 3 (July 2013): 55–64. http://dx.doi.org/10.4018/jghpc.2013070105.

Full text
Abstract:
Multiple sequence alignments with constraints are a priority concern in computational biology. Constrained sequence alignment incorporates the domain knowledge of biologists into sequence alignments such that the user-specified residues/segments are aligned together in the alignment results. A series of constrained multiple sequence alignment tools have been developed in the literature over the recent decade. GPU-REMuSiC is the most advanced method with regular expression constraints, in which graphics processing units (GPUs) with CUDA are used. GPU-REMuSiC can achieve a speedup of 29x in overall computation time according to the experimental results. However, the execution environment of GPU-REMuSiC must first be constructed, and setting it up is a hurdle for biologists. Therefore, this work presents an intuitive, friendly user interface to GPU-REMuSiC for a potential cloud server with GPUs, called Cloud GPU-REMuSiC. Implementing the user interface via a network allows the input data to be transmitted to a remote server without complex, cumbersome setup on a local host. Finally, the alignment results can be obtained from the remote cloud server with GPUs. Cloud GPU-REMuSiC is highly promising as an online application that is accessible without time or location constraints.
APA, Harvard, Vancouver, ISO, and other styles
26

Kim, Won S. "Virtual Reality Calibration and Preview/Predictive Displays for Telerobotics." Presence: Teleoperators and Virtual Environments 5, no. 2 (January 1996): 173–90. http://dx.doi.org/10.1162/pres.1996.5.2.173.

Full text
Abstract:
A virtual reality (VR) calibration technique of matching a virtual environment of simulated three-dimensional (3-D) graphic models with actual camera views of the remote site task environment has been developed. This VR calibration enables high-fidelity preview/predictive displays with calibrated graphics overlay on live video. Reliable and accurate calibration is achieved by operator-interactive camera calibration and object localization procedures based on new linear/nonlinear least-squares algorithms that can handle multiple-camera views. Since the object pose becomes known through the VR calibration, the operator can now effectively use the semiautomatic computer-generated trajectory mode in addition to the manual teleoperation mode. The developed VR calibration technique and the resultant high fidelity preview/predictive displays were successfully utilized in a recent JPL/NASA-GSFC (Jet Propulsion Laboratory/Goddard Space Flight Center) telerobotic servicing demonstration. Preview/predictive displays were very useful for both noncontact and contact tasks, providing an effective VR interface with immediate visual prediction/verification to the operator. The positioning alignment accuracy achieved using four-camera views in inserting a tool into the ORU hole was 0.51 cm on the average with a 1.07 cm maximum error at 95% confidence level. Results also indicate that the object localization with two well-chosen, e.g., near orthogonal camera views, could be nearly as accurate as that with four-camera views.
APA, Harvard, Vancouver, ISO, and other styles
27

Ünlü, Hilmi. "A thermoelastic model for strain effects on bandgaps and band offsets in heterostructure core/shell quantum dots." European Physical Journal Applied Physics 86, no. 3 (June 2019): 30401. http://dx.doi.org/10.1051/epjap/2019180350.

Full text
Abstract:
A thermoelastic model is proposed to determine elastic strain effects on the electronic properties of spherical Type I and Type II heterostructure core/shell quantum dots (QDs) as a function of the dimensions of the constituent semiconductors at any temperature. The proposed model takes into account the differences between the lattice constants, linear expansion coefficients and anisotropy of the elastic moduli (Young's modulus and Poisson's ratio) of the constituent semiconductors. In analogy to lattice mismatch, we introduce the so-called elastic anisotropy mismatch in heterostructures. Compressive strain acting on the core (shell) side of the heterointerfaces in CdSe/CdS, CdSe/ZnS, and ZnSe/ZnS QDs increases (decreases) as the shell diameter is increased, which causes an increase (decrease) in the core bandgap as the shell (core) diameter is increased in these nanostructures. Furthermore, there is a parabolic increase in the conduction band offsets and core bandgaps of CdSe/CdS, CdSe/ZnS, and ZnSe/ZnS QDs, and a decrease in the conduction band offset and core bandgap of ZnSe/CdS QDs, as the core (shell) diameter increases for a fixed shell (core) diameter. Comparison shows that using isotropic elastic moduli in determining band offsets and core bandgaps gives better agreement with experiment than anisotropic elastic moduli for the core bandgaps of CdSe/CdS, CdSe/ZnS, ZnSe/ZnS, and ZnSe/CdS core/shell QDs. Furthermore, we show that the strain-modified two-band effective mass approximation can be used to determine band offsets from measured core bandgaps in core/shell heterostructure QDs with Type II interface band alignment. The excellent agreement between predicted and measured core bandgaps in CdSe- and ZnSe-based core/shell QDs suggests that the proposed model can be a good design tool for process simulation of core/shell heterostructure QDs.
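For orientation, the temperature-dependent lattice mismatch that drives the interface strain in such models is conventionally written as follows, where a_i are the lattice constants and α_i the linear expansion coefficients (these are standard definitions; the notation is ours, not necessarily the paper's):

    f(T) = \frac{a_{\mathrm{shell}}(T) - a_{\mathrm{core}}(T)}{a_{\mathrm{core}}(T)},
    \qquad a_i(T) \approx a_i(T_0)\,\bigl[1 + \alpha_i\,(T - T_0)\bigr]

The thermoelastic aspect of the model enters precisely through this temperature dependence of the mismatch.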
APA, Harvard, Vancouver, ISO, and other styles
28

Ramadan, Tarek. "System-Level, Post-Layout Electrical Analysis for High-Density Advanced Packaging." Additional Conferences (Device Packaging, HiTEC, HiTEN, and CICMT) 2019, DPC (January 1, 2019): 000856–77. http://dx.doi.org/10.4071/2380-4491-2019-dpc-presentation_wp1_015.

Full text
Abstract:
INTRODUCTION
High-density advanced packaging (HDAP) continues to be the promising "More" in the "More than Moore" approach for improved form factor, functionality, and integration of multiple dies built using different technology nodes. HDAP offerings from outsourced assembly and test (OSAT) companies and foundries are continuously increasing. However, the full commercial productization of such offerings will require the assurance of both an acceptable yield and correct (as intended) functionality. This assurance, like that for integrated circuits (ICs), will come from the availability of proven and qualified electronic design automation (EDA) tools and flows that can be used by the design houses to build HDAPs with the confidence that they are compliant with the foundry/OSAT requirements and recommendations. The need for and general concept of assembly design kits (ADKs) that provide proven, qualified flows for HDAPs has been previously discussed in multiple white papers. In addition, there have been analyses of the need for assembly-level layout vs. schematic (LVS) verification for HDAPs. Best practices for an assembly-level LVS process have been proposed, including the required inputs (data, formats, etc.), and likely hurdles and potential errors have been highlighted. There has even been discussion of how parasitic extraction could be achieved for packages. However, as HDAP technologies and flows mature, system-level designers want to know if package design rule checking (DRC), assembly-level LVS, and layout vs. layout (LVL) verification (die-to-package alignment, scaling, orientation, etc.) are sufficient to guarantee correct functionality and successful manufacturing of the HDAP. While this question may depend on how complicated the HDAP is, in general, the answer (for now) is no. As HDAP technologies become more and more similar to IC technologies, it is clear that, although the physical verification steps for HDAP may be considered good progress, they are only part of a much more comprehensive flow, one that must account for a more in-depth, system-level electrical analysis. Of course, at the same time, expanded EDA tool support is required to ensure fast, accurate, automated flows that ensure package designers can meet their market schedules and expectations.
HDAP POST-LAYOUT ELECTRICAL ANALYSIS
In the case of an HDAP design, the foundry/OSAT expects that each component is designed and validated to meet the required HDAP constraints and specifications. For an analog-based flow, the designer must simulate the HDAP system circuitry, including parasitics, to ensure it meets the intended performance specifications. For a digital-based flow, the designer must run static timing analysis (STA) on the complete HDAP system, including parasitics, to ensure it meets the overall system timing budget. From an EDA perspective, building an automated flow to support these checks/analyses provides assurance that these processes can occur in a consistent, repeatable manner while ensuring accuracy and minimizing runtime. In general, EDA approaches take one of two paths.
SINGLE COCKPIT
In the cockpit approach, an EDA supplier builds a single simulator infrastructure to support HDAP circuit simulation, parasitic extraction (PEX), and static timing analysis (STA). Although a single interface seems convenient, it forces the designer to use the same design tool for all components at all levels (die and package). This approach may be too restrictive, given that HDAP design and verification typically require the involvement of multiple groups with varying backgrounds and tool preferences. Although this approach would be useful when building "fully live" heterogeneous HDAPs (i.e., both die and package are under development simultaneously, and can both be edited for performance), this is rarely the case. More commonly, known good dies (which have already been taped out) are used to build an HDAP.
TOOL-AGNOSTIC
In the tool-agnostic approach, an EDA supplier enables the user to construct the needed system-level connectivity of the HDAP (including parasitics), regardless of which design tools are used to build any one die or the package. Once the system-level connectivity is available, it can be exported in the required format to any circuit simulation/STA tool to simulate or analyze the entire HDAP system. This approach introduces minimum disruption to existing tools/methodologies used for die and package design. This paper discusses the implementation of a system-level parasitic netlist process for the HDAP using the tool-agnostic approach.
APA, Harvard, Vancouver, ISO, and other styles
29

Capuz, Giovanni, Melina Lofrano, Carine Gerets, Fabrice Duval, Pieter Bex, Jaber Derakhshandeh, Kris Vanstreels, Alain Phommahaxay, Eric Beyne, and Andy Miller. "A Novel Method for Characterization of Ultralow Viscosity NCF Layers Using TCB for 3D Assembly." Journal of Microelectronics and Electronic Packaging 18, no. 1 (January 1, 2021): 12–20. http://dx.doi.org/10.4071/imaps.1391366.

Full text
Abstract:
For die-to-wafer (D2W) stacking of high-density interconnects and fine-pitch microbumps, underfill serves to fill the spaces between microbumps for protection and reliability. Among the different types of underfill, nonconductive film (NCF) has the advantages of fillet and volume control. However, one of the challenges is solder joint wetting. An NCF must have good embedded-flux activation to mitigate Cu UBM pad oxidation due to the repeated TCB cycles that accelerate oxidation on neighboring dice. The flux in the NCF also helps in wetting the solder bumps. To realize efficient solder wetting, one must also understand the NCF deformation quality, which is a function of its viscosity. This parameter has a direct impact on the deformation of the solder bumps. A high-viscosity NCF is difficult to deform, preventing solder contact with the pad at TCB reflow temperature; a high bond force is then required, which can reduce alignment accuracy. A low-viscosity NCF requires a low bond force, but solder joint wetting is a challenge, with excessive squeeze-out due to fast and instantaneous deformation. In this article we demonstrate a creative methodology for NCF material characterization, considering the factors of NCF viscosity, deformation, and solder squeeze-out. We use TCB tool position-tracking data to define the deformation curve of the NCF as a function of temperature and time at a very fast TCB profile. We use the NCF viscosity curve as a reference in relation to the actual deformation, and predict dynamic deformation in three different configurations. Deformation test configurations were performed on chips with and without microbumps bonded to a rigid flat glass surface and to a bottom Cu UBM pad. The experiments were performed with different heating ramp rates targeting an interface temperature above Sn reflow, ~250°C. As validation, we applied the optimized TCB process (force, temperature, and ramp rate) on a test vehicle with 20 and 40 μm pitch daisy chains and obtained very good connectivity with good joint and IMC formation.
Styles APA, Harvard, Vancouver, ISO, etc.
31

Sobhy, Haitham, et Philippe Colson. « Gemi : PCR Primers Prediction from Multiple Alignments ». Comparative and Functional Genomics 2012 (2012) : 1–5. http://dx.doi.org/10.1155/2012/783138.

Texte intégral
Résumé :
Designing primers and probes for the polymerase chain reaction (PCR) is a preliminary and critical step that requires the identification of highly conserved regions in a given set of sequences. This task can be challenging if the targeted sequences display a high level of diversity, as is frequently encountered in microbiological studies. We developed Gemi, an automated, fast, and easy-to-use bioinformatics tool with a user-friendly interface to design primers and probes from multiple aligned sequences. This tool can be used for real-time and conventional PCR and can efficiently handle large sets of long sequences.
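The core step the abstract describes, scanning an alignment for runs of conserved columns long enough to seat a primer, can be illustrated with a short sketch. This is a simplified illustration and not Gemi's algorithm; the file name "aln.fasta" and the 18-column minimum are placeholder assumptions, and Biopython is used only for parsing.

    from Bio import AlignIO

    # Simplified conserved-region scan (not Gemi's actual method).
    aln = AlignIO.read("aln.fasta", "fasta")
    min_len = 18           # assumed minimum primer length
    run_start = None       # start column of the current conserved run
    for i in range(aln.get_alignment_length()):
        column = aln[:, i]                              # one column as a string
        conserved = "-" not in column and len(set(column)) == 1
        if conserved and run_start is None:
            run_start = i
        elif not conserved and run_start is not None:
            if i - run_start >= min_len:
                print(f"conserved region: columns {run_start}-{i - 1}")
            run_start = None
    if run_start is not None and aln.get_alignment_length() - run_start >= min_len:
        print(f"conserved region: columns {run_start}-{aln.get_alignment_length() - 1}")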
Styles APA, Harvard, Vancouver, ISO, etc.
32

Avramidis, Eleftherios. « Qualitative : Python Tool for MT Quality Estimation Supporting Server Mode and Hybrid MT ». Prague Bulletin of Mathematical Linguistics 106, no 1 (1 octobre 2016) : 147–58. http://dx.doi.org/10.1515/pralin-2016-0014.

Texte intégral
Résumé :
Abstract We present the development contributions of the last two years to our open-source Python Quality Estimation tool, a tool that can function in both experiment mode and online web-service mode. The latest version provides a new MT interface, which communicates with SMT and rule-based translation engines and supports on-the-fly sentence selection. Additionally, we present an improved machine learning interface allowing more efficient communication with several state-of-the-art toolkits. Additions also include a more informative training process, a Python re-implementation of the QuEst baseline features, a new LM toolkit integration, an additional PCFG parser, and alignments of syntactic nodes.
Styles APA, Harvard, Vancouver, ISO, etc.
33

Ankenbrand, Markus J., Sonja Hohlfeld, Thomas Hackl et Frank Förster. « AliTV—interactive visualization of whole genome comparisons ». PeerJ Computer Science 3 (12 juin 2017) : e116. http://dx.doi.org/10.7717/peerj-cs.116.

Texte intégral
Résumé :
Whole genome alignments and comparative analysis are key methods in the quest of unraveling the dynamics of genome evolution. Interactive visualization and exploration of the generated alignments, annotations, and phylogenetic data are important steps in the interpretation of the initial results. Limitations of existing software inspired us to develop our new tool AliTV, which provides interactive visualization of whole genome alignments. AliTV reads multiple whole genome alignments or automatically generates alignments from the provided data. Optional feature annotations and phylogenetic information are supported. The user-friendly, web-browser based and highly customizable interface allows rapid exploration and manipulation of the visualized data as well as the export of publication-ready high-quality figures. AliTV is freely available at https://github.com/AliTVTeam/AliTV.
Styles APA, Harvard, Vancouver, ISO, etc.
34

Kwon, Tae Ho, Sang I. Park, Young-Hoon Jang et Sang-Ho Lee. « Design of Railway Track Model with Three-Dimensional Alignment Based on Extended Industry Foundation Classes ». Applied Sciences 10, no 10 (25 mai 2020) : 3649. http://dx.doi.org/10.3390/app10103649.

Texte intégral
Résumé :
Building information modeling (BIM) has been widely applied in conjunction with industry foundation classes (IFC) for buildings and infrastructure such as railways. However, BIM technology presents limitations that make designing three-dimensional (3D) alignment-based information models difficult; the time and effort required to create a railway track model are therefore increased, while the reliability of the model is reduced. In this study, we propose a methodology for developing an alignment-based independent railway track model and extended IFC models containing railway alignment information. The developed algorithm allows a discontinuous structure to be designed using BIM software tools. The 3D alignment information connects the different BIM software tools, and the classification system and IFC schema for expressing railway tracks are extended; the classification system is fundamental for assigning IFC entities to railway components. Spatial and hierarchical entities were created through a developed user interface. The proposed methodology was implemented in an actual railway track test, confirming that IFC-based railway track information, including its 3D alignment information, can be managed. The proposed methodology can reduce modeling time and can be extended to other alignment-based structures, such as roads.
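To make the notion of alignment-based placement concrete, the following sketch samples 3D coordinates along a toy alignment built from a circular horizontal curve and a constant vertical grade. It only illustrates stationing along a 3D alignment under assumed parameters; it does not reflect the paper's IFC implementation.

    import math

    # Toy 3D alignment: circular horizontal curve of radius R plus a
    # constant vertical grade; s is the chainage (distance along the curve).
    # R and grade are assumed example values, not from the paper.
    def alignment_point(s, R=500.0, grade=0.02):
        theta = s / R                    # swept arc angle in radians
        x = R * math.sin(theta)          # horizontal position on the arc
        y = R * (1.0 - math.cos(theta))
        z = grade * s                    # linear vertical profile
        return (x, y, z)

    # Station a track component every 10 m along the first 50 m
    for s in range(0, 51, 10):
        print(s, alignment_point(float(s)))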
Styles APA, Harvard, Vancouver, ISO, etc.
35

Capuz, Giovanni, Melina Lofrano, Carine Gerets, Fabrice Duval, Pieter Bex, Jaber Derakhshandeh, Kris Vanstreels, Alain Phommahaxay, Eric Beyne et Andy Miller. « A novel method for characterization of Ultra Low Viscosity NCF layers using TCB for 3D Assembly ». International Symposium on Microelectronics 2020, no 1 (1 septembre 2020) : 000185–91. http://dx.doi.org/10.4071/2380-4505-2020.1.000185.

Texte intégral
Résumé :
Abstract For die-to-wafer (D2W) bonding of high-density interconnects and fine-pitch microbumps, suitable underfill materials must be developed and characterized. In general, underfill serves to fill the spaces between microbumps for protection and reliability. Among the different types of underfill, nonconductive film (NCF) has the advantages of fillet and volume control and a built-in flux to aid wetting. However, challenges arise for thin dies and fine-pitch microbumps regarding film lamination, voiding, transparency, filler percentage, dicing compatibility and, most importantly, deformation behavior and the possibility of improving solder joint wetting. In D2W stacking with Sn solder bump interconnects to Cu UBM, Cu pad oxidation is a major concern because repeated TCB cycles accelerate oxidation on neighboring dies; process mitigation is needed to help reduce the oxidation, but even so, an NCF must have good embedded-flux activation. Another key factor for an efficient TCB process with good solder joint wetting is the NCF deformation behavior, which is a function of its viscosity. This parameter has a direct impact on the deformation of the solder bumps. A high-viscosity NCF is difficult to deform, preventing solder contact with the pad at TCB reflow temperature; a high bond force is then required, which can reduce alignment accuracy, and filler entrapment is a further concern for high-filler-loading, high-viscosity NCFs. For a low-viscosity NCF, careful process characterization is needed in TCB with low bond force: solder joint wetting suffers from excessive squeeze-out due to fast, near-instantaneous deformation. With low viscosity, not only should the applied bond force be low, but the deformation behavior must also be understood to enable an effective NCF. In this paper we demonstrate a creative methodology for NCF material characterization that considers NCF viscosity, deformation, and solder squeeze-out. Characterizing NCF viscosity under fast TCB profiles is challenging, given the deformation behavior of both the NCF itself and the solder bumps that shapes the squeeze-out and wetting. We use TCB tool position tracking to define the deformation curve of the NCF film as a function of temperature and time under very fast TCB profiles. We use the material viscosity curve as a reference for the actual deformation and predict dynamic deformation within the TCB profile duration based on Reynolds' equation. The experiments were performed with different heating ramp rates, targeting an interface temperature above Sn reflow (~250°C). The deformation analysis is not limited to a thin film sandwiched between parallel plates: deformation tests were performed on chips with and without microbumps, against a rigid flat glass surface, and in combinations thereof, with the underfill deformation recorded in the readout of the TCB tool. As validation, we applied the optimized TCB process (force, temperature, and ramp rate) to a test vehicle with 20 and 40 μm pitch daisy chains and obtained close to 95% electrical yield with good joint and IMC formation. Cross-section SEM images show good wetting, revealing good activation of the built-in flux when the optimized TCB profile was used.
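For context on the Reynolds-equation prediction mentioned above: the classical parallel-plate squeeze-film solution, often called Stefan's equation, relates the bond force F to the film thickness h for a Newtonian film of viscosity \mu squeezed between circular plates of radius R. This is a standard textbook form, not necessarily the exact model used in the paper:

    F = \frac{3 \pi \mu R^{4}}{2 h^{3}} \left( -\frac{dh}{dt} \right)

Higher viscosity or a thinner film thus demands a sharply larger force for the same deformation rate, which matches the qualitative trade-off between high- and low-viscosity NCFs described in the abstract.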
Styles APA, Harvard, Vancouver, ISO, etc.
36

Thompson, J. « The CLUSTAL_X windows interface : flexible strategies for multiple sequence alignment aided by quality analysis tools ». Nucleic Acids Research 25, no 24 (15 décembre 1997) : 4876–82. http://dx.doi.org/10.1093/nar/25.24.4876.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
37

Weikle, Robert M., H. Li, A. Arsenovic, S. Nadri, L. Xie, M. F. Bauwens, N. Alijabbari, N. Scott Barker et A. W. Lichtenberger. « Micromachined Interfaces for Metrology and Packaging Applications in the Submillimeter-Wave Band ». Additional Conferences (Device Packaging, HiTEC, HiTEN, and CICMT) 2017, DPC (1 janvier 2017) : 1–36. http://dx.doi.org/10.4071/2017dpc-tha3_presentation2.

Texte intégral
Résumé :
The continued emergence of new terahertz devices has created a need for improved approaches to packaging, integration, and measurement tools for diagnostics and characterization in this portion of the spectrum. Rectangular waveguide has for many years been the primary transmission medium for terahertz and submillimeter-wave systems operating from 300 GHz to 1 THz, with the UG-387 flange the most common interface for mating waveguide components over this frequency range. Alignment of UG-387 flanges is accomplished with pins and alignment holes placed around the flange perimeter; under standard MIL-SPEC tolerances, misalignments of up to 6 mils (150 microns) are possible as a result of practical milling tolerances. With the emergence of vector network analyzers operating beyond 1 THz, such misalignment of waveguide mating flanges is not negligible and is recognized as a fundamental issue limiting calibration and measurement precision at frequencies greater than 300 GHz. In response to this issue, a number of new waveguide flange concepts have been investigated to reduce flange misalignment, and IEEE Standard P1785 was recently issued to recommend designs for waveguide interfaces at frequencies above 110 GHz. Among the new flange concepts being proposed are a modified UG-387 that utilizes tighter machining tolerances and the ring-centered flange, in which alignment is accomplished using a precision coupling ring that fits over raised bosses centered on each waveguide. This paper discusses the new interface concepts being developed to address waveguide flange misalignment, as well as emerging micromachined interconnects, calibration standards and heterogeneous integration methods that are being applied to implement low-loss, high-performance circuit architectures for the terahertz frequency range. Among the technologies described are (1) design and characterization methods for the new ring-centered waveguide standard, (2) micromachined waveguide components and calibration standards for the terahertz band, (3) silicon-based micromachined probe structures for direct-contact interfacing and metrology, and (4) epitaxial transfer of III-V semiconductor material onto high-resistivity silicon to realize a low-loss platform for integration of terahertz components. Details of the processing methods used to realize these components, as well as measurement techniques for assessing their performance, will be described.
Styles APA, Harvard, Vancouver, ISO, etc.
38

Katoh, Kazutaka, John Rozewicki et Kazunori D. Yamada. « MAFFT online service : multiple sequence alignment, interactive sequence choice and visualization ». Briefings in Bioinformatics 20, no 4 (6 septembre 2017) : 1160–66. http://dx.doi.org/10.1093/bib/bbx108.

Texte intégral
Résumé :
Abstract This article describes several features in the MAFFT online service for multiple sequence alignment (MSA). As a result of recent advances in sequencing technologies, huge numbers of biological sequences are available and the need for MSAs with large numbers of sequences is increasing. To extract biologically relevant information from such data, sophistication of algorithms is necessary but not sufficient. Intuitive and interactive tools for experimental biologists to semiautomatically handle large data are becoming important. We are working on development of MAFFT toward these two directions. Here, we explain (i) the Web interface for recently developed options for large data and (ii) interactive usage to refine sequence data sets and MSAs.
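For readers who script against MAFFT rather than using the web interface, the underlying command-line program writes the alignment to standard output; the sketch below drives it from Python. The file names are placeholders, and --auto (which lets MAFFT choose an alignment strategy suited to the data size) is a standard MAFFT option.

    import subprocess

    # Align sequences with the command-line MAFFT that backs the service.
    # "unaligned.fasta" / "aligned.fasta" are placeholder paths.
    with open("aligned.fasta", "w") as out:
        subprocess.run(["mafft", "--auto", "unaligned.fasta"],
                       stdout=out, check=True)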
Styles APA, Harvard, Vancouver, ISO, etc.
39

Tamura, Koichiro, Glen Stecher et Sudhir Kumar. « MEGA11 : Molecular Evolutionary Genetics Analysis Version 11 ». Molecular Biology and Evolution 38, no 7 (23 avril 2021) : 3022–27. http://dx.doi.org/10.1093/molbev/msab120.

Texte intégral
Résumé :
Abstract The Molecular Evolutionary Genetics Analysis (MEGA) software has matured to contain a large collection of methods and tools of computational molecular evolution. Here, we describe new additions that make MEGA a more comprehensive tool for building timetrees of species, pathogens, and gene families using rapid relaxed-clock methods. Methods for estimating divergence times and confidence intervals are implemented to use probability densities for calibration constraints for node-dating and sequence sampling dates for tip-dating analyses. They are supported by new options for tagging sequences with spatiotemporal sampling information, an expanded interactive Node Calibrations Editor, and an extended Tree Explorer to display timetrees. Also added is a Bayesian method for estimating neutral evolutionary probabilities of alleles in a species using multispecies sequence alignments and a machine learning method to test for the autocorrelation of evolutionary rates in phylogenies. The computer memory requirements for the maximum likelihood analysis are reduced significantly through reprogramming, and the graphical user interface has been made more responsive and interactive for very big data sets. These enhancements will improve the user experience, quality of results, and the pace of biological discovery. Natively compiled graphical user interface and command-line versions of MEGA11 are available for Microsoft Windows, Linux, and macOS from www.megasoftware.net.
Styles APA, Harvard, Vancouver, ISO, etc.
40

Passaro, Marco, Martina Martinovic, Valeria Bevilacqua, Elliot A. Hershberg, Grazisa Rossetti, Brian J. Beliveau, Raoul J. P. Bonnal et Massimiliano Pagani. « OligoMinerApp : a web-server application for the design of genome-scale oligonucleotide in situ hybridization probes through the flexible OligoMiner environment ». Nucleic Acids Research 48, W1 (20 avril 2020) : W332—W339. http://dx.doi.org/10.1093/nar/gkaa251.

Texte intégral
Résumé :
Abstract Fluorescence in situ hybridization (FISH) is a powerful single-cell technique that harnesses nucleic acid base pairing to detect the abundance and positioning of cellular RNA and DNA molecules in fixed samples. Recent technology development has paved the way to the construction of FISH probes entirely from synthetic oligonucleotides (oligos), allowing the optimization of thermodynamic properties together with the opportunity to design probes against any sequenced genome. However, comparatively little progress has been made in the development of computational tools to facilitate oligo design, and even less has been done to extend their accessibility. OligoMiner is an open-source, modular pipeline written in Python that introduces a novel method of assessing probe specificity, employing supervised machine learning to predict probe binding specificity from genome-scale sequence alignment information. However, because it lacks a graphical user interface (GUI), its use is restricted to those who are comfortable with command-line interfaces, potentially cutting many researchers off from this technology. Here, we present OligoMinerApp (http://oligominerapp.org), a web-based application that aims to extend the OligoMiner framework through the implementation of a smart and easy-to-use GUI and the introduction of new functionalities specially designed to make effective probe mining available to everyone.
Styles APA, Harvard, Vancouver, ISO, etc.
41

Acton, C., N. Bachman, B. Semenov et E. Wright. « SPICE TOOLS SUPPORTING PLANETARY REMOTE SENSING ». ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B4 (13 juin 2016) : 357–59. http://dx.doi.org/10.5194/isprs-archives-xli-b4-357-2016.

Texte intégral
Résumé :
NASA's "SPICE"* ancillary information system has gradually become the de facto international standard for providing scientists the fundamental observation geometry needed to perform photogrammetry, map making and other kinds of planetary science data analysis. SPICE provides position and orientation ephemerides of both the robotic spacecraft and the target body; target body size and shape data; instrument mounting alignment and field-of-view geometry; reference frame specifications; and underlying time system conversions. SPICE comprises not only data, but also a large suite of software, known as the SPICE Toolkit, used to access those data and subsequently compute derived quantities such as instrument viewing latitude/longitude, lighting angles, altitude, etc. In existence since the days of the Magellan mission to Venus, the SPICE system has continuously grown to better meet the needs of scientists and engineers. For example, originally the SPICE Toolkit was offered only in Fortran 77, but is now available in C, IDL, MATLAB, and Java Native Interface. SPICE calculations were originally available only using APIs (subroutines), but can now be executed using a client-server interface to a geometry engine. Originally SPICE "products" were only available in numeric form, but now SPICE data visualization is also available. The SPICE components are free of cost, license and export restrictions. Substantial tutorials and programming lessons help new users learn to employ SPICE calculations in their own programs. The SPICE system is implemented and maintained by the Navigation and Ancillary Information Facility (NAIF), a component of NASA's Planetary Data System (PDS). (* Spacecraft, Planet, Instrument, Camera-matrix, Events)
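As a usage illustration, the community-maintained Python wrapper SpiceyPy (a third-party binding not listed among the official Toolkit languages in the abstract) exposes the same Toolkit routines; the kernel path and epoch below are placeholders.

    import spiceypy as spice

    # Load a meta-kernel naming the leapsecond and ephemeris kernels to
    # use; "kernels.tm" is a placeholder path.
    spice.furnsh("kernels.tm")

    # Convert a UTC epoch to ephemeris time (TDB seconds past J2000)
    et = spice.str2et("2016-06-13T00:00:00")

    # Position of Mars as seen from Earth in the J2000 frame, corrected
    # for light time and stellar aberration ("LT+S")
    pos, light_time = spice.spkpos("MARS", et, "J2000", "LT+S", "EARTH")
    print(pos, light_time)

    spice.kclear()  # unload all kernels when done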
Styles APA, Harvard, Vancouver, ISO, etc.
42

Acton, C., N. Bachman, B. Semenov et E. Wright. « SPICE TOOLS SUPPORTING PLANETARY REMOTE SENSING ». ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B4 (13 juin 2016) : 357–59. http://dx.doi.org/10.5194/isprsarchives-xli-b4-357-2016.

Texte intégral
Résumé :
NASA's "SPICE"* ancillary information system has gradually become the de facto international standard for providing scientists the fundamental observation geometry needed to perform photogrammetry, map making and other kinds of planetary science data analysis. SPICE provides position and orientation ephemerides of both the robotic spacecraft and the target body; target body size and shape data; instrument mounting alignment and field-of-view geometry; reference frame specifications; and underlying time system conversions. SPICE comprises not only data, but also a large suite of software, known as the SPICE Toolkit, used to access those data and subsequently compute derived quantities such as instrument viewing latitude/longitude, lighting angles, altitude, etc. In existence since the days of the Magellan mission to Venus, the SPICE system has continuously grown to better meet the needs of scientists and engineers. For example, originally the SPICE Toolkit was offered only in Fortran 77, but is now available in C, IDL, MATLAB, and Java Native Interface. SPICE calculations were originally available only using APIs (subroutines), but can now be executed using a client-server interface to a geometry engine. Originally SPICE "products" were only available in numeric form, but now SPICE data visualization is also available. The SPICE components are free of cost, license and export restrictions. Substantial tutorials and programming lessons help new users learn to employ SPICE calculations in their own programs. The SPICE system is implemented and maintained by the Navigation and Ancillary Information Facility (NAIF), a component of NASA's Planetary Data System (PDS). (* Spacecraft, Planet, Instrument, Camera-matrix, Events)
Styles APA, Harvard, Vancouver, ISO, etc.
43

Malhis, Nawar, Matthew Jacobson, Steven J. M. Jones et Jörg Gsponer. « LIST-S2 : taxonomy based sorting of deleterious missense mutations across species ». Nucleic Acids Research 48, W1 (30 avril 2020) : W154—W161. http://dx.doi.org/10.1093/nar/gkaa288.

Texte intégral
Résumé :
Abstract The separation of deleterious from benign mutations remains a key challenge in the interpretation of genomic data. Computational methods used to sort mutations based on their potential deleteriousness rely largely on conservation measures derived from sequence alignments. Here, we introduce LIST-S2, a successor to our previously developed approach LIST, which exploits local sequence identity and taxonomy distances to quantify the conservation of human protein sequences. Unlike its predecessor, LIST-S2 is not limited to human sequences but can assess conservation and make predictions for sequences from any organism. Moreover, we provide a web tool and downloadable software to compute and visualize the deleteriousness of mutations in user-provided sequences. This web tool offers an HTML interface and a RESTful API to submit and manage sequences, as well as a browsable set of precomputed predictions for a large number of UniProtKB protein sequences of common taxa. LIST-S2 is available at: https://list-s2.msl.ubc.ca/
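Since the abstract only states that a RESTful API exists, the following is a hypothetical sketch of how such a submission endpoint might be called from Python. The route /api/submit and the JSON field name are invented placeholders; the actual routes and payloads must be taken from the service documentation.

    import requests

    BASE = "https://list-s2.msl.ubc.ca"  # real service URL (from the abstract)

    # Hypothetical route and payload; neither is specified in the abstract,
    # so check the service docs for the real API.
    resp = requests.post(f"{BASE}/api/submit",
                         json={"sequence": "MTEYKLVVVGAGGVGKSALT"})
    resp.raise_for_status()
    print(resp.json())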
Styles APA, Harvard, Vancouver, ISO, etc.
44

Vensko, Steven, Benjamin Vincent et Dante Bortone. « 485 RAFT : A framework to support rapid and reproducible immuno-oncology analyses ». Journal for ImmunoTherapy of Cancer 8, Suppl 3 (novembre 2020) : A521. http://dx.doi.org/10.1136/jitc-2020-sitc2020.0485.

Texte intégral
Résumé :
Background: Analysis reproducibility and transparency are pillars of robust and trustworthy scientific results. The dependability of these results is crucial in clinical settings where they may guide high-impact decisions affecting patient health. Independent reproduction of computational results has been problematic and can be a burden on the individuals attempting to reproduce the results. Reproduction complications may arise from: 1) insufficiently described parameters, 2) vague methods, or 3) secret scripts required to generate final outputs, among others. Here we introduce RAFT (Reproducible Analyses Framework and Tools), a framework for immuno-oncology biomarker development built with Python 3 and Nextflow DSL2, which aims to enable end-to-end reproducibility of entire computational analyses in multiple contexts (e.g. local, compute cluster, or cloud) with minimal overhead through a focus on usability (figures 1 and 2).

Methods: RAFT builds upon Nextflow's DSL2 module-based approach to workflows by providing a 'project' context upon which users can add metadata, load references, and build up their analysis step by step. RAFT also has pre-built modules with workflows commonly utilized in immuno-oncology analyses (e.g. TCR/BCR repertoire reconstruction and HLA typing) and aids users through automatic module dependency resolution. Transparency is gained by having a single end-to-end script containing all steps and parameters, as well as a single configuration file. Finally, RAFT allows users to create and share a package of project metadata files including the main script, all input and output checksums, all modules, and the RAFT steps required to create the analysis. This package, coupled with any required input files, can be used to recreate the analysis or to expand it with additional datasets or alternative parameters.

Results: RAFT has been used by our computational team to create an immuno-oncology meta-analysis submitted to SITC 2020. A simple proof-of-concept analysis has established RAFT's ability to support reproducibility by running locally on laptop computers, on multiple research compute clusters, and on the Google Cloud Platform.

Figure 1 (Example RAFT usage): Users define their required inputs, build their analysis, and run their analysis using the RAFT command-line interface. The metadata from the analysis can then be shared through a RAFT package with collaborators or interested third parties in order to reproduce or expand upon the initial results.

Figure 2 (End-to-end RAFT): RAFT supports end-to-end analysis development through a 'project' structure. Users link local required files (e.g. FASTQs, references or manifests) into the appropriate /raft subdirectory. (1) Projects are initiated using the raft init-project command, which creates and populates a project-specific directory. (2-3) Users then load required metadata (e.g. sample manifests or clinical data) and references (e.g. alignment references) into the project using the raft load-metadata and raft load-reference commands, respectively. (4) Modules consisting of tool-specific and topical workflows are cloned from a collection of remote repositories into the project using raft load-module. (5) Specific processes and workflows from previously loaded modules are added to the analysis (main.nf) through raft add-step. Users can then modify main.nf with their desired parameters and execute the workflow using raft run-workflow. (6) Additionally, RAFT allows an iterative approach in which results from RAFT can be analyzed and modified through RStudio and re-run through Nextflow.

Conclusions: The RAFT platform shows promising capabilities to support rapid and reproducible research within the field of immuno-oncology. Several features remain in development and testing, such as the incorporation of additional immunogenomics feature modules for variant/fusion detection and HLA/peptide binding affinity estimation. Other functionality in development will enable collaborators to use remote Git repository hosting (e.g. GitHub or GitLab) to jointly and iteratively modify an analysis.
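The numbered commands in the Figure 2 legend can be strung together as a session; the sketch below replays them from Python purely for illustration. The command names come from the abstract itself, but the project name, file names, module name, and argument order are placeholder assumptions that should be checked against RAFT's own help output.

    import subprocess

    # Replay the Figure 2 flow; all arguments are placeholder assumptions.
    steps = [
        ["raft", "init-project", "my_project"],      # (1) create the project
        ["raft", "load-metadata", "manifest.csv"],   # (2) sample manifest
        ["raft", "load-reference", "genome.fa"],     # (3) alignment reference
        ["raft", "load-module", "hla_typing"],       # (4) clone a module
        ["raft", "add-step", "hla_typing"],          # (5) add its workflow to main.nf
        ["raft", "run-workflow"],                    # execute the analysis
    ]
    for cmd in steps:
        subprocess.run(cmd, check=True)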
Styles APA, Harvard, Vancouver, ISO, etc.
45

Xu, Qianwen, Jeffery C. C. Lo et Shiwei Ricky Lee. « Characterization and Evaluation of 3D-Printed Connectors for Microfluidics ». Micromachines 12, no 8 (26 juillet 2021) : 874. http://dx.doi.org/10.3390/mi12080874.

Texte intégral
Résumé :
3D printing is regarded as a useful tool for fabricating microfluidic connectors, overcoming the time consumption, clogging, poor alignment and bulky fixtures that afflict current interconnections. 3D-printed connectors without any additional components can be printed directly onto a substrate with an orifice by UV-assisted coaxial printing. This paper further characterizes and evaluates 3D-printed connectors fabricated by the proposed method. A process window with an operable combination of flow rates was identified. The outer flow rate could control the inner channel dimensions of the 3D-printed connectors, which is expected to reduce the geometric mismatch of flow paths at microfluidic interfaces. The smallest inner channel diameter achieved was around 120 µm. Furthermore, the 3D-printed connectors were shown to withstand pressures exceeding 450 kPa, enabling microfluidic chips to work at normal pressure.
Styles APA, Harvard, Vancouver, ISO, etc.
46

Ozeel, Valentin, Aurélie Perrier, Anne Vanet et Michel Petitjean. « The Symmetric Difference Distance : A New Way to Evaluate the Evolution of Interfaces along Molecular Dynamics Trajectories ; Application to Influenza Hemagglutinin ». Symmetry 11, no 5 (12 mai 2019) : 662. http://dx.doi.org/10.3390/sym11050662.

Texte intégral
Résumé :
We propose a new and easy approach to evaluate structural dissimilarities between frames produced by molecular dynamics, and we test this methodology on human hemagglutinin. This protein is responsible for the entry of the influenza virus into the host cell by endocytosis, and this virus causes seasonal epidemics of infectious disease estimated to result in hundreds of thousands of deaths each year around the world. We computed the three interfaces between the three protomers of the hemagglutinin H1 homotrimer (PDB code: 1RU7) for each of its conformations generated from molecular dynamics simulation. For each conformation, we considered the set of residues involved in the union of these three interfaces. The dissimilarity between each pair of conformations was measured with our new methodology, the symmetric difference distance between the associated sets of residues. The main advantages of the full procedure are: (i) it is parameter free; (ii) no spatial alignment is needed; and (iii) it is simple enough to be implemented by a beginner in programming. It is shown to be a relevant tool for following the evolution of the conformation along molecular dynamics trajectories.
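The distance itself is just the size of the set symmetric difference, d(A, B) = |A Δ B| = |A ∪ B| - |A ∩ B|, which Python's set type computes directly; the residue identifiers below are invented toy data, not values from the paper.

    # Symmetric difference distance between the interface-residue sets of
    # two MD frames: d(A, B) = |A ^ B| (^ is set symmetric difference).
    def sym_diff_distance(a, b):
        return len(set(a) ^ set(b))

    # Toy (chain, residue-number) sets for two conformations
    frame_1 = {("A", 57), ("A", 98), ("B", 12)}
    frame_2 = {("A", 57), ("B", 12), ("C", 204)}

    print(sym_diff_distance(frame_1, frame_2))  # -> 2: A98 and C204 differ

Because the measure compares sets of residue labels rather than coordinates, no spatial superposition of the frames is needed, which is exactly advantage (ii) claimed above.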
Styles APA, Harvard, Vancouver, ISO, etc.
47

Batailler, Cécile, John Swan, Elliot Sappey Marinier, Elvire Servien et Sébastien Lustig. « New Technologies in Knee Arthroplasty : Current Concepts ». Journal of Clinical Medicine 10, no 1 (25 décembre 2020) : 47. http://dx.doi.org/10.3390/jcm10010047.

Texte intégral
Résumé :
Total knee arthroplasty (TKA) is an effective treatment for severe osteoarthritis. Despite good survival rates, up to 20% of TKA patients remain dissatisfied. Recently, promising new technologies have been developed in knee arthroplasty that could improve functional outcomes. The aim of this paper was to present some new technologies in TKA, their current concepts, their advantages, and their limitations. Patient-specific instrumentation can improve implant positioning and limb alignment, but no difference has been found in functional outcomes. Customized implants are conceived to reproduce the native knee anatomy and biomechanics. Sensors aim to give objective data on ligament balancing during TKA. Few studies have yet been published on the mid-term results of these two devices. Accelerometers are smart tools developed to improve TKA alignment; their benefits remain controversial. Robotic-assisted systems allow accurate and reproducible bone preparation through a robotic interface, with 3D surgical planning based on preoperative 3D imaging or not. This promising system nevertheless has some limits. The new technologies in TKA are very attractive and are constantly evolving. Nevertheless, some limitations persist and could be addressed by artificial intelligence and predictive modeling.
Styles APA, Harvard, Vancouver, ISO, etc.
48

Jarlier, Frédéric, Nicolas Joly, Nicolas Fedy, Thomas Magalhaes, Leonor Sirotti, Paul Paganiban, Firmin Martin, Michael McManus et Philippe Hupé. « QUARTIC : QUick pArallel algoRithms for high-Throughput sequencIng data proCessing ». F1000Research 9 (23 juin 2020) : 240. http://dx.doi.org/10.12688/f1000research.22954.2.

Texte intégral
Résumé :
Life science has entered the so-called 'big data era', in which biologists, clinicians and bioinformaticians are overwhelmed with high-throughput sequencing data. While these data offer new insights for deciphering genome structure, their ever-growing size raises major challenges for daily clinical practice, care and diagnosis. We therefore implemented software to reduce the time to delivery for the alignment and sorting of high-throughput sequencing data. Our solution is implemented using the Message Passing Interface and is intended for high-performance computing architectures. The software scales linearly with respect to the size of the data and ensures total reproducibility with respect to the traditional tools. For example, a 300X whole genome can be aligned and sorted in less than 9 hours with 128 cores. The software offers significant speed-up using multi-core and multi-node parallelization.
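To convey the flavor of MPI-based sorting, here is a minimal mpi4py sketch of a scatter-sort-merge pattern. It is a generic illustration of the approach, not QUARTIC's code, and the (position, read-id) records are toy data.

    import heapq
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    if rank == 0:
        # Toy (genomic position, read id) records to be sorted globally
        records = [(7, "r1"), (3, "r2"), (9, "r3"), (1, "r4")]
        chunks = [records[i::size] for i in range(size)]
    else:
        chunks = None

    local = comm.scatter(chunks, root=0)  # distribute unsorted chunks
    local.sort()                          # each rank sorts its chunk locally
    parts = comm.gather(local, root=0)    # collect the sorted runs
    if rank == 0:
        print(list(heapq.merge(*parts)))  # k-way merge into global order

Run under an MPI launcher (e.g. mpiexec -n 4 python sort_sketch.py); the k-way merge on the root stands in for the coordinate-sorted output that alignment sorting produces.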
Styles APA, Harvard, Vancouver, ISO, etc.
49

Jarlier, Frédéric, Nicolas Joly, Nicolas Fedy, Thomas Magalhaes, Leonor Sirotti, Paul Paganiban, Firmin Martin, Michael McManus et Philippe Hupé. « QUARTIC : QUick pArallel algoRithms for high-Throughput sequencIng data proCessing ». F1000Research 9 (8 octobre 2020) : 240. http://dx.doi.org/10.12688/f1000research.22954.3.

Texte intégral
Résumé :
Life science has entered the so-called 'big data era', in which biologists, clinicians and bioinformaticians are overwhelmed with high-throughput sequencing data. While these data offer new insights for deciphering genome structure, their ever-growing size raises major challenges for daily clinical practice, care and diagnosis. We therefore implemented software to reduce the time to delivery for the alignment and sorting of high-throughput sequencing data. Our solution is implemented using the Message Passing Interface and is intended for high-performance computing architectures. The software scales linearly with respect to the size of the data and ensures total reproducibility with respect to the traditional tools. For example, a 300X whole genome can be aligned and sorted in less than 9 hours with 128 cores. The software offers significant speed-up using multi-core and multi-node parallelization.
Styles APA, Harvard, Vancouver, ISO, etc.
50

Li, Rui, Kai Hu, Haibo Liu, Michael R. Green et Lihua Julie Zhu. « OneStopRNAseq : A Web Application for Comprehensive and Efficient Analyses of RNA-Seq Data ». Genes 11, no 10 (2 octobre 2020) : 1165. http://dx.doi.org/10.3390/genes11101165.

Texte intégral
Résumé :
Over the past decade, a large amount of RNA sequencing (RNA-seq) data has been deposited in public repositories, and more is being produced at an unprecedented rate. However, there are few open-source tools with point-and-click interfaces that are versatile and offer streamlined, comprehensive analysis of RNA-seq datasets. To maximize the use of these vast public resources and facilitate the analysis of RNA-seq data by biologists, we developed a web application called OneStopRNAseq for the one-stop analysis of RNA-seq data. OneStopRNAseq has user-friendly interfaces and offers workflows for common types of RNA-seq data analysis, such as comprehensive data-quality control, differential analysis of gene expression, exon usage, alternative splicing, transposable element expression, allele-specific gene expression quantification, and gene set enrichment analysis. Users only need to select the desired analyses and genome build, and provide a Gene Expression Omnibus (GEO) accession number or Dropbox links to sequence files, alignment files, gene-expression-count tables, or rank files with the corresponding metadata. Our pipeline facilitates the comprehensive and efficient analysis of private and public RNA-seq data.
Styles APA, Harvard, Vancouver, ISO, etc.