To see the other types of publications on this topic, follow the link: Python ECG Analysis.

Journal articles on the topic 'Python ECG Analysis'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Python ECG Analysis.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Fedjajevs, Andrejs, Willemijn Groenendaal, Carlos Agell, and Evelien Hermeling. "Platform for Analysis and Labeling of Medical Time Series." Sensors 20, no. 24 (December 19, 2020): 7302. http://dx.doi.org/10.3390/s20247302.

Abstract:
Reliable and diverse labeled reference data are essential for the development of high-quality processing algorithms for medical signals, such as electrocardiogram (ECG) and photoplethysmogram (PPG). Here, we present the Platform for Analysis and Labeling of Medical time Series (PALMS) designed in Python. Its graphical user interface (GUI) facilitates three main types of manual annotations—(1) fiducials, e.g., R-peaks of ECG; (2) events with an adjustable duration, e.g., arrhythmic episodes; and (3) signal quality, e.g., data parts corrupted by motion artifacts. All annotations can be attributed to the same signal simultaneously in an ergonomic and user-friendly manner. Configuration for different data and annotation types is straightforward and flexible in order to use a wide range of data sources and to address many different use cases. Above all, configuration of PALMS allows plugging-in existing algorithms to display outcomes of automated processing, such as automatic R-peak detection, and to manually correct them where needed. This enables fast annotation and can be used to further improve algorithms. The GUI is currently complemented by ECG and PPG algorithms that detect characteristic points with high accuracy. The ECG algorithm reached 99% on the MIT/BIH arrhythmia database. The PPG algorithm was validated on two public databases with an F1-score above 98%. The GUI and optional algorithms result in an advanced software tool that allows the creation of diverse reference sets for existing datasets.
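As a quick illustration of the kind of fiducial detection such a platform can plug in (not the authors' validated algorithm), here is a minimal Python sketch that finds R-peaks in a synthetic ECG-like trace with SciPy; the sampling rate, peak height and refractory period are assumptions.

```python
# Minimal R-peak detection sketch (not the PALMS algorithm): find prominent
# peaks in a synthetic ECG-like trace with scipy.signal.find_peaks.
import numpy as np
from scipy.signal import find_peaks

fs = 250                                   # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)               # 10 s of signal
heart_rate = 1.2                           # beats per second (~72 bpm)
# Crude ECG surrogate: narrow Gaussian "R waves" on top of noise.
beat_times = np.arange(0.5, 10, 1 / heart_rate)
ecg = sum(np.exp(-((t - bt) ** 2) / (2 * 0.01 ** 2)) for bt in beat_times)
ecg += 0.05 * np.random.randn(t.size)

# Enforce a refractory period (~0.4 s) and a minimum height to avoid noise peaks.
peaks, _ = find_peaks(ecg, height=0.5, distance=int(0.4 * fs))
rr_intervals = np.diff(t[peaks])           # R-R intervals in seconds
print(f"Detected {peaks.size} R-peaks, mean HR ~ {60 / rr_intervals.mean():.1f} bpm")
```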
2

Durán-Acevedo, Cristhian Manuel, Jeniffer Katerine Carrillo-Gómez, and Camilo Andrés Albarracín-Rojas. "Electronic Devices for Stress Detection in Academic Contexts during Confinement Because of the COVID-19 Pandemic." Electronics 10, no. 3 (January 27, 2021): 301. http://dx.doi.org/10.3390/electronics10030301.

Abstract:
This article studies the development and implementation of different electronic devices for measuring signals during stress situations, specifically in academic contexts in a student group of the Engineering Department at the University of Pamplona (Colombia). For the research’s development, devices for measuring physiological signals were used through a Galvanic Skin Response (GSR), the electrical response of the heart by using an electrocardiogram (ECG), the electrical activity produced by the upper trapezius muscle (EMG), and the development of an electronic nose system (E-nose) as a pilot study for the detection and identification of the Volatile Organic Compounds profiles emitted by the skin. The data gathering was taken during an online test (during the COVID-19 Pandemic), in which the aim was to measure the student’s stress state and then during the relaxation state after the exam period. Two algorithms were used for the data process, such as Linear Discriminant Analysis and Support Vector Machine through the Python software for the classification and differentiation of the assessment, achieving 100% of classification through GSR, 90% with the E-nose system proposed, 90% with the EMG system, and 88% success by using ECG, respectively.
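The two classifiers named in this abstract are standard scikit-learn estimators; the sketch below shows the general fit/predict pattern on a stand-in feature matrix (the feature values, split and accuracies here are purely illustrative, not the study's data).

```python
# Illustrative sketch of the two classifiers named in the abstract (LDA and SVM)
# applied to a stand-in feature matrix; the real study used GSR/ECG/EMG/E-nose features.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Hypothetical features: 80 recordings x 6 physiological features,
# labelled 1 = stress (during the exam), 0 = relaxed (after the exam period).
X = np.vstack([rng.normal(0.0, 1.0, (40, 6)), rng.normal(1.0, 1.0, (40, 6))])
y = np.array([0] * 40 + [1] * 40)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0, stratify=y)

for name, clf in [("LDA", LinearDiscriminantAnalysis()), ("SVM", SVC(kernel="rbf"))]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```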
3

Rodriguez-Torres, Erika, Alejandra Rosales-Lagarde, Carlos Fernando Chávez Vega, José Luis Ocaña Garrido, Yair Alejandro Pardo Rosales, and Rodrigo Silva Mota. "Análisis Fractal del Electroencefalograma Durante la Vigilia en Reposo de Adultos Mayores Hidalguenses y Deterioro Cognitivo." Pädi Boletín Científico de Ciencias Básicas e Ingenierías del ICBI 7, no. 14 (January 5, 2020): 10–13. http://dx.doi.org/10.29057/icbi.v7i14.4334.

Abstract:
Detrended Fluctuation Analysis (DFA) of biological time series such as the electrocardiogram (ECG), the electroencephalogram (EEG) and others is known to be a useful tool for discriminating between health and disease. To test whether differences exist between the DFA of older adults with cognitive impairment (CI) and without cognitive impairment, DFA was performed on two subjects, one diagnosed with CI and one without. EEG and electromyography (EMG) recordings were obtained during wakefulness with eyes closed. A user-friendly Python interface was developed for selecting the time series on which the DFA is computed. At rest with eyes closed, the subject with CI showed higher values in frontal regions than the subject without CI. It is concluded that DFA provides quantifiable information on the localization and mechanisms underlying cognitive impairment that may help monitor its course in older adults.
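For readers unfamiliar with DFA, the following minimal NumPy sketch (not the authors' interface) shows how the scaling exponent is typically computed: integrate the mean-subtracted signal, detrend it in windows of increasing size, and fit the log-log slope of the fluctuation function. The window sizes and linear detrending order are assumptions.

```python
# Minimal detrended fluctuation analysis (DFA) sketch with NumPy.
import numpy as np

def dfa_alpha(x, scales=(4, 8, 16, 32, 64, 128)):
    """Return the DFA scaling exponent alpha of a 1-D signal."""
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())              # integrated (profile) signal
    fluct = []
    for n in scales:
        n_windows = y.size // n
        f2 = []
        for i in range(n_windows):
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # linear detrending
            f2.append(np.mean((seg - trend) ** 2))
        fluct.append(np.sqrt(np.mean(f2)))
    # alpha is the slope of log F(n) vs log n
    alpha, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
    return alpha

# White noise should give alpha ~ 0.5; strongly correlated signals give larger values.
print(dfa_alpha(np.random.randn(4096)))
```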
4

Kitzes, Justin, and Mark Wilber. "macroeco: reproducible ecological pattern analysis in Python." Ecography 39, no. 4 (January 14, 2016): 361–67. http://dx.doi.org/10.1111/ecog.01905.

5

Bao, Forrest Sheng, Xin Liu, and Christina Zhang. "PyEEG: An Open Source Python Module for EEG/MEG Feature Extraction." Computational Intelligence and Neuroscience 2011 (2011): 1–7. http://dx.doi.org/10.1155/2011/406391.

Abstract:
Computer-aided diagnosis of neural diseases from EEG signals (or other physiological signals that can be treated as time series, e.g., MEG) is an emerging field that has gained much attention in past years. Extracting features is a key component in the analysis of EEG signals. In our previous works, we have implemented many EEG feature extraction functions in the Python programming language. As Python is gaining more ground in scientific computing, an open source Python module for extracting EEG features has the potential to save much time for computational neuroscientists. In this paper, we introduce PyEEG, an open source Python module for EEG feature extraction.
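PyEEG itself exposes ready-made feature functions; the hedged sketch below instead computes one representative EEG feature (relative spectral band power) directly with SciPy, to show what such a feature looks like. The sampling rate and band edges are assumptions.

```python
# Relative band power of a (synthetic) EEG channel via Welch's PSD estimate.
import numpy as np
from scipy.signal import welch

fs = 256                                    # sampling rate in Hz (assumed)
eeg = np.random.randn(30 * fs)              # stand-in for a 30 s EEG channel

freqs, psd = welch(eeg, fs=fs, nperseg=4 * fs)
bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
total = psd[(freqs >= 1) & (freqs < 30)].sum()
for name, (lo, hi) in bands.items():
    mask = (freqs >= lo) & (freqs < hi)
    # With a uniform frequency grid the bin width cancels in the ratio.
    print(name, "relative power:", psd[mask].sum() / total)
```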
6

Badenhorst, Melinda, Christopher J. Barry, Christiaan J. Swanepoel, Charles Theo van Staden, Julian Wissing, and Johann M. Rohwer. "Workflow for Data Analysis in Experimental and Computational Systems Biology: Using Python as ‘Glue’." Processes 7, no. 7 (July 18, 2019): 460. http://dx.doi.org/10.3390/pr7070460.

Abstract:
Bottom-up systems biology entails the construction of kinetic models of cellular pathways by collecting kinetic information on the pathway components (e.g., enzymes) and collating this into a kinetic model, based for example on ordinary differential equations. This requires integration and data transfer between a variety of tools, ranging from data acquisition in kinetics experiments, to fitting and parameter estimation, to model construction, evaluation and validation. Here, we present a workflow that uses the Python programming language, specifically the modules from the SciPy stack, to facilitate this task. Starting from raw kinetics data, acquired either from spectrophotometric assays with microtitre plates or from Nuclear Magnetic Resonance (NMR) spectroscopy time-courses, we demonstrate the fitting and construction of a kinetic model using scientific Python tools. The analysis takes place in a Jupyter notebook, which keeps all information related to a particular experiment together in one place and thus serves as an e-labbook, enhancing reproducibility and traceability. The Python programming language serves as an ideal foundation for this framework because it is powerful yet relatively easy to learn for the non-programmer, has a large library of scientific routines and active user community, is open-source and extensible, and many computational systems biology software tools are written in Python or have a Python Application Programming Interface (API). Our workflow thus enables investigators to focus on the scientific problem at hand rather than worrying about data integration between disparate platforms.
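A minimal example of the fitting step described, assuming Michaelis-Menten kinetics and synthetic initial-rate data (units and noise level are illustrative):

```python
# Estimating Michaelis-Menten parameters from synthetic initial-rate data with SciPy.
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    return vmax * s / (km + s)

substrate = np.array([0.1, 0.25, 0.5, 1.0, 2.0, 5.0, 10.0])      # mM (assumed units)
rate = michaelis_menten(substrate, vmax=1.8, km=0.7)
rate += np.random.normal(0, 0.03, rate.size)                      # measurement noise

popt, pcov = curve_fit(michaelis_menten, substrate, rate, p0=[1.0, 1.0])
perr = np.sqrt(np.diag(pcov))
print(f"Vmax = {popt[0]:.2f} +/- {perr[0]:.2f}, Km = {popt[1]:.2f} +/- {perr[1]:.2f}")
```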
7

Trainor-Guitton, Whitney, Leo Turon, and Dominique Dubucq. "Python Earth Engine API as a new open-source ecosphere for characterizing offshore hydrocarbon seeps and spills." Leading Edge 40, no. 1 (January 2021): 35–44. http://dx.doi.org/10.1190/tle40010035.1.

Abstract:
The Python Earth Engine application programming interface (API) provides a new open-source ecosphere for testing hydrocarbon detection algorithms on large volumes of images curated with the Google Earth Engine. We specifically demonstrate the Python Earth Engine API by calculating three hydrocarbon indices: fluorescence, rotation absorption, and normalized fluorescence. The Python Earth Engine API provides an ideal environment for testing these indices with varied oil seeps and spills by (1) removing barriers of proprietary software formats and (2) providing an extensive library of data analysis tools (e.g., Pandas and Seaborn) and classification algorithms (e.g., Scikit-learn and TensorFlow). Our results demonstrate end-member cases in which fluorescence and normalized fluorescence indices of seawater and oil are statistically similar and different. As expected, predictive classification is more effective and the calculated probability of oil is more accurate for scenarios in which seawater and oil are well separated in the fluorescence space.
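A hedged sketch of the Earth Engine Python API workflow is shown below; the collection ID, region and band pair are placeholders and do not reproduce the paper's fluorescence or rotation-absorption indices. Running it requires prior Earth Engine authentication.

```python
# Sketch of an Earth Engine Python API band-index workflow (placeholder index only).
import ee

ee.Initialize()                                     # requires prior ee.Authenticate()

region = ee.Geometry.Point([5.95, 60.08]).buffer(20000)   # hypothetical offshore area
collection = (ee.ImageCollection("COPERNICUS/S2_SR")
              .filterBounds(region)
              .filterDate("2020-06-01", "2020-07-01"))

image = collection.median()
# Placeholder index: a normalized difference of two visible bands; the paper's
# hydrocarbon indices are defined differently.
index = image.normalizedDifference(["B3", "B4"]).rename("placeholder_index")
stats = index.reduceRegion(reducer=ee.Reducer.mean(), geometry=region, scale=60)
print(stats.getInfo())
```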
8

Cadieux, Nicolas, Margaret Kalacska, Oliver T. Coomes, Mari Tanaka, and Yoshito Takasaki. "A Python Algorithm for Shortest-Path River Network Distance Calculations Considering River Flow Direction." Data 5, no. 1 (January 16, 2020): 8. http://dx.doi.org/10.3390/data5010008.

Abstract:
Vector based shortest path analysis in geographic information system (GIS) is well established for road networks. Even though these network algorithms can be applied to river layers, they do not generally consider the direction of flow. This paper presents a Python 3.7 program (upstream_downstream_shortests_path_dijkstra.py) that was specifically developed for river networks. It implements multiple single-source (one to one) weighted Dijkstra shortest path calculations, on a list of provided source and target nodes, and returns the route geometry, the total distance between each source and target node, and the total upstream and downstream distances for each shortest path. The end result is similar to what would be obtained by an “all-pairs” weighted Dijkstra shortest path algorithm. Contrary to an “all-pairs” Dijkstra, the algorithm only operates on the source and target nodes that were specified by the user and not on all of the nodes contained within the graph. For efficiency, only the upper distance matrix is returned (e.g., distance from node A to node B), while the lower distance matrix (e.g., distance from nodes B to A) is not. The program is intended to be used in a multiprocessor environment and relies on Python’s multiprocessing package.
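The same flow-aware idea can be sketched with NetworkX (this is not the authors' script): each reach is inserted as two directed edges, with an assumed extra cost for travelling against the flow, and a one-to-one weighted Dijkstra query is run.

```python
# Flow-aware shortest path on a toy river network with NetworkX.
import networkx as nx

G = nx.DiGraph()
reaches = [("A", "B", 3.0), ("B", "C", 2.0), ("B", "D", 4.0)]   # hypothetical reaches, km
PENALTY = 1.0                                                   # extra cost upstream (assumed)
for upstream, downstream, length in reaches:
    G.add_edge(upstream, downstream, weight=length)             # with the flow
    G.add_edge(downstream, upstream, weight=length + PENALTY)   # against the flow

# Single-source, single-target weighted Dijkstra, as in the paper's one-to-one runs.
distance, path = nx.single_source_dijkstra(G, source="C", target="D", weight="weight")
print(path, distance)
```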
9

Lopez, F., G. Charbonnier, Y. Kermezli, M. Belhocine, Q. Ferré, N. Zweig, M. Aribi, A. Gonzalez, S. Spicuglia, and D. Puthier. "Explore, edit and leverage genomic annotations using Python GTF toolkit." Bioinformatics 35, no. 18 (February 15, 2019): 3487–88. http://dx.doi.org/10.1093/bioinformatics/btz116.

Abstract:
Motivation: While Python has become very popular in bioinformatics, a limited number of libraries exist for fast manipulation of gene coordinates in Ensembl GTF format. Results: We have developed the GTF toolkit Python package (pygtftk), which aims at providing easy and powerful manipulation of gene coordinates in GTF format. For optimal performance, the core engine of pygtftk is a C dynamic library (libgtftk) while the Python API provides usability and readability for developing scripts. Based on this Python package, we have developed the gtftk command line interface that contains 57 sub-commands (v0.9.10) to ease handling of GTF files. These commands may be used to (i) perform basic tasks (e.g. selections, insertions, updates or deletions of features/keys), (ii) select genes/transcripts based on various criteria (e.g. size, exon number, transcription start site location, intron length, GO terms) or (iii) carry out more advanced operations such as coverage analyses of genomic features using bigWig files to create faceted read-coverage diagrams. In conclusion, the pygtftk package greatly simplifies the annotation of GTF files with external information while providing advanced tools to perform gene analyses. Availability and implementation: pygtftk and gtftk have been tested on Linux and MacOSX and are available from https://github.com/dputhier/pygtftk under the MIT license. The libgtftk dynamic library written in C is available from https://github.com/dputhier/libgtftk.
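For orientation, the sketch below selects features from a GTF file using plain pandas; it only illustrates the file layout and the kind of selection involved, not the pygtftk/gtftk API (file name and thresholds are hypothetical).

```python
# Illustrative GTF handling with pandas only (pygtftk/gtftk expose richer, faster operations).
import pandas as pd

cols = ["seqname", "source", "feature", "start", "end",
        "score", "strand", "frame", "attribute"]
gtf = pd.read_csv("annotation.gtf", sep="\t", comment="#", names=cols)

# Example selection: transcripts longer than 10 kb on chromosome 1.
transcripts = gtf[(gtf.feature == "transcript") & (gtf.seqname == "chr1")]
long_tx = transcripts[(transcripts.end - transcripts.start) > 10_000]
print(len(long_tx), "long transcripts")
```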
10

Ono, Keiichiro, Tanja Muetze, Georgi Kolishovski, Paul Shannon, and Barry Demchak. "CyREST: Turbocharging Cytoscape Access for External Tools via a RESTful API." F1000Research 4 (August 5, 2015): 478. http://dx.doi.org/10.12688/f1000research.6767.1.

Abstract:
As bioinformatic workflows become increasingly complex and involve multiple specialized tools, so does the difficulty of reliably reproducing those workflows. Cytoscape is a critical workflow component for executing network visualization, analysis, and publishing tasks, but it can be operated only manually via a point-and-click user interface. Consequently, Cytoscape-oriented tasks are laborious and often error prone, especially with multistep protocols involving many networks.In this paper, we present the new cyREST Cytoscape app and accompanying harmonization libraries. Together, they improve workflow reproducibility and researcher productivity by enabling popular languages (e.g., Python and R, JavaScript, and C#) and tools (e.g., IPython/Jupyter Notebook and RStudio) to directly define and query networks, and perform network analysis, layouts and renderings. We describe cyREST’s API and overall construction, and present Python- and R-based examples that illustrate how Cytoscape can be integrated into large scale data analysis pipelines.cyREST is available in the Cytoscape app store (http://apps.cytoscape.org) where it has been downloaded over 1900 times since its release in late 2014.
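A hedged sketch of talking to cyREST from Python with the requests library, assuming the default local endpoint (port 1234, API version v1) and the cytoscape.js JSON network format; Cytoscape with cyREST must be running locally for this to work.

```python
# Driving Cytoscape through cyREST from Python (default local endpoint assumed).
import requests

BASE = "http://localhost:1234/v1"

# Check that Cytoscape and cyREST are reachable.
print(requests.get(BASE + "/version").json())

# Create a tiny network from cytoscape.js-style JSON.
network = {
    "data": {"name": "demo network"},
    "elements": {
        "nodes": [{"data": {"id": "a"}}, {"data": {"id": "b"}}],
        "edges": [{"data": {"source": "a", "target": "b"}}],
    },
}
resp = requests.post(BASE + "/networks", json=network)
print(resp.json())            # response includes the identifier of the new network
```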
11

Siebert, Julien, Janek Groß, and Christof Schroth. "A Systematic Review of Packages for Time Series Analysis." Engineering Proceedings 5, no. 1 (June 28, 2021): 22. http://dx.doi.org/10.3390/engproc2021005022.

Abstract:
This paper presents a systematic review of Python packages with a focus on time series analysis. The objective is to provide (1) an overview of the different time series analysis tasks and preprocessing methods implemented, and (2) an overview of the development characteristics of the packages (e.g., documentation, dependencies, and community size). This review is based on a search of literature databases as well as GitHub repositories. Following the filtering process, 40 packages were analyzed. We classified the packages according to the analysis tasks implemented, the methods related to data preparation, and the means for evaluating the results produced (methods and access to evaluation data). We also reviewed documentation aspects, the licenses, the size of the packages’ community, and the dependencies used. Among other things, our results show that forecasting is by far the most frequently implemented task, that half of the packages provide access to real datasets or allow generating synthetic data, and that many packages depend on a few libraries (the most used ones being numpy, scipy and pandas). We hope that this review can help practitioners and researchers navigate the space of Python packages dedicated to time series analysis. We also provide an updated list of the reviewed packages online.
12

Wolfe, Franklin D., Timothy A. Stahl, Pilar Villamor, and Biljana Lukovic. "Short communication: A semiautomated method for bulk fault slip analysis from topographic scarp profiles." Earth Surface Dynamics 8, no. 1 (March 24, 2020): 211–19. http://dx.doi.org/10.5194/esurf-8-211-2020.

Abstract:
Abstract. Manual approaches for analyzing fault scarps in the field or with existing software can be tedious and time-consuming. Here, we introduce an open-source, semiautomated, Python-based graphical user interface (GUI) called the Monte Carlo Slip Statistics Toolkit (MCSST) for estimating dip slip on individual or bulk fault datasets that (1) makes the analysis of a large number of profiles much faster, (2) allows users with little or no coding skills to implement the necessary statistical techniques, (3) and provides geologists with a platform to incorporate their observations or expertise into the process. Using this toolkit, profiles are defined across fault scarps in high-resolution digital elevation models (DEMs), and then relevant fault scarp components are interactively identified (e.g., footwall, hanging wall, and scarp). Displacement statistics are calculated automatically using Monte Carlo simulation and can be conveniently visualized in geographic information systems (GISs) for spatial analysis. Fault slip rates can also be calculated when ages of footwall and hanging wall surfaces are known, allowing for temporal analysis. This method allows for the analysis of tens to hundreds of faults in rapid succession within GIS and a Python coding environment. Application of this method may contribute to a wide range of regional and local earthquake geology studies with adequate high-resolution DEM coverage, enabling both regional fault source characterization for seismic hazard and/or estimating geologic slip and strain rates, including creating long-term deformation maps. ArcGIS versions of these functions are available, as well as ones that utilize free, open-source Quantum GIS (QGIS) and Jupyter Notebook Python software.
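The Monte Carlo core of such a toolkit can be sketched in a few lines of NumPy (the values below are hypothetical and this is not the MCSST code): sample scarp height and fault dip from assumed distributions and propagate them through the planar dip-slip relation.

```python
# Monte Carlo dip-slip estimate from uncertain scarp height and fault dip.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
height = rng.normal(2.5, 0.3, n)                  # vertical separation in m (assumed)
dip = np.radians(rng.uniform(50, 70, n))          # fault dip in degrees -> radians

dip_slip = height / np.sin(dip)                   # simple planar-fault geometry
lo, med, hi = np.percentile(dip_slip, [2.5, 50, 97.5])
print(f"dip slip = {med:.2f} m (95% range {lo:.2f}-{hi:.2f} m)")
```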
13

Cieślik, Marcin, Zygmunt S. Derewenda, and Cameron Mura. "Abstractions, algorithms and data structures for structural bioinformatics inPyCogent." Journal of Applied Crystallography 44, no. 2 (February 11, 2011): 424–28. http://dx.doi.org/10.1107/s0021889811004481.

Abstract:
To facilitate flexible and efficient structural bioinformatics analyses, new functionality for three-dimensional structure processing and analysis has been introduced into PyCogent, a popular feature-rich framework for sequence-based bioinformatics, but one which has lacked equally powerful tools for handling structural/coordinate-based data. Extensible Python modules have been developed, which provide object-oriented abstractions (based on a hierarchical representation of macromolecules), efficient data structures (e.g. kD-trees), fast implementations of common algorithms (e.g. surface-area calculations), read/write support for Protein Data Bank-related file formats and wrappers for external command-line applications (e.g. Stride). Integration of this code into PyCogent is symbiotic, allowing sequence-based work to benefit from structure-derived data and, reciprocally, enabling structural studies to leverage PyCogent's versatile tools for phylogenetic and evolutionary analyses.
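A small SciPy example of the kind of data structure mentioned (a k-d tree over atomic coordinates) is given below; it illustrates the neighbour queries such code enables rather than the PyCogent API itself.

```python
# k-d tree queries over stand-in atomic coordinates with SciPy.
import numpy as np
from scipy.spatial import cKDTree

coords = np.random.rand(500, 3) * 50.0      # stand-in for atomic coordinates in Å
tree = cKDTree(coords)

# All atoms within 5 Å of the first atom (including itself).
neighbours = tree.query_ball_point(coords[0], r=5.0)
print(len(neighbours), "atoms within 5 Å")

# Pairs of atoms closer than 1.8 Å, e.g. candidate covalent contacts.
close_pairs = tree.query_pairs(r=1.8)
print(len(close_pairs), "close pairs")
```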
14

Padulano, Vincenzo Eduardo, Javier Cervantes Villanueva, Enrico Guiraud, and Enric Tejedor Saavedra. "Distributed data analysis with ROOT RDataFrame." EPJ Web of Conferences 245 (2020): 03009. http://dx.doi.org/10.1051/epjconf/202024503009.

Abstract:
Widespread distributed processing of big datasets has been around for more than a decade now thanks to Hadoop, but only recently higher-level abstractions have been proposed for programmers to easily operate on those datasets, e.g. Spark. ROOT has joined that trend with its RDataFrame tool for declarative analysis, which currently supports local multi-threaded parallelisation. However, RDataFrame’s programming model is general enough to accommodate multiple implementations or backends: users could write their code once and execute it as-is locally or distributedly, just by selecting the corresponding backend. This abstract introduces PyRDF, a new python library developed on top of RDataFrame to seamlessly switch from local to distributed environments with no changes in the application code. In addition, PyRDF has been integrated with a service for web-based analysis, SWAN, where users can dynamically plug in new resources, as well as write, execute, monitor and debug distributed applications via an intuitive interface.
15

Sun, Weijia, and Brian L. N. Kennett. "Common-Reflection-Point-Based Prestack Depth Migration for Imaging Lithosphere in Python: Application to the Dense Warramunga Array in Northern Australia." Seismological Research Letters 91, no. 5 (July 15, 2020): 2890–99. http://dx.doi.org/10.1785/0220200078.

Abstract:
Abstract We exploit estimates of P-wave reflectivity from autocorrelation of transmitted teleseismic P arrivals and their coda in a common reflection point (CRP) migration technique. The approach employs the same portion of the vertical-component seismogram, as in standard Ps receiver function analysis. This CRP prestack depth migration approach has the potential to image lithospheric structures on scales as fine as 4 km or less. The P-wave autocorrelation process and migration are implemented in open-source software—the autocorrelogram calculation (ACC) package, which builds on the widely used the seismological Obspy toolbox. The ACC package is written in the open-source and free Python programming language (3.0 or newer) and has been extensively tested in an Anaconda Python environment. The package is simple and friendly to use and runs on all major operating systems (e.g., Windows, macOS, and Linux). We utilize Python multiprocessing parallelism to speed up the ACC on a personal computer system, or servers, with multiple cores and threads. The application of the ACC package is illustrated with application to the closely spaced Warramunga array in northern Australia. The results show how fine-scale structures in the lithospheric can be effectively imaged at relatively high frequencies. The Moho ties well with conventional H−κ receiver analysis and deeper structure inferred from stacked autocorrelograms for continuous data. CRP prestack depth migration provides an important complement to common conversion point receiver function stacks, since it is less affected by surface multiples at lithospheric depths.
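The central operation, an autocorrelation of the vertical-component record, can be sketched with NumPy alone (synthetic data and an assumed sampling rate; not the ACC package code):

```python
# Normalized autocorrelation of a synthetic vertical-component trace.
import numpy as np

fs = 25.0                                     # samples per second (assumed)
trace = np.random.randn(int(120 * fs))        # stand-in for a 2-minute P-coda window

trace = trace - trace.mean()
ac = np.correlate(trace, trace, mode="full")[trace.size - 1:]   # non-negative lags
ac /= ac[0]                                                     # normalize to 1 at lag 0

lags = np.arange(ac.size) / fs
# A strong side lobe in `ac` maps to a two-way travel time, hence a depth,
# once an average crustal velocity is assumed.
print("autocorrelation at 5 s lag:", ac[int(5 * fs)])
```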
16

Appriou, Aurélien, Léa Pillette, David Trocellier, Dan Dutartre, Andrzej Cichocki, and Fabien Lotte. "BioPyC, an Open-Source Python Toolbox for Offline Electroencephalographic and Physiological Signals Classification." Sensors 21, no. 17 (August 26, 2021): 5740. http://dx.doi.org/10.3390/s21175740.

Abstract:
Research on brain–computer interfaces (BCIs) has become more democratic in recent decades, and experiments using electroencephalography (EEG)-based BCIs has dramatically increased. The variety of protocol designs and the growing interest in physiological computing require parallel improvements in processing and classification of both EEG signals and bio signals, such as electrodermal activity (EDA), heart rate (HR) or breathing. If some EEG-based analysis tools are already available for online BCIs with a number of online BCI platforms (e.g., BCI2000 or OpenViBE), it remains crucial to perform offline analyses in order to design, select, tune, validate and test algorithms before using them online. Moreover, studying and comparing those algorithms usually requires expertise in programming, signal processing and machine learning, whereas numerous BCI researchers come from other backgrounds with limited or no training in such skills. Finally, existing BCI toolboxes are focused on EEG and other brain signals but usually do not include processing tools for other bio signals. Therefore, in this paper, we describe BioPyC, a free, open-source and easy-to-use Python platform for offline EEG and biosignal processing and classification. Based on an intuitive and well-guided graphical interface, four main modules allow the user to follow the standard steps of the BCI process without any programming skills: (1) reading different neurophysiological signal data formats, (2) filtering and representing EEG and bio signals, (3) classifying them, and (4) visualizing and performing statistical tests on the results. We illustrate BioPyC use on four studies, namely classifying mental tasks, the cognitive workload, emotions and attention states from EEG signals.
17

McKenzie, Patrick F., and Deren A. R. Eaton. "ipcoal: an interactive Python package for simulating and analyzing genealogies and sequences on a species tree or network." Bioinformatics 36, no. 14 (May 12, 2020): 4193–96. http://dx.doi.org/10.1093/bioinformatics/btaa486.

Abstract:
Summary: ipcoal is a free and open source Python package for simulating and analyzing genealogies and sequences. It automates the task of describing complex demographic models (e.g. with divergence times, effective population sizes, migration events) to the msprime coalescent simulator by parsing a user-supplied species tree or network. Genealogies, sequences and metadata are returned in tabular format allowing for easy downstream analyses. ipcoal includes phylogenetic inference tools to automate gene tree inference from simulated sequence data, and visualization tools for analyzing results and verifying model accuracy. The ipcoal package is a powerful tool for posterior predictive data analysis, for methods validation and for teaching coalescent methods in an interactive and visual environment. Availability and implementation: Source code is available from the GitHub repository (https://github.com/pmckenz1/ipcoal/) and is distributed for packaged installation with conda. Complete documentation and interactive notebooks prepared for teaching purposes, including an empirical example, are available at https://ipcoal.readthedocs.io/. Contact: p.mckenzie@columbia.edu
18

Larson, David E., Haley J. Abel, Colby Chiang, Abhijit Badve, Indraniel Das, James M. Eldred, Ryan M. Layer, and Ira M. Hall. "svtools: population-scale analysis of structural variation." Bioinformatics 35, no. 22 (June 20, 2019): 4782–87. http://dx.doi.org/10.1093/bioinformatics/btz492.

Abstract:
Summary: Large-scale human genetics studies are now employing whole genome sequencing with the goal of conducting comprehensive trait mapping analyses of all forms of genome variation. However, methods for structural variation (SV) analysis have lagged far behind those for smaller scale variants, and there is an urgent need to develop more efficient tools that scale to the size of human populations. Here, we present a fast and highly scalable software toolkit (svtools) and cloud-based pipeline for assembling high quality SV maps (including deletions, duplications, mobile element insertions, inversions and other rearrangements) in many thousands of human genomes. We show that this pipeline achieves similar variant detection performance to established per-sample methods (e.g. LUMPY), while providing fast and affordable joint analysis at the scale of ≥100 000 genomes. These tools will help enable the next generation of human genetics studies. Availability and implementation: svtools is implemented in Python and freely available (MIT) from https://github.com/hall-lab/svtools. Supplementary information: Supplementary data are available at Bioinformatics online.
19

Baumgart, M., N. Druml, and M. Consani. "PROCEDURE ENABLING SIMULATION AND IN-DEPTH ANALYSIS OF OPTICAL EFFECTS IN CAMERA-BASED TIME-OF-FLIGHT SENSORS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2 (May 30, 2018): 83–89. http://dx.doi.org/10.5194/isprs-archives-xlii-2-83-2018.

Abstract:
This paper presents a simulation approach for Time-of-Flight cameras to estimate sensor performance and accuracy, as well as to help understanding experimentally discovered effects. The main scope is the detailed simulation of the optical signals. We use a raytracing-based approach and use the optical path length as the master parameter for depth calculations. The procedure is described in detail with references to our implementation in Zemax OpticStudio and Python. Our simulation approach supports multiple and extended light sources and allows accounting for all effects within the geometrical optics model. Especially multi-object reflection/scattering ray-paths, translucent objects, and aberration effects (e.g. distortion caused by the ToF lens) are supported. The optical path length approach also enables the implementation of different ToF senor types and transient imaging evaluations. The main features are demonstrated on a simple 3D test scene.
20

Mullissa, Adugna, Andreas Vollrath, Christelle Odongo-Braun, Bart Slagter, Johannes Balling, Yaqing Gou, Noel Gorelick, and Johannes Reiche. "Sentinel-1 SAR Backscatter Analysis Ready Data Preparation in Google Earth Engine." Remote Sensing 13, no. 10 (May 17, 2021): 1954. http://dx.doi.org/10.3390/rs13101954.

Abstract:
Sentinel-1 satellites provide temporally dense and high spatial resolution synthetic aperture radar (SAR) imagery. The open data policy and global coverage of Sentinel-1 make it a valuable data source for a wide range of SAR-based applications. In this regard, the Google Earth Engine is a key platform for large area analysis with preprocessed Sentinel-1 backscatter images available within a few days after acquisition. To preserve the information content and user freedom, some preprocessing steps (e.g., speckle filtering) are not applied on the ingested Sentinel-1 imagery as they can vary by application. In this technical note, we present a framework for preparing Sentinel-1 SAR backscatter Analysis-Ready-Data in the Google Earth Engine that combines existing and new Google Earth Engine implementations for additional border noise correction, speckle filtering and radiometric terrain normalization. The proposed framework can be used to generate Sentinel-1 Analysis-Ready-Data suitable for a wide range of land and inland water applications. The Analysis Ready Data preparation framework is implemented in the Google Earth Engine JavaScript and Python APIs.
21

Breddels, Maarten A., and Jovan Veljanoski. "Vaex: big data exploration in the era of Gaia." Astronomy & Astrophysics 618 (October 2018): A13. http://dx.doi.org/10.1051/0004-6361/201732493.

Abstract:
We present a new Python library, called vaex, intended to handle extremely large tabular datasets such as astronomical catalogues like the Gaia catalogue, N-body simulations, or other datasets which can be structured in rows and columns. Fast computations of statistics on regular N-dimensional grids allows analysis and visualization in the order of a billion rows per second, for a high-end desktop computer. We use streaming algorithms, memory mapped files, and a zero memory copy policy to allow exploration of datasets larger than memory, for example out-of-core algorithms. Vaex allows arbitrary (mathematical) transformations using normal Python expressions and (a subset of) numpy functions which are “lazily” evaluated and computed when needed in small chunks, which avoids wasting of memory. Boolean expressions (which are also lazily evaluated) can be used to explore subsets of the data, which we call selections. Vaex uses a similar DataFrame API as Pandas, a very popular library, which helps migration from Pandas. Visualization is one of the key points of vaex, and is done using binned statistics in 1d (e.g. histogram), in 2d (e.g. 2d histograms with colourmapping) and 3d (using volume rendering). Vaex is split in in several packages: vaex-core for the computational part, vaex-viz for visualization mostly based on matplotlib, vaex-jupyter for visualization in the Jupyter notebook/lab based in IPyWidgets, vaex-server for the (optional) client-server communication, vaex-ui for the Qt based interface, vaex-hdf5 for HDF5 based memory mapped storage, vaex-astro for astronomy related selections, transformations, and memory mapped (column based) FITS storage.
22

Favre-Nicolin, Vincent, Gaétan Girard, Steven Leake, Jerome Carnis, Yuriy Chushkin, Jerome Kieffer, Pierre Paleo, and Marie-Ingrid Richard. "PyNX: high-performance computing toolkit for coherent X-ray imaging based on operators." Journal of Applied Crystallography 53, no. 5 (September 29, 2020): 1404–13. http://dx.doi.org/10.1107/s1600576720010985.

Abstract:
The open-source PyNX toolkit has been extended to provide tools for coherent X-ray imaging data analysis and simulation. All calculations can be executed on graphical processing units (GPUs) to achieve high-performance computing speeds. The toolkit can be used for coherent diffraction imaging (CDI), ptychography and wavefront propagation, in the far- or near-field regime. Moreover, all imaging operations (propagation, projections, algorithm cycles…) can be implemented in Python as simple mathematical operators, an approach which can be used to easily combine basic algorithms in a tailored chain. Calculations can also be distributed to multiple GPUs, e.g. for large ptychography data sets. Command-line scripts are available for on-line CDI and ptychography analysis, either from raw beamline data sets or using the coherent X-ray imaging data format.
23

Abdullah, Saifuddin, and Dr Fuad Al-Najjar. "A Collective Statistical Analysis of Outdoor Path Loss Models." INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY 3, no. 1 (August 1, 2012): 6–10. http://dx.doi.org/10.24297/ijct.v3i1a.2720.

Abstract:
This study encompasses nine path loss models (Erceg-Greenstein, Green-Obaidat, COST Hata, Hata Urban, Hata Rural, Hata Suburban, SUI, Egli and ECC-33) which were programmed on Python and studied for their results in an urban architecture (translated by higher attenuation variables) at 950 MHz and 1800 MHz. The results obtained showed that increasing the transmission antenna height with the increasing distance not only lowers down the path loss readings, but also shows that the standard deviation between the results of studied path loss models increases with the increasing transmission antenna height and increasing distance at both 950 MHz and 1800 MHz systems, especially when transmission antenna height crosses the GSM standard of 40 meters and cell-radius exceeds the limit of 20 kilometers. Moreover, it is also observed that at both 950 MHz and 1800 MHz, the path loss readings of all the models disperse from their collective mean between 1 and 10 Km, but tend converge afterwards (i.e. from 10 to 40 Km and onwards) towards their mean, which indicates that path loss readings of the urban models tend to follow either a single convergence point on large distances or reach their maximum threshold level (a level from which their readings cannot exceed or differ from each other significantly).
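As a worked example of one of the nine models, the Okumura-Hata urban formulation for a small/medium city can be evaluated directly; antenna heights and distances below are illustrative, and since the classical Hata model is specified for roughly 150-1500 MHz, the 1800 MHz cases in the study also rely on variants such as the COST Hata model.

```python
# Okumura-Hata median path loss, urban small/medium-city correction factor.
import math

def hata_urban(f_mhz, h_base_m, h_mobile_m, d_km):
    """Return the Hata urban median path loss in dB."""
    a_hm = (1.1 * math.log10(f_mhz) - 0.7) * h_mobile_m - (1.56 * math.log10(f_mhz) - 0.8)
    return (69.55 + 26.16 * math.log10(f_mhz) - 13.82 * math.log10(h_base_m) - a_hm
            + (44.9 - 6.55 * math.log10(h_base_m)) * math.log10(d_km))

for d in (1, 5, 10, 20):
    print(f"950 MHz, 40 m mast, {d:2d} km: {hata_urban(950, 40, 1.5, d):6.1f} dB")
```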
24

Garcia-Milian, Rolando, Denise Hersey, Milica Vukmirovic, and Fanny Duprilot. "Data challenges of biomedical researchers in the age of omics." PeerJ 6 (September 11, 2018): e5553. http://dx.doi.org/10.7717/peerj.5553.

Abstract:
Background: High-throughput technologies are rapidly generating large amounts of diverse omics data. Although this offers a great opportunity, it also poses great challenges as data analysis becomes more complex. The purpose of this study was to identify the main challenges researchers face in analyzing data, and how academic libraries can support them in this endeavor. Methods: A multimodal needs assessment analysis combined an online survey sent to 860 Yale-affiliated researchers (176 responded) and 15 in-depth one-on-one semi-structured interviews. Interviews were recorded, transcribed, and analyzed using NVivo 10 software according to the thematic analysis approach. Results: The survey response rate was 20%. Most respondents (78%) identified lack of adequate data analysis training (e.g., R, Python) as a main challenge, in addition to not having the proper database or software (54%) to expedite analysis. Two main themes emerged from the interviews: personnel and training needs. Researchers feel they could improve data analyses practices by having better access to the appropriate bioinformatics expertise, and/or training in data analyses tools. They also reported lack of time to acquire expertise in using bioinformatics tools and poor understanding of the resources available to facilitate analysis. Conclusions: The main challenges identified by our study are: lack of adequate training for data analysis (including need to learn scripting language), need for more personnel at the University to provide data analysis and training, and inadequate communication between bioinformaticians and researchers. The authors identified the positive impact of medical and/or science libraries by establishing bioinformatics support to researchers.
25

Arcila-Agudelo, Ana María, Juan Carlos Muñoz-Mora, and Andreu Farran-Codina. "Introducing the Facility List Coder: A New Dataset/Method to Evaluate Community Food Environments." Data 5, no. 1 (March 10, 2020): 23. http://dx.doi.org/10.3390/data5010023.

Abstract:
Community food environments have been shown to be important determinants to explain dietary patterns. This data descriptor describes a typical dataset obtained after applying the Facility List Coder (FLC), a new tool to asses community food environments that was validated and presented. The FLC was developed in Python 3.7 combining GIS analysis with standard data techniques. It offers a low-cost, scalable, efficient, and user-friendly way to indirectly identify community nutritional environments in any context. The FLC uses the most open access information to identify the facilities (e.g., convenience food store, bar, bakery, etc.) present around a location of interest (e.g., school, hospital, or university). As a result, researchers will have a comprehensive list of facilities around any location of interest allowing the assessment of key research questions on the influence of the community food environment on different health outcomes (e.g., obesity, physical inactivity, or diet quality). The FLC can be used either as a main source of information or to complement traditional methods such as store census and official commercial lists, among others.
26

Mannige, Ranjan. "The BackMAP Python module: how a simpler Ramachandran number can simplify the life of a protein simulator." PeerJ 6 (October 16, 2018): e5745. http://dx.doi.org/10.7717/peerj.5745.

Abstract:
Protein backbones occupy diverse conformations, but compact metrics to describe such conformations and transitions between them have been missing. This report re-introduces the Ramachandran number (ℛ) as a residue-level structural metric that could simply the life of anyone contending with large numbers of protein backbone conformations (e.g., ensembles from NMR and trajectories from simulations). Previously, the Ramachandran number (ℛ) was introduced using a complicated closed form, which made the Ramachandran number difficult to implement. This report discusses a much simpler closed form of ℛ that makes it much easier to calculate, thereby making it easy to implement. Additionally, this report discusses how ℛ dramatically reduces the dimensionality of the protein backbone, thereby making it ideal for simultaneously interrogating large numbers of protein structures. For example, 200 distinct conformations can easily be described in one graphic using ℛ (rather than 200 distinct Ramachandran plots). Finally, a new Python-based backbone analysis tool—BackMAP—is introduced, which reiterates how ℛ can be used as a simple and succinct descriptor of protein backbones and their dynamics.
27

Mersmann, Sophia F., Léonie Strömich, Florian J. Song, Nan Wu, Francesca Vianello, Mauricio Barahona, and Sophia N. Yaliraki. "ProteinLens: a web-based application for the analysis of allosteric signalling on atomistic graphs of biomolecules." Nucleic Acids Research 49, W1 (May 12, 2021): W551—W558. http://dx.doi.org/10.1093/nar/gkab350.

Abstract:
Abstract The investigation of allosteric effects in biomolecular structures is of great current interest in diverse areas, from fundamental biological enquiry to drug discovery. Here we present ProteinLens, a user-friendly and interactive web application for the investigation of allosteric signalling based on atomistic graph-theoretical methods. Starting from the PDB file of a biomolecule (or a biomolecular complex) ProteinLens obtains an atomistic, energy-weighted graph description of the structure of the biomolecule, and subsequently provides a systematic analysis of allosteric signalling and communication across the structure using two computationally efficient methods: Markov Transients and bond-to-bond propensities. ProteinLens scores and ranks every bond and residue according to the speed and magnitude of the propagation of fluctuations emanating from any site of choice (e.g. the active site). The results are presented through statistical quantile scores visualised with interactive plots and adjustable 3D structure viewers, which can also be downloaded. ProteinLens thus allows the investigation of signalling in biomolecular structures of interest to aid the detection of allosteric sites and pathways. ProteinLens is implemented in Python/SQL and freely available to use at: www.proteinlens.io.
28

Iturbide, Maialen, José M. Gutiérrez, Lincoln M. Alves, Joaquín Bedia, Ruth Cerezo-Mota, Ezequiel Cimadevilla, Antonio S. Cofiño, et al. "An update of IPCC climate reference regions for subcontinental analysis of climate model data: definition and aggregated datasets." Earth System Science Data 12, no. 4 (November 18, 2020): 2959–70. http://dx.doi.org/10.5194/essd-12-2959-2020.

Abstract:
Abstract. Several sets of reference regions have been used in the literature for the regional synthesis of observed and modelled climate and climate change information. A popular example is the series of reference regions used in the Intergovernmental Panel on Climate Change (IPCC) Special Report on Managing the Risks of Extreme Events and Disasters to Advance Climate Adaptation (SREX). The SREX regions were slightly modified for the Fifth Assessment Report of the IPCC and used for reporting subcontinental observed and projected changes over a reduced number (33) of climatologically consistent regions encompassing a representative number of grid boxes. These regions are intended to allow analysis of atmospheric data over broad land or ocean regions and have been used as the basis for several popular spatially aggregated datasets, such as the Seasonal Mean Temperature and Precipitation in IPCC Regions for CMIP5 dataset. We present an updated version of the reference regions for the analysis of new observed and simulated datasets (including CMIP6) which offer an opportunity for refinement due to the higher atmospheric model resolution. As a result, the number of land and ocean regions is increased to 46 and 15, respectively, better representing consistent regional climate features. The paper describes the rationale for the definition of the new regions and analyses their homogeneity. The regions are defined as polygons and are provided as coordinates and a shapefile together with companion R and Python notebooks to illustrate their use in practical problems (e.g. calculating regional averages). We also describe the generation of a new dataset with monthly temperature and precipitation, spatially aggregated in the new regions, currently for CMIP5 and CMIP6, to be extended to other datasets in the future (including observations). The use of these reference regions, dataset and code is illustrated through a worked example using scatter plots to offer guidance on the likely range of future climate change at the scale of the reference regions. The regions, datasets and code (R and Python notebooks) are freely available at the ATLAS GitHub repository: https://github.com/SantanderMetGroup/ATLAS (last access: 24 August 2020), https://doi.org/10.5281/zenodo.3998463 (Iturbide et al., 2020).
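The "regional averages" use case mentioned for the companion notebooks can be sketched with xarray on synthetic data; a rectangular latitude/longitude box stands in for a reference region here (the real regions are polygons distributed as a shapefile), and latitude weighting is applied.

```python
# Area-weighted mean over a latitude/longitude box with xarray (synthetic field).
import numpy as np
import xarray as xr

lat = np.arange(-89.5, 90, 1.0)
lon = np.arange(0.5, 360, 1.0)
temp = xr.DataArray(
    15 + 10 * np.random.rand(lat.size, lon.size),
    coords={"lat": lat, "lon": lon},
    dims=("lat", "lon"),
    name="tas",
)

# Hypothetical box standing in for one reference region.
box = temp.sel(lat=slice(35, 60), lon=slice(340, 360))
# Latitude weighting so that high-latitude grid boxes do not dominate the mean.
weights = np.cos(np.deg2rad(box.lat))
print(float(box.weighted(weights).mean()))
```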
29

Carrasco Kind, Matias, Mantas Zurauskas, Aneesh Alex, Marina Marjanovic, Prabuddha Mukherjee, Minh Doan, Darold R. Spillman Jr., Steve Hood, and Stephen A. Boppart. "flimview : A software framework to handle, visualize and analyze FLIM data." F1000Research 9 (June 8, 2020): 574. http://dx.doi.org/10.12688/f1000research.24006.1.

Abstract:
flimview is a bio-imaging Python software package to read, explore, manage and visualize Fluorescence-Lifetime Imaging Microscopy (FLIM) images. It can open the standard FLIM data file conventions (e.g., sdt and ptu) and processes them from the raw format to a more readable and manageable binned and fitted format. It allows customized kernels for binning the data as well as user defined masking operations for pre-processing the images. It also allows customized fluorescence decay fitting functions and preserves all of the metadata generated for provenance and reproducibility. Outcomes from the analysis are lossless compressed and stored in an efficient way providing the necessary open-source tools to access and explore the data. flimview is open source and includes example data, example Jupyter notebooks and tutorial documentation. The package, test data and documentation are available on Github.
30

Nabaei, Sina, and Bahram Saghafian. "Cellular time series: a data structure for spatio-temporal analysis and management of geoscience information." Journal of Hydroinformatics 21, no. 6 (September 30, 2019): 999–1013. http://dx.doi.org/10.2166/hydro.2019.012.

Abstract:
Abstract Geoscientists are continuously confronted by difficulties involved in handling varieties of data formats. Configuration of data only in time or space domains leads to the use of multiple stand-alone software in the spatio-temporal analysis which is a time-consuming approach. In this paper, the concept of cellular time series (CTS) and three types of meta data are introduced to improve the handling of CTS in the spatio-temporal analysis. The data structure was designed via Python programming language; however, the structure could also be implemented by other languages (e.g., R and MATLAB). We used this concept in the hydro-meteorological discipline. In our application, CTS of monthly precipitation was generated by employing data of 102 stations across Iran. The non-parametric Mann–Kendall trend test and change point detection techniques, including Pettitt's test, standard normal homogeneity test, and the Buishand range test were applied on the generated CTS. Results revealed a negative annual trend in the eastern parts, as well as being sporadically spread over the southern and western parts of the country. Furthermore, the year 1998 was detected as a significant change year in the eastern and southern regions of Iran. The proposed structure may be used by geoscientists and data providers for straightforward simultaneous spatio-temporal analysis.
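A minimal version of the Mann-Kendall test applied to each cellular time series can be written with NumPy and SciPy (no tie correction; not the authors' implementation):

```python
# Mann-Kendall trend test for a single 1-D series (no ties assumed).
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Return the MK S statistic, Z score and two-sided p-value."""
    x = np.asarray(x, dtype=float)
    n = x.size
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    return s, z, 2 * (1 - norm.cdf(abs(z)))

rng = np.random.default_rng(1)
series = rng.normal(size=40) - 0.05 * np.arange(40)   # weak negative trend
print(mann_kendall(series))
```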
31

Kensek, Karen M. "TEACHING VISUAL SCRIPTING IN BIM: A CASE STUDY USING A PANEL CONTROLLED BY SOLAR ANGLES." Journal of Green Building 13, no. 1 (January 2018): 113–38. http://dx.doi.org/10.3992/1943-4618.13.1.113.

Abstract:
Programming and scripting can be used to activate a 3D parametric model to create a more intelligent and flexible building information model. There has been a trend in the building industry towards the use of visual scripting that allow users to create customized, flexible, and powerful programs without having to first learn how to write traditional code. Using visual scripting, users graphically interact with program elements instead of typing lines of text-based code. Nodes are created and virtually wired together; they can be numbers, sliders for adjusting values, operators and functions, list manipulation tools, graphic creators, and other types. Text based coding programs such as Python can also be used for the creation of custom nodes when greater flexibility is desired. Examples from professional firms include scripts that help automate work in the office to increase efficiency and accuracy (e.g. create escape routes, renumber rooms by levels, create documentation), assist in form generation (e.g. parametric design of metal panels, rebar generation, coordination between Revit and Rhino), analyze BIM files (e.g. terminal airflow, visual loads and capacity), and provide analysis results (e.g. daylighting, thermal comfort, window optimization). One can learn the basic steps of learning a visual programming language through the use of Dynamo within Autodesk Revit. The example used is for a façade component that changes based on the sun's altitude.
32

Pal, Soumitra, and Teresa M. Przytycka. "Bioinformatics pipeline using JUDI: Just Do It!" Bioinformatics 36, no. 8 (December 27, 2019): 2572–74. http://dx.doi.org/10.1093/bioinformatics/btz956.

Abstract:
Summary: Large-scale data analysis in bioinformatics requires pipelined execution of multiple software. Generally each stage in a pipeline takes considerable computing resources and several workflow management systems (WMS), e.g. Snakemake, Nextflow, Common Workflow Language, Galaxy, etc. have been developed to ensure optimum execution of the stages across two invocations of the pipeline. However, when the pipeline needs to be executed with different settings of parameters, e.g. thresholds, underlying algorithms, etc. these WMS require significant scripting to ensure an optimal execution. We developed JUDI on top of DoIt, a Python based WMS, to systematically handle parameter settings based on the principles of database management systems. Using a novel modular approach that encapsulates a parameter database in each task and file associated with a pipeline stage, JUDI simplifies plug-and-play of the pipeline stages. For a typical pipeline with n parameters, JUDI reduces the number of lines of scripting required by a factor of O(n). With properly designed parameter databases, JUDI not only enables reproducing research under published values of parameters but also facilitates exploring newer results under novel parameter settings. Availability and implementation: https://github.com/ncbi/JUDI Supplementary information: Supplementary data are available at Bioinformatics online.
33

Von Dreele, Robert. "Protein refinement with GSAS-II." Powder Diffraction 34, S1 (April 26, 2019): S32—S35. http://dx.doi.org/10.1017/s0885715619000204.

Abstract:
The General Structure Analysis System (GSAS)-II software package is a fully developed, open source, crystallographic data analysis system written almost entirely in Python. For powder diffraction, it encompasses the entire data analysis process beginning with 2-dimensonal image integration, peak selection, fitting and indexing, followed by intensity extraction, structure solution and ultimately Rietveld refinement, all driven by an intuitive graphical interface. Significant functionality of GSAS-II also can be scripted to allow it to be integrated into workflows or other software. For protein studies, it includes restraints on bond distances, angles, torsions, chiral volumes and coupled torsions (e.g. Ramachandran Φ/Ψ angles) each with graphical displays allowing visual validation. Each amino acid residue (and any ligands) can be represented by flexible rigid bodies with refinable internal torsions and optionally fully described TLS thermal motion. The least-squares algorithm invokes a Levenberg-Marquart minimization of a normalized double precision full matrix via Singular Value Decomposition providing fast convergence and high stability even for a large number of parameters. This paper will focus on the description of the flexible rigid body model of the protein and the details of the refinement algorithm.
34

Vodopyanov, Alexey S., Yury N. Khomyakov, Ruslan V. Pisanov, Angelina Yu Furina, Anton A. Lopatin, and Alexey K. Noskov. "Development of a program for automated recording of the results of polymerase chain reaction studies in real time in the conditions of a massive intake of biological material during the COVID-19 pandemic." Epidemiology and Infectious Diseases 25, no. 2 (November 23, 2020): 102–8. http://dx.doi.org/10.17816/eid46555.

Abstract:
With regard to the rapid spread of the latest coronavirus infection (COVID-19) in the Russian Federation in 2020, 70 workplaces were organized in Antiplague Center of Rospotrebnadzor and were seconded by specialists from the Rospotrebnadzor research antiplague institutes. However, the round-the-clock three-shift mode of operation significantly complicates the organization and documentation of the studies and increases the risk of errors. Subsequently in Antiplague Center of Rospotrebnadzor, we have conducted the work to automate the most problematic stages of conducting polymerase chain reaction (PCR) studies for the latest coronavirus infection and to develop an algorithm for real-time monitoring of the results. The development of our own software solutions was carried out in Python 3.8.2. The initial data for automation were.xlsx files automatically generated by the thermocycler software and typical tabular templates filled in at the sample analysis and RNA extraction stages. The software we developed consolidated the data into a single file register to detect potential errors simultaneously (e.g., the presence of duplicates, differences in the lists of samples at different stages, etc.). Using the Python scripting language provides cross-platform functionality (the ability to work in any operating system) and allows you to easily and quickly modify the system when changing any parameters or input file structure. Thus, 7 days were spent on the development and commissioning of this software complex, which is particularly important when working in an emergency and high alert mode. Therefore, using the approach we developed made it possible to more quickly detect technical errors, discordant results, and samples requiring re-examination, which in turn reduced the time for issuing results.
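The consolidation-and-checking step described can be sketched with pandas; the folder layout and the 'sample_id' column below are hypothetical stand-ins for the thermocycler export format.

```python
# Merge per-run spreadsheets into one register and flag duplicate sample identifiers.
import glob
import pandas as pd

frames = []
for path in glob.glob("runs/*.xlsx"):                 # thermocycler exports (hypothetical path)
    df = pd.read_excel(path)
    df["source_file"] = path
    frames.append(df)

register = pd.concat(frames, ignore_index=True)

# Potential errors: the same sample ID appearing in more than one run or plate.
dupes = register[register.duplicated(subset="sample_id", keep=False)]
print(f"{len(register)} rows consolidated, {len(dupes)} rows with duplicated sample IDs")
```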
35

Berzano, Dario, Chris Burr, Hans Beck, Violaine Bellée, Redmer Alexander Bertens, and Albert Puig Navarro. "Software training for the next generation of physicists: joint experience of LHCb and ALICE." EPJ Web of Conferences 214 (2019): 05044. http://dx.doi.org/10.1051/epjconf/201921405044.

Full text
Abstract:
Good software training is essential in the HEP community. Unfortunately, current training is non-homogeneous and the definition of a common baseline is unclear, making it difficult for newcomers to proficiently join large collaborations such as ALICE or LHCb. In recent years, both collaborations have started separate efforts to tackle this issue through training workshops, via Analysis Tutorials (organized by the ALICE Juniors since 2014) and the Starterkit (organized by LHCb students since 2015). In 2017, ALICE and LHCb joined efforts for the first time to provide combined training by identifying common topics, such as version control systems (Git) and programming languages (e.g. Python). Given the positive experience and feedback, this collaboration will be repeated in the future. We will illustrate the teaching methods, experience and feedback from our first common training workshop. We will also discuss our efforts to extend our format to other HEP experiments for future iterations.
APA, Harvard, Vancouver, ISO, and other styles
36

Immer, Marc, and Philipp Georg Juretzko. "Advanced aircraft performance analysis." Aircraft Engineering and Aerospace Technology 90, no. 4 (May 8, 2018): 627–38. http://dx.doi.org/10.1108/aeat-11-2016-0205.

Full text
Abstract:
Purpose The preliminary aircraft design process comprises multiple disciplines. During performance analysis, parameters of the design mission have to be optimized. Mission performance optimization is often challenging, especially for complex mission profiles (e.g. for unmanned aerial vehicles [UAVs]) or hybrid-electric propulsion. Therefore, the purpose of this study is to find a methodology that supports aircraft performance analysis and that is applicable to complex profiles and to novel designs. Design/methodology/approach As its core element, the developed method uses a computationally efficient C++ software “Aircraft Performance Program” (APP), which performs a segment-based mission computation. APP performs a time integration of the equations of motion of a point mass in the vertical plane. APP is called via a command line interface from a flexible scripting language (Python). On top of APP’s internal radius of action optimization, state-of-the-art optimization packages (SciPy) are used. Findings The application of the method to a conventional climb schedule shows that the definition of the top of climb has a significant influence on the resulting optimum. Application of the method to a complex UAV mission optimization, which included maximizing the radius of action, was successful. Low computation time enables to perform large parametric studies. This greatly improves the interpretation of the results. Research limitations/implications The scope of the paper is limited to the methodology that allows for advanced performance analysis at the conceptual and preliminary design stages with an emphasis on novel propulsion concepts. The methodology is developed using existing, validated methods, and therefore, this paper does not contain comprehensive validation. Other disciplines, such as cost analysis, life-cycle assessment or market analysis, are not considered. Practical implications With the proposed method, it is possible to obtain not only the desired optimum mission performance but also off-design performance of the investigated design. A thorough analysis of the mission performance provides insight into the design’s capabilities and shortcomings, ultimately aiding in obtaining a more efficient design. Originality/value Recent developments in the area of hybrid or hybrid-electric propulsion systems have shown the need for performance computation tools aiding the related design process. The presented method is especially valuable when novel design concepts with complex mission profiles are investigated.
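The pattern of driving a command-line mission solver from Python and wrapping it in a SciPy optimizer can be sketched as follows. The executable name, flags, mission file and output format are placeholders, since APP's actual interface is not given in the abstract; only the overall subprocess-plus-SciPy pattern is illustrated.

```python
import subprocess
from scipy.optimize import minimize_scalar

def mission_fuel(cruise_altitude_m):
    """Run the (hypothetical) 'app' mission solver and return fuel burned in kg."""
    out = subprocess.run(
        ["app", "--mission", "mission.xml", "--cruise-alt", str(cruise_altitude_m)],
        capture_output=True, text=True, check=True,
    )
    # Assume the solver prints a single number (fuel mass) on its last line.
    return float(out.stdout.strip().splitlines()[-1])

# Optimize the cruise altitude for minimum fuel within a plausible band.
result = minimize_scalar(mission_fuel, bounds=(8000.0, 13000.0), method="bounded")
print(result.x, result.fun)
```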
APA, Harvard, Vancouver, ISO, and other styles
37

Bigan, Erwan, Satish Sasidharan Nair, François-Xavier Lejeune, Hélissande Fragnaud, Frédéric Parmentier, Lucile Mégret, Marc Verny, Jeff Aaronson, Jim Rosinski, and Christian Neri. "Genetic cooperativity in multi-layer networks implicates cell survival and senescence in the striatum of Huntington’s disease mice synchronous to symptoms." Bioinformatics 36, no. 1 (June 22, 2019): 186–96. http://dx.doi.org/10.1093/bioinformatics/btz514.

Full text
Abstract:
Abstract Motivation Huntington’s disease (HD) may evolve through gene deregulation. However, the impact of gene deregulation on the dynamics of genetic cooperativity in HD remains poorly understood. Here, we built a multi-layer network model of temporal dynamics of genetic cooperativity in the brain of HD knock-in mice (allelic series of Hdh mice). To enhance biological precision and gene prioritization, we integrated three complementary families of source networks, all inferred from the same RNA-seq time series data in Hdh mice, into weighted-edge networks where an edge recapitulates path-length variation across source-networks and age-points. Results Weighted edge networks identify two consecutive waves of tight genetic cooperativity enriched in deregulated genes (critical phases), pre-symptomatically in the cortex, implicating neurotransmission, and symptomatically in the striatum, implicating cell survival (e.g. Hipk4) intertwined with cell proliferation (e.g. Scn4b) and cellular senescence (e.g. Cdkn2a products) responses. Top striatal weighted edges are enriched in modulators of defective behavior in invertebrate models of HD pathogenesis, validating their relevance to neuronal dysfunction in vivo. Collectively, these findings reveal highly dynamic temporal features of genetic cooperativity in the brain of Hdh mice where a 2-step logic highlights the importance of cellular maintenance and senescence in the striatum of symptomatic mice, providing highly prioritized targets. Availability and implementation Weighted edge network analysis (WENA) data and source codes for performing spectral decomposition of the signal (SDS) and WENA analysis, both written using Python, are available at http://www.broca.inserm.fr/HD-WENA/. Supplementary information Supplementary data are available at Bioinformatics online.
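One way to picture the weighted-edge construction described above is the following networkx sketch, in which an edge weight summarizes how much the shortest-path length between two genes varies across network layers. This is an illustrative reading of the abstract with toy graphs and an invented function name, not the published WENA code.

```python
import itertools
import networkx as nx
import numpy as np

def weighted_edge_network(layers, genes):
    """Build a weighted graph whose edge weights capture path-length
    variation across source-network layers (e.g., one layer per age point
    or inference method)."""
    combined = nx.Graph()
    for u, v in itertools.combinations(genes, 2):
        lengths = []
        for layer in layers:
            if nx.has_path(layer, u, v):
                lengths.append(nx.shortest_path_length(layer, u, v))
        if len(lengths) == len(layers):      # keep pairs connected in every layer
            combined.add_edge(u, v, weight=float(np.std(lengths)))
    return combined

# Toy layers standing in for networks inferred at two age points.
g1 = nx.path_graph(["A", "B", "C", "D"])
g2 = nx.cycle_graph(["A", "B", "C", "D"])
wen = weighted_edge_network([g1, g2], ["A", "B", "C", "D"])
print(wen.edges(data=True))
```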
APA, Harvard, Vancouver, ISO, and other styles
38

Villaça, Caio Vidaurre Nassif, Alvaro Penteado Crósta, and Carlos Henrique Grohmann. "Morphometric Analysis of Pluto’s Impact Craters." Remote Sensing 13, no. 3 (January 22, 2021): 377. http://dx.doi.org/10.3390/rs13030377.

Full text
Abstract:
The scope of this work is to carry out a morphometric analysis of Pluto’s impact craters. A global Pluto digital elevation model (DEM) with a resolution of 300 m/px, created from stereoscopic pairs obtained by the New Horizons Mission, was used to extract the morphometric data of craters. Pluto’s surface was divided according to different morphometric characteristics in order to analyze possible differences in the impact dynamics and modification rate in each region. A Python code was developed, within the QGIS 3.x software environment, to automate the process of crater outlining and the collection of morphometric data: diameter (D), depth (d), depth variation, slope of the inner wall (Sw), diameter of the base (Db), and the width of the wall (Ww). Data have been successfully obtained for 237 impact craters on five distinct terrains over the west side of Sputnik Planitia on Pluto. With the collected data, it was possible to observe that craters near the equator (areas 3 and 4) are deeper than craters above 35°N (areas 1 and 2). Craters in the western regions (areas 2 and 3) have the lowest depth values for a given diameter. The transition diameter from simple to complex crater morphology was found to change across the areas of study. Craters within areas 1 and 4 exhibit a transition diameter (Dt) of approximately 10 km, while for craters within areas 3 and 5 the transition occurs at approximately 15 km. The presence of volatile ices in the north and north-west regions may be the reason for the difference in morphometry between these two terrains of Pluto. Two hypotheses are presented to explain these differences: (1) The presence of volatile ices can affect the formation of craters by making the target surface weaker and more susceptible to major changes (e.g., mass wasting and collapse of the walls) during the formation process until its final stage; (2) The high concentration of volatiles can affect the depth of the craters by atmospheric decantation, considering that these elements undergo seasonal decantation and sublimation cycles.
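The morphometric quantities listed in the abstract (depth, depth/diameter ratio, inner-wall slope, wall width) reduce to simple arithmetic once rim and floor elevations have been extracted from the DEM. The sketch below uses invented values and simplified geometry purely to illustrate the kind of per-crater bookkeeping such a QGIS/Python code automates.

```python
import numpy as np

# Hypothetical per-crater measurements extracted from a DEM (metres).
rim_elev   = np.array([1200.0, 950.0, 1430.0])            # mean rim elevation
floor_elev = np.array([ 350.0, 600.0,  280.0])            # mean floor elevation
diameter   = np.array([12_000.0, 8_000.0, 21_000.0])      # rim-to-rim diameter D
base_diam  = np.array([ 7_000.0, 5_500.0, 15_000.0])      # flat-floor diameter Db

depth = rim_elev - floor_elev                              # d
d_over_D = depth / diameter                                # depth/diameter ratio
wall_width = (diameter - base_diam) / 2.0                  # Ww
wall_slope = np.degrees(np.arctan2(depth, wall_width))     # Sw, in degrees

for i in range(len(depth)):
    print(f"D={diameter[i]/1000:.1f} km  d={depth[i]:.0f} m  "
          f"d/D={d_over_D[i]:.3f}  Sw={wall_slope[i]:.1f} deg")
```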
APA, Harvard, Vancouver, ISO, and other styles
39

Gao, Dasong, Paul R. Barber, Jenu V. Chacko, Md Abdul Kader Sagar, Curtis T. Rueden, Aivar R. Grislis, Mark C. Hiner, and Kevin W. Eliceiri. "FLIMJ: An open-source ImageJ toolkit for fluorescence lifetime image data analysis." PLOS ONE 15, no. 12 (December 30, 2020): e0238327. http://dx.doi.org/10.1371/journal.pone.0238327.

Full text
Abstract:
In the field of fluorescence microscopy, there is continued demand for dynamic technologies that can exploit the complete information from every pixel of an image. One imaging technique with proven ability for yielding additional information from fluorescence imaging is Fluorescence Lifetime Imaging Microscopy (FLIM). FLIM allows for the measurement of how long a fluorophore stays in an excited energy state, and this measurement is affected by changes in its chemical microenvironment, such as proximity to other fluorophores, pH, and hydrophobic regions. This ability to provide information about the microenvironment has made FLIM a powerful tool for cellular imaging studies ranging from metabolic measurement to measuring distances between proteins. The increased use of FLIM has necessitated the development of computational tools for integrating FLIM analysis with image and data processing. To address this need, we have created FLIMJ, an ImageJ plugin and toolkit that allows for easy use and development of extensible image analysis workflows with FLIM data. Built on the FLIMLib decay curve fitting library and the ImageJ Ops framework, FLIMJ offers FLIM fitting routines with seamless integration with many other ImageJ components, and the ability to be extended to create complex FLIM analysis workflows. Building on ImageJ Ops also enables FLIMJ’s routines to be used with Jupyter notebooks and integrate naturally with science-friendly programming in, e.g., Python and Groovy. We show the extensibility of FLIMJ in two analysis scenarios: lifetime-based image segmentation and image colocalization. We also validate the fitting routines by comparing them against industry FLIM analysis standards.
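The core operation in FLIM analysis, estimating a lifetime from a per-pixel decay curve, can be sketched with SciPy. FLIMJ itself runs on the FLIMLib library inside ImageJ, so the snippet below is only a Python analogue of a mono-exponential fit on synthetic data, not the FLIMJ API.

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(t, amplitude, tau, offset):
    """Single-exponential fluorescence decay model."""
    return amplitude * np.exp(-t / tau) + offset

# Synthetic decay: 2.5 ns lifetime sampled over a 12.5 ns window.
t = np.linspace(0.0, 12.5, 256)                     # nanoseconds
rng = np.random.default_rng(1)
counts = mono_exp(t, 1000.0, 2.5, 20.0) + rng.normal(scale=10.0, size=t.size)

popt, _ = curve_fit(mono_exp, t, counts, p0=(800.0, 2.0, 0.0))
print(f"fitted lifetime: {popt[1]:.2f} ns")
```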
APA, Harvard, Vancouver, ISO, and other styles
40

Degeling, Koen, Maarten J. IJzerman, Mariel S. Lavieri, Mark Strong, and Hendrik Koffijberg. "Introduction to Metamodeling for Reducing Computational Burden of Advanced Analyses with Health Economic Models: A Structured Overview of Metamodeling Methods in a 6-Step Application Process." Medical Decision Making 40, no. 3 (April 2020): 348–63. http://dx.doi.org/10.1177/0272989x20912233.

Full text
Abstract:
Metamodels can be used to reduce the computational burden associated with computationally demanding analyses of simulation models, although applications within health economics are still scarce. Besides a lack of awareness of their potential within health economics, the absence of guidance on the conceivably complex and time-consuming process of developing and validating metamodels may contribute to their limited uptake. To address these issues, this article introduces metamodeling to the wider health economic audience and presents a process for applying metamodeling in this context, including suitable methods and directions for their selection and use. General (i.e., non–health economic specific) metamodeling literature, clinical prediction modeling literature, and a previously published literature review were exploited to consolidate a process and to identify candidate metamodeling methods. Methods were considered applicable to health economics if they are able to account for mixed (i.e., continuous and discrete) input parameters and continuous outcomes. Six steps were identified as relevant for applying metamodeling methods within health economics: 1) the identification of a suitable metamodeling technique, 2) simulation of data sets according to a design of experiments, 3) fitting of the metamodel, 4) assessment of metamodel performance, 5) conducting the required analysis using the metamodel, and 6) verification of the results. Different methods are discussed to support each step, including their characteristics, directions for use, key references, and relevant R and Python packages. To address challenges regarding metamodeling methods selection, a first guide was developed toward using metamodels to reduce the computational burden of analyses of health economic models. This guidance may increase applications of metamodeling in health economics, enabling increased use of state-of-the-art analyses (e.g., value of information analysis) with computationally burdensome simulation models.
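The six steps can be illustrated end-to-end with a toy example: a Latin hypercube design, a Gaussian process metamodel of a stand-in simulation model, and a hold-out performance check. The "simulation model" here is just a cheap analytic function used for demonstration; it is not taken from the paper.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.metrics import r2_score

def simulation_model(x):
    """Stand-in for an expensive health economic simulation (2 inputs -> 1 outcome)."""
    return np.sin(3.0 * x[:, 0]) + 0.5 * x[:, 1] ** 2

# Step 2: simulate a training set according to a design of experiments.
sampler = qmc.LatinHypercube(d=2, seed=0)
X_train = sampler.random(n=80)
y_train = simulation_model(X_train)

# Step 3: fit the metamodel.
metamodel = GaussianProcessRegressor(normalize_y=True).fit(X_train, y_train)

# Step 4: assess performance on an independent test design.
X_test = sampler.random(n=40)
print("R^2 on hold-out runs:",
      r2_score(simulation_model(X_test), metamodel.predict(X_test)))

# Steps 5-6: the cheap metamodel can now replace the simulation in, e.g.,
# probabilistic sensitivity or value-of-information analyses, with spot checks
# against the original model to verify the results.
```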
APA, Harvard, Vancouver, ISO, and other styles
41

Chatterjee, Preetha, Kostadin Damevski, Nicholas A. Kraft, and Lori Pollock. "Automatically Identifying the Quality of Developer Chats for Post Hoc Use." ACM Transactions on Software Engineering and Methodology 30, no. 4 (July 2021): 1–28. http://dx.doi.org/10.1145/3450503.

Full text
Abstract:
Software engineers are crowdsourcing answers to their everyday challenges on Q&A forums (e.g., Stack Overflow) and more recently in public chat communities such as Slack, IRC, and Gitter. Many software-related chat conversations contain valuable expert knowledge that is useful for both mining to improve programming support tools and for readers who did not participate in the original chat conversations. However, most chat platforms and communities do not contain built-in quality indicators (e.g., accepted answers, vote counts). Therefore, it is difficult to identify conversations that contain useful information for mining or reading, i.e., conversations of post hoc quality. In this article, we investigate automatically detecting developer conversations of post hoc quality from public chat channels. We first describe an analysis of 400 developer conversations that indicate potential characteristics of post hoc quality, followed by a machine learning-based approach for automatically identifying conversations of post hoc quality. Our evaluation of 2,000 annotated Slack conversations in four programming communities (python, clojure, elm, and racket) indicates that our approach can achieve precision of 0.82, recall of 0.90, F-measure of 0.86, and MCC of 0.57. To our knowledge, this is the first automated technique for detecting developer conversations of post hoc quality.
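A minimal version of the classification-and-evaluation setup described above reads as follows. The conversation snippets, labels and choice of a TF-IDF plus logistic regression pipeline are toy placeholders, not the paper's engineered conversation characteristics or model.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score, f1_score, matthews_corrcoef
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Toy data: conversation text and a manual post hoc quality label (1 = useful).
conversations = [
    "how do I profile a slow python loop",   "thanks that fixed it, cProfile helped",
    "anyone around?",                         "lol good morning",
    "use a virtualenv and pin your deps",     "what time is the meetup",
] * 20
labels = [1, 1, 0, 0, 1, 0] * 20

X_train, X_test, y_train, y_test = train_test_split(
    conversations, labels, test_size=0.3, random_state=42, stratify=labels)

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)
pred = clf.predict(X_test)

print("precision", precision_score(y_test, pred),
      "recall", recall_score(y_test, pred),
      "F1", f1_score(y_test, pred),
      "MCC", matthews_corrcoef(y_test, pred))
```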
APA, Harvard, Vancouver, ISO, and other styles
42

Vensko, Steven, Benjamin Vincent, and Dante Bortone. "485 RAFT: A framework to support rapid and reproducible immuno-oncology analyses." Journal for ImmunoTherapy of Cancer 8, Suppl 3 (November 2020): A521. http://dx.doi.org/10.1136/jitc-2020-sitc2020.0485.

Full text
Abstract:
Background: Analysis reproducibility and transparency are pillars of robust and trustworthy scientific results. The dependability of these results is crucial in clinical settings where they may guide high-impact decisions affecting patient health. Independent reproduction of computational results has been problematic and can be a burden on the individuals attempting to reproduce the results. Reproduction complications may arise from: 1) insufficiently described parameters, 2) vague methods, or 3) secret scripts required to generate final outputs, among others. Here we introduce RAFT (Reproducible Analyses Framework and Tools), a framework for immuno-oncology biomarker development built with Python 3 and Nextflow DSL2 which aims to enable end-to-end reproducibility of entire computational analyses in multiple contexts (e.g. local, compute cluster, or cloud) with minimal overhead through a focus on usability (figures 1 and 2). Methods: RAFT builds upon Nextflow’s DSL2 module-based approach to workflows by providing a ‘project’ context upon which users can add metadata, load references, and build up their analysis step-by-step. RAFT also has pre-built modules with workflows commonly utilized in immuno-oncology analyses (e.g. TCR/BCR repertoire reconstruction and HLA typing) and aids users through automatic module dependency resolution. Transparency is gained by having a single end-to-end script containing all steps and parameters as well as a single configuration file. Finally, RAFT allows users to create and share a package of project metadata files including the main script, all input and output checksums, all modules, and the RAFT steps required to create the analysis. This package, coupled with any required input files, can be used to recreate the analysis or further expand an analysis with additional datasets or alternative parameters. Results: RAFT has been used by our computational team to create an immuno-oncology meta-analysis submitted to SITC 2020. A simple, proof-of-concept analysis has been used to establish RAFT’s ability to support reproducibility by running locally on laptop computers, on multiple research compute clusters, and on the Google Cloud Platform. Abstract 485, Figure 1: Example RAFT Usage. Users define their required inputs, build their analysis, and run their analysis using the RAFT command-line interface. The metadata from the analysis can then be shared through a RAFT package with collaborators or interested third parties in order to reproduce or expand upon the initial results. Abstract 485, Figure 2: End-to-end RAFT. RAFT supports end-to-end analysis development through a ‘project’ structure. Users link local required files (e.g. FASTQs, references or manifests) into their appropriate /raft subdirectory. (1) Projects are initiated using the raft init-project command, which creates and populates a project-specific directory. (2–3) Users then load required metadata (e.g. sample manifests or clinical data) and references (e.g. alignment references) into the project using the raft load-metadata or raft load-reference commands, respectively. (4) Modules consisting of tool-specific and topical workflows are cloned from a collection of remote repositories into the project using raft load-module. (5) Specific processes and workflows from previously loaded modules are added to the analysis (main.nf) through raft add-step. Users can then modify main.nf with their desired parameters and execute the workflow using raft run-workflow. (6) Additionally, RAFT allows an iterative approach where results from RAFT can be analyzed and modified through RStudio and re-run through Nextflow. Conclusions: The RAFT platform shows promising capabilities to support rapid and reproducible research within the field of immuno-oncology. Several features remain in development and testing, such as the incorporation of additional immunogenomics feature modules such as variant/fusion detection and HLA/peptide binding affinity estimation. Other functionality in development will enable collaborators to use remote Git repository hosting (e.g. GitHub or GitLab) to jointly and iteratively modify an analysis.
APA, Harvard, Vancouver, ISO, and other styles
43

Menegon, Stefano, Alessandro Sarretta, Daniel Depellegrin, Giulio Farella, Chiara Venier, and Andrea Barbanti. "Tools4MSP: an open source software package to support Maritime Spatial Planning." PeerJ Computer Science 4 (October 1, 2018): e165. http://dx.doi.org/10.7717/peerj-cs.165.

Full text
Abstract:
This paper presents the Tools4MSP software package, a Python-based Free and Open Source Software (FOSS) for geospatial analysis in support of Maritime Spatial Planning (MSP) and marine environmental management. The suite was initially developed within the ADRIPLAN data portal, which has recently been upgraded into the Tools4MSP Geoplatform (data.tools4msp.eu), an integrated web platform that supports MSP through the application of different tools, e.g., collaborative geospatial modelling of cumulative effects assessment (CEA) and marine use conflict (MUC) analysis. The package can be used as a stand-alone library or as a collaborative web tool, providing user-friendly interfaces appropriate to decision-makers, regional authorities, academics and MSP stakeholders. An effective MSP-oriented integrated system of web-based software, users and services is proposed. It includes four components: the Tools4MSP Geoplatform for interoperable and collaborative sharing of geospatial datasets and for MSP-oriented analysis, the Tools4MSP package as a stand-alone library for advanced geospatial and statistical analysis, the desktop applications to simplify data curation, and the third-party data repositories for multidisciplinary and multilevel geospatial dataset integration. The paper presents an application example of the Tools4MSP GeoNode plugin and an example of the Tools4MSP stand-alone library for CEA in the Adriatic Sea. Tools4MSP and the developed software have been released as FOSS under the GPL 3 license and are currently under further development.
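A cumulative effects assessment of this kind is, at its core, a weighted overlay of human-use intensities, environmental-component presence and use-component sensitivities. The raster sketch below illustrates that general formula with made-up arrays and invented names; it is not the Tools4MSP API.

```python
import numpy as np

rng = np.random.default_rng(0)
grid = (50, 50)                                   # toy raster of the study area

# Intensity maps for two maritime uses (e.g., shipping, aquaculture), 0-1 scale.
uses = {"shipping": rng.random(grid), "aquaculture": rng.random(grid)}

# Presence maps for two environmental components (e.g., seagrass, seabirds).
components = {"seagrass": rng.random(grid) > 0.6, "seabirds": rng.random(grid) > 0.7}

# Sensitivity scores s(use, component), normally expert-elicited.
sensitivity = {
    ("shipping", "seagrass"): 0.3, ("shipping", "seabirds"): 0.6,
    ("aquaculture", "seagrass"): 0.8, ("aquaculture", "seabirds"): 0.2,
}

# CEA(cell) = sum over uses and components of intensity * presence * sensitivity.
cea = np.zeros(grid)
for (use, comp), s in sensitivity.items():
    cea += uses[use] * components[comp] * s

print("max cumulative effect score:", cea.max())
```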
APA, Harvard, Vancouver, ISO, and other styles
44

Bratic, G., M. A. Brovelli, and M. E. Molinari. "A FREE AND OPEN SOURCE TOOL TO ASSESS THE ACCURACY OF LAND COVER MAPS: IMPLEMENTATION AND APPLICATION TO LOMBARDY REGION (ITALY)." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-3 (April 30, 2018): 87–92. http://dx.doi.org/10.5194/isprs-archives-xlii-3-87-2018.

Full text
Abstract:
The availability of thematic maps has significantly increased over the last few years. Validation of these maps is a key factor in assessing their suitability for different applications. The evaluation of the accuracy of classified data is carried out through a comparison with a reference dataset and the generation of a confusion matrix, from which many quality indexes can be derived. In this work, an ad hoc free and open source Python tool was implemented to automatically compute all the confusion matrix-derived accuracy indexes proposed in the literature. The tool was integrated into the GRASS GIS environment and successfully applied to evaluate the quality of three high-resolution global datasets (GlobeLand30, Global Urban Footprint, Global Human Settlement Layer Built-Up Grid) in the Lombardy Region area (Italy). In addition to the most commonly used accuracy measures, e.g. overall accuracy and Kappa, the tool allowed less well-known indexes, such as the Ground Truth and the Classification Success Index, to be computed and investigated. The promising tool will be further extended with spatial autocorrelation analysis functions and made available to the research and user community.
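For reference, the two most common confusion-matrix indexes mentioned above (overall accuracy and Cohen's Kappa), together with per-class producer's and user's accuracy, reduce to a few NumPy lines. The matrix values are invented, and this is generic bookkeeping rather than the GRASS GIS tool itself.

```python
import numpy as np

# Confusion matrix: rows = classified classes, columns = reference classes.
cm = np.array([[120,  10,   5],
               [  8, 200,  12],
               [  4,   9,  90]], dtype=float)

n = cm.sum()
overall_accuracy = np.trace(cm) / n

# Kappa compares observed agreement with the agreement expected by chance.
expected = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / n**2
kappa = (overall_accuracy - expected) / (1.0 - expected)

# Per-class producer's and user's accuracy (omission/commission view).
producers = np.diag(cm) / cm.sum(axis=0)
users = np.diag(cm) / cm.sum(axis=1)

print(f"OA={overall_accuracy:.3f}  Kappa={kappa:.3f}")
print("producer's accuracy:", np.round(producers, 3))
print("user's accuracy:", np.round(users, 3))
```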
APA, Harvard, Vancouver, ISO, and other styles
45

McLuskey, Karen, Joe Wandy, Isabel Vincent, Justin J. J. van der Hooft, Simon Rogers, Karl Burgess, and Rónán Daly. "Ranking Metabolite Sets by Their Activity Levels." Metabolites 11, no. 2 (February 11, 2021): 103. http://dx.doi.org/10.3390/metabo11020103.

Full text
Abstract:
Related metabolites can be grouped into sets in many ways, e.g., by their participation in series of chemical reactions (forming metabolic pathways), or based on fragmentation spectral similarities or shared chemical substructures. Understanding how such metabolite sets change in relation to experimental factors can be incredibly useful in the interpretation and understanding of complex metabolomics data sets. However, many of the available tools that are used to perform this analysis are not entirely suitable for the analysis of untargeted metabolomics measurements. Here, we present PALS (Pathway Activity Level Scoring), a Python library, command line tool, and Web application that performs the ranking of significantly changing metabolite sets over different experimental conditions. The main algorithm in PALS is based on the pathway level analysis of gene expression (PLAGE) factorisation method and is denoted as mPLAGE (PLAGE for metabolomics). As an example of an application, PALS is used to analyse metabolites grouped as metabolic pathways and by shared tandem mass spectrometry fragmentation patterns. A comparison of mPLAGE with two other commonly used methods (overrepresentation analysis (ORA) and gene set enrichment analysis (GSEA)) is also given and reveals that mPLAGE is more robust to missing features and noisy data than the alternatives. As further examples, PALS is also applied to human African trypanosomiasis, Rhamnaceae, and American Gut Project data. In addition, normalisation can have a significant impact on pathway analysis results, and PALS offers a framework to further investigate this. PALS is freely available from our project Web site.
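The PLAGE-style score underlying mPLAGE is essentially the leading singular vector of the standardized intensity submatrix for a metabolite set, giving one activity value per sample. The compact NumPy sketch below illustrates that idea on random data; it is not the PALS code itself and the function name is invented.

```python
import numpy as np

def plage_activity(intensities):
    """Summarize a (metabolites x samples) intensity block as one activity
    value per sample, using the leading singular vector (PLAGE-style)."""
    # Standardize each metabolite across samples.
    z = intensities - intensities.mean(axis=1, keepdims=True)
    z /= intensities.std(axis=1, keepdims=True)
    # The first right singular vector gives the per-sample activity level.
    _, _, vt = np.linalg.svd(z, full_matrices=False)
    return vt[0]

rng = np.random.default_rng(0)
pathway_block = rng.lognormal(size=(12, 6))       # 12 metabolites, 6 samples
print(plage_activity(pathway_block))              # one activity score per sample
```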
APA, Harvard, Vancouver, ISO, and other styles
46

Prasanna, Shivika, Naveen Premnath, Suveen Angraal, Ramy Sedhom, Rohan Khera, Helen Parsons, Syed Hussaini, et al. "Sentiment analysis of tweets on prior authorization." Journal of Clinical Oncology 39, no. 28_suppl (October 1, 2021): 322. http://dx.doi.org/10.1200/jco.2020.39.28_suppl.322.

Full text
Abstract:
322 Background: Natural language processing (NLP) algorithms can be leveraged to better understand prevailing themes in healthcare conversations. Sentiment analysis, an NLP technique to analyze and interpret sentiments from text, has been validated on Twitter in tracking natural disasters and disease outbreaks. To establish its role in healthcare discourse, we sought to explore the feasibility and accuracy of sentiment analysis on Twitter posts ("tweets") related to prior authorizations (PAs), a common occurrence in oncology built to curb payer concerns about the costs of cancer care, but which can obstruct timely and appropriate care and increase administrative burden and clinician frustration. Methods: We identified tweets related to PAs between 03/09/2021-04/29/2021 using pre-specified keywords [e.g., #priorauth etc.] and used Twarc, a command-line tool and Python library for archiving Twitter JavaScript Object Notation data. We performed sentiment analysis using two NLP models: (1) TextBlob (trained on movie reviews); and (2) VADER (trained on social media). These models report a polarity score between -1 and 1, interpreted as a "positive" (>0), "neutral" (exactly 0), or "negative" (<0) sentiment. We (AG, NP) manually reviewed all tweets to establish the ground truth (the human interpretation of reality), including a notation for sarcasm, since the models are not trained to detect sarcasm. We calculated the precision (positive predictive value), recall (sensitivity), and F1-score (a measure of accuracy, range 0-1, 0=failure, 1=perfect) for the models vs. the ground truth. Results: After preprocessing, 964 tweets (mean 137/week) met our inclusion criteria for sentiment analysis. The two existing NLP models labeled 42.4%-43.3% of tweets as positive, as compared to the ground truth (5.6% of tweets positive). F1-scores of the models across labels ranged from 0.18-0.54. We noted sarcasm in 2.8% of tweets. Detailed results in Table. Conclusions: We demonstrate the feasibility of performing sentiment analysis on a topic of high interest within clinical oncology and the deficiency of existing NLP models in capturing sentiment within oncologic Twitter discourse. Ongoing iterations of this work further train these models through better identification of the tweeter (patient vs. health care worker) and other analytics from shared content. [Table: see text]
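The two off-the-shelf models named above can be scored against manual labels in only a few lines. The example tweets and ground-truth labels below are invented placeholders; the TextBlob and vaderSentiment calls are the libraries' standard polarity interfaces.

```python
from textblob import TextBlob
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
from sklearn.metrics import classification_report

def to_label(score):
    return "positive" if score > 0 else "neutral" if score == 0 else "negative"

tweets = [
    "Prior auth denied again, my patient waits another week",
    "Shout out to the nurse who got this prior auth approved in a day",
    "Filing prior authorization paperwork all afternoon",
]
ground_truth = ["negative", "positive", "negative"]   # manual review

vader = SentimentIntensityAnalyzer()
textblob_pred = [to_label(TextBlob(t).sentiment.polarity) for t in tweets]
vader_pred = [to_label(vader.polarity_scores(t)["compound"]) for t in tweets]

print(classification_report(ground_truth, textblob_pred, zero_division=0))
print(classification_report(ground_truth, vader_pred, zero_division=0))
```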
APA, Harvard, Vancouver, ISO, and other styles
47

Martí-Gómez, Carlos, Enrique Lara-Pezzi, and Fátima Sánchez-Cabo. "dSreg: a Bayesian model to integrate changes in splicing and RNA-binding protein activity." Bioinformatics 36, no. 7 (December 13, 2019): 2134–41. http://dx.doi.org/10.1093/bioinformatics/btz915.

Full text
Abstract:
Abstract Motivation Alternative splicing (AS) is an important mechanism in the generation of transcript diversity across mammals. AS patterns are dynamically regulated during development and in response to environmental changes. Defects or perturbations in its regulation may lead to cancer or neurological disorders, among other pathological conditions. The regulatory mechanisms controlling AS in a given biological context are typically inferred using a two-step framework: differential AS analysis followed by enrichment methods. These strategies require setting rather arbitrary thresholds and are prone to error propagation along the analysis. Results To overcome these limitations, we propose dSreg, a Bayesian model that integrates RNA-seq with data from regulatory features, e.g. binding sites of RNA-binding proteins. dSreg identifies the key underlying regulators controlling AS changes and quantifies their activity while simultaneously estimating the changes in exon inclusion rates. dSreg increased both the sensitivity and the specificity of the identified AS changes in simulated data, even at low read coverage. dSreg also showed improved performance when analyzing a collection of knock-down RNA-binding proteins’ experiments from ENCODE, as opposed to traditional enrichment methods, such as over-representation analysis and gene set enrichment analysis. dSreg opens the possibility to integrate a large amount of readily available RNA-seq datasets at low coverage for AS analysis and allows more cost-effective RNA-seq experiments. Availability and implementation dSreg was implemented in python using stan and is freely available to the community at https://bitbucket.org/cmartiga/dsreg. Supplementary information Supplementary data are available at Bioinformatics online.
APA, Harvard, Vancouver, ISO, and other styles
48

Light, Ben, Peta Mitchell, and Patrik Wikström. "Big Data, Method and the Ethics of Location: A Case Study of a Hookup App for Men Who Have Sex with Men." Social Media + Society 4, no. 2 (April 2018): 205630511876829. http://dx.doi.org/10.1177/2056305118768299.

Full text
Abstract:
With the rise of geo-social media, location is emerging as a particularly sensitive data point for big data and digital media research. To explore this area, we reflect on our ethics for a study in which we analyze data generated via an app that facilitates public sex among men who have sex with men. The ethical sensitivities around location are further heightened in the context of research into such digital sexual cultures. Public sexual cultures involving men who have sex with men operate both in spaces “meant” for public sex (e.g., gay saunas and dark rooms) and spaces “not meant” for public sex (e.g., shopping centers and public toilets). The app in question facilitates this activity. We developed a web scraper that carefully collected selected data from the app and that data were then analyzed to help identify ethical issues. We used a mixture of content analysis using Python scripts, geovisualisation software and manual qualitative coding techniques. Our findings, which are methodological rather than theoretical in nature, center on the ethics associated with generating, processing, presenting, archiving and deleting big data in a context where harassment, imprisonment, physical harm and even death occur. We find a tension in normal standards of ethical conduct where humans are involved in research. We found that location came to the fore as a key—though not the only—actor requiring attention when considering ethics in a big data context.
APA, Harvard, Vancouver, ISO, and other styles
49

Thanaviratananich, Sikawat, Hao Cheng, Naricha Chirakalwasan, and Sirimon Reutrakul. "477 Association between Nocturnal Hypoxemic Burden and Glucose Metabolism." Sleep 44, Supplement_2 (May 1, 2021): A188. http://dx.doi.org/10.1093/sleep/zsab072.476.

Full text
Abstract:
Abstract Introduction: To evaluate the association between a novel integrated event-based and hypoxemia-based parameter of polysomnography (PSG), hypoxemic load or HL100, and fasting blood glucose (FBG) and hemoglobin A1c (HbA1c) levels. Methods: Adult patients who underwent an in-lab PSG at the University of Iowa Hospitals and Clinics and had FBG or HbA1c levels available were included. Event-based and hypoxemia-based parameter data were derived. HL100, defined as the integrated area of desaturation under the 100% oxygen saturation curve during the total sleep time divided by the total sleep time, was calculated with Python software version 3.8.5. Demographic data and glycemic parameters within 1 year prior to PSG (FBG and HbA1c) were retrieved from chart review. Spearman correlation analysis and stepwise backward regression analysis were performed to determine independent predictors of FBG and HbA1c levels. Results: Of the 467 patients who underwent an in-lab PSG, 385 had FBG levels and 239 had HbA1c levels. All event-based and hypoxemia-based parameters, including HL100, were significantly correlated with FBG and HbA1c levels. Stepwise backward regression analyses, adjusting for age, sex, body mass index and diabetes status, revealed that log HL100 was significantly related to FBG (B=20.8, p=0.015), and log oxygen desaturation index was found to be related to HbA1c levels (B=0.273, p=0.037). Other parameters (e.g. apnea hypopnea index, minimum oxygen saturation) were not independently associated with glycemic parameters. Conclusion: HL100 showed a significant positive correlation with FBG and HbA1c levels, and only log HL100 was an independent predictor of FBG levels. This might imply that any degree of desaturation below 100% could result in adverse glucose metabolism. HL100 might be useful for the interpretation of sleep studies, risk stratification and patient management purposes in the future. Support (if any):
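As defined in the abstract, HL100 is the area between the oximetry trace and the 100% saturation line during sleep, normalized by total sleep time. The minimal NumPy sketch below applies that definition to a synthetic SpO2 trace; epoch handling, sleep staging and artifact rejection are omitted, and the function name is invented for illustration.

```python
import numpy as np

def hypoxemic_load_100(spo2_percent, sample_interval_s):
    """HL100: area of desaturation below 100% SpO2 during sleep,
    divided by total sleep time (result in %)."""
    desaturation = 100.0 - np.asarray(spo2_percent, dtype=float)
    # Trapezoidal integration of the desaturation curve (% * seconds).
    area = np.sum((desaturation[:-1] + desaturation[1:]) / 2.0) * sample_interval_s
    total_sleep_time = sample_interval_s * (len(spo2_percent) - 1)
    return area / total_sleep_time

# Synthetic 1 Hz oximetry trace: baseline 96% with a brief dip to 88%.
spo2 = np.concatenate([np.full(300, 96.0), np.linspace(96, 88, 30),
                       np.linspace(88, 96, 30), np.full(300, 96.0)])
print(f"HL100 = {hypoxemic_load_100(spo2, 1.0):.2f}%")
```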
APA, Harvard, Vancouver, ISO, and other styles
50

Baynova, Maria S., and Andrey M. Sokolov. "Tools for automated collection and analysis of sociological information on the territorial identity of city residents." Journal Of Applied Informatics 16, no. 92 (April 30, 2021): 92–102. http://dx.doi.org/10.37791/2687-0649-2021-16-2-92-102.

Full text
Abstract:
The paper proposes an algorithm for the automated search and initial analysis of sociological information aimed at studying the territorial identity of city area residents using Internet sources. Social network communities, e.g. on VKontakte, are considered the main data source, and websites of topographic objects located in the territories under study are used as auxiliary information sources. It is demonstrated that, in terms of information support, public pages and groups with open or restricted-access walls have the greatest potential. The developed algorithm involves selecting relevant groups, finding content concerning area issues, and determining indices of community activity in discussing territorial problems. The required information is retrieved through interaction with a social network server using the official Application Programming Interface (API). To identify communities and posts, methods of morphological analysis of textual information are applied. The software implementation of the algorithm is described in Python 3.8.5, including original functions for acquiring data on communities by their identification numbers, for forming a set of urbanonyms for a specified area, and for several other tasks. The developed program has been used to analyze territorial groups in three areas of Moscow; the results of the analysis enable us to estimate the degree of territorial identity of their residents. The analysis of the error in the results of automated data collection and processing shows good agreement of these results with manually obtained ones, i.e. the error is 2.6% in the identification of relevant groups and about 3% in the identification of posts on area issues. At the same time, a much higher speed of response and the lower labor effort required to perform routine operations allow the algorithm and the implementing computer program to be viewed as an effective tool for sociological research based on data from social networks.
APA, Harvard, Vancouver, ISO, and other styles