To see the other types of publications on this topic, follow the link: Pipelined Data Converters.

Journal articles on the topic 'Pipelined Data Converters'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 journal articles for your research on the topic 'Pipelined Data Converters.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

BARRA, SAMIR, ABDELGHANI DENDOUGA, SOUHIL KOUDA, and NOUR-EDDINE BOUGUECHAL. "CONTRIBUTION TO THE ANALYSIS AND MODELING OF THE NON-IDEAL EFFECTS OF PIPELINED ADCs USING MATLAB." Journal of Circuits, Systems and Computers 22, no. 02 (February 2013): 1250085. http://dx.doi.org/10.1142/s0218126612500855.

Full text
Abstract:
The present work analyses the non-ideal effects of pipelined analog-to-digital converters (ADCs), also sometimes referred to as pipeline ADCs, including the non-ideal effects in operational amplifiers (op-amps or OAs), switches, and sampling circuits. We study these nonlinear effects in pipelined ADCs built using CMOS technology and switched-capacitor (SC) techniques. The proposed improved model of a pipelined ADC includes most of the non-idealities that affect its performance. This model, simulated using MATLAB, can determine the specifications of the basic blocks that allow the designer to meet given data converter requirements.
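To make this kind of behavioral modeling concrete, here is a minimal sketch of a 1.5-bit/stage pipelined ADC that includes two representative non-idealities, finite op-amp DC gain and capacitor mismatch. It is written in Python as an analogue of such a MATLAB model; it is not the authors' code, and all parameter values are illustrative assumptions.

```python
# Behavioral sketch of a 1.5-bit/stage pipelined ADC with two non-idealities:
# finite op-amp DC gain and capacitor mismatch. Illustrative only; all values assumed.
import numpy as np

VREF = 1.0          # reference voltage
N_STAGES = 8        # number of 1.5-bit stages
A_DC = 1000.0       # assumed op-amp DC gain
MISMATCH = 0.001    # assumed relative capacitor mismatch (sigma)

def stage(vin, rng):
    # Sub-ADC: 1.5-bit decision with comparator thresholds at +/- VREF/4
    d = int(vin > VREF / 4) - int(vin < -VREF / 4)
    # Residue amplifier: ideal gain of 2 degraded by finite DC gain (feedback factor ~1/2)
    gain = 2.0 / (1.0 + 2.0 / A_DC) * (1.0 + MISMATCH * rng.standard_normal())
    return d, gain * vin - d * VREF

def convert(vin, rng):
    code = 0
    for _ in range(N_STAGES):
        d, vin = stage(vin, rng)
        code = 2 * code + d            # shift-and-add digital correction of 1.5-bit codes
    return code

rng = np.random.default_rng(0)
inputs = 0.45 * VREF * np.sin(2 * np.pi * 0.0371 * np.arange(1024))
codes = [convert(float(v), rng) for v in inputs]
print("output code range:", min(codes), "to", max(codes))
```

Sweeping A_DC or MISMATCH in such a model is one way to translate a target resolution into block-level specifications, which is the use case the abstract describes.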
APA, Harvard, Vancouver, ISO, and other styles
2

Gao, Bo, Xin Li, Jie Sun, and Jianhui Wu. "Modeling of High-Resolution Data Converter: Two-Step Pipelined-SAR ADC based on ISDM." Electronics 9, no. 1 (January 10, 2020): 137. http://dx.doi.org/10.3390/electronics9010137.

Full text
Abstract:
High resolution and high bandwidth are in increasing demand across the wide range of application fields that rely on high-performance data converters. In this paper, a model of a high-resolution hybrid analog-to-digital converter (ADC) is proposed to meet those requirements, and a 16-bit two-step pipelined successive approximation register (SAR) ADC assisted by a first-order continuous-time incremental sigma-delta modulator (ISDM) is presented to verify the model. Combining a high-bandwidth two-step pipelined-SAR ADC with a low-noise ISDM and background comparator offset calibration achieves a higher signal-to-noise ratio (SNR) without sacrificing speed or requiring a large amount of extra hardware. Compared with a conventional pipelined-SAR ADC, the sub-ranging scheme, which consists of a coarse SAR ADC followed by a fine ISDM, not only provides better suppression of the noise added in the second stage during conversion but also relaxes the comparator resolution requirements in both stages for a given power budget. At a 1.2 V/1.8 V supply, 33.3 MS/s, and a 16 MHz input sinusoidal signal in a 40 nm complementary metal oxide semiconductor (CMOS) process, post-layout simulation results show that the proposed hybrid ADC achieves a signal-to-noise and distortion ratio (SNDR) of 86.3 dB and a spurious-free dynamic range (SFDR) of 102.5 dBc, with a total power consumption of 19.2 mW.
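For context, the reported SNDR, sampling rate, and power can be translated into an effective number of bits (ENOB) and a Walden figure of merit using the standard formulas; a short worked example using only the figures quoted in the abstract:

```python
# Worked example from the abstract's reported figures, using standard conversion formulas.
sndr_db = 86.3      # reported SNDR, dB
fs = 33.3e6         # sampling rate, samples/s
power = 19.2e-3     # reported power, W

enob = (sndr_db - 1.76) / 6.02          # ENOB, approximately 14.0 bits
fom = power / (2 ** enob * fs)          # Walden FoM, approximately 34 fJ per conversion step
print(f"ENOB = {enob:.2f} bits, FoM = {fom * 1e15:.1f} fJ/conv-step")
```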
APA, Harvard, Vancouver, ISO, and other styles
3

Rezapour, Arash, Farbod Setoudeh, and Mohammad Bagher Tavakoli. "Design an Improved Structure for 10-Bit Pipeline Analog to Digital Converter Based on 0.18µm CMOS Technology." Journal of Applied Engineering Sciences 9, no. 2 (December 1, 2019): 169–76. http://dx.doi.org/10.2478/jaes-2019-0023.

Full text
Abstract:
This paper proposes a novel structure for a 10-bit, 400 MS/s pipelined analog-to-digital converter using 0.18 µm TSMC technology. Two stages are used in the converter design, and a new method is proposed to increase the speed of the pipelined analog-to-digital converter. For this purpose, no amplifier is used in the first stage; instead, a buffer transfers the data to the second stage. In the second stage, an open-loop amplifier circuit with an accurate gain of 8 and a new structure is used to increase speed, and the design is such that the first 4 bits are extracted simultaneously with sampling. On the other hand, since in this structure the information is transferred to the second stage without amplification in the first stage, the accuracy of the comparator circuit must be high. Therefore, a new comparator circuit structure is proposed that can detect unwanted offsets and eliminate them without delay, and thus can detect the smallest differences in input voltage. The proposed analog-to-digital converter was designed with a resolution of 10 bits and a speed of 400 MS/s, with a total power consumption of 74.3 mW using a 1.8 V power supply.
APA, Harvard, Vancouver, ISO, and other styles
4

Tsyganchuk, V. V., L. S. Shlapak, O. M. Matviienkiv, and P. Ya Sydor. "Remote monitoring of gas pipelines on the basis of magnetic elastic sensors of mechanical voltage." JOURNAL OF HYDROCARBON POWER ENGINEERING 6, no. 2 (December 30, 2019): 48–55. http://dx.doi.org/10.31471/2311-1399-2019-2(12)-48-55.

Full text
Abstract:
A method for monitoring the stress state of gas pipelines with a four-pole magnetoanisotropic converter is suggested. The Arduino hardware and software platform is used to enhance the capabilities of the INI-1C basic mechanical stress measuring device. A system of remote monitoring for periodic measurement of pipeline stresses and for the accumulation and analysis of the received data is described, with the aim of providing objective information for making technological decisions.
APA, Harvard, Vancouver, ISO, and other styles
5

Ju, Haiyang, Xinhua Wang, and Yizhen Zhao. "Variational Specific Mode Extraction: A Novel Method for Defect Signal Detection of Ferromagnetic Pipeline." Algorithms 13, no. 4 (April 24, 2020): 105. http://dx.doi.org/10.3390/a13040105.

Full text
Abstract:
The non-contact detection of buried ferromagnetic pipelines is a long-standing problem in the field of external pipeline inspection, and the extraction of the magnetic anomaly signal is a prerequisite for accurate detection. Pipeline defects can cause fluctuations in the magnetic signal, which are easily submerged in wide-band background noise in the absence of external excitation sources. Previously, Variational Mode Decomposition (VMD) was used to separate the modal components; however, VMD is based on a narrow-band signal processing algorithm and its computation is complex. In this article, a pipeline defect signal method based on Variational Specific Mode Extraction (VSME) is employed to extract the signal at a specific central frequency by signal modal decomposition, where the specific mode is the weak magnetic anomaly signal of pipeline defects. VSME is based on the fact that a wide-band signal can be converted into a narrow-band signal by a demodulation method. Furthermore, the problem of wide-band signal decomposition is expressed as an optimal demodulation problem, which can be solved by the alternating direction method of multipliers. The proposed algorithm is verified on artificially synthesized signals, and its performance is better than that of VMD. The results show that the VSME method can extract the magnetic anomaly signal of pipeline damage from experimental data while achieving better accuracy.
APA, Harvard, Vancouver, ISO, and other styles
6

Shipman, R. F., S. F. Beaulieu, D. Teyssier, P. Morris, M. Rengel, C. McCoey, K. Edwards, et al. "Data processing pipeline for Herschel HIFI." Astronomy & Astrophysics 608 (December 2017): A49. http://dx.doi.org/10.1051/0004-6361/201731385.

Full text
Abstract:
Context. The HIFI instrument on the Herschel Space Observatory performed over 9100 astronomical observations, almost 900 of which were calibration observations, in the course of the nearly four-year Herschel mission. The data from each observation had to be converted from raw telemetry into calibrated products and were included in the Herschel Science Archive. Aims. The HIFI pipeline was designed to provide robust conversion from raw telemetry into calibrated data throughout all phases of the HIFI mission. Pre-launch laboratory testing was supported, as were routine mission operations. Methods. A modular software design allowed components to be easily added, removed, amended, and/or extended as the understanding of the HIFI data developed during and after mission operations. Results. The HIFI pipeline processed data from all HIFI observing modes within the Herschel automated processing environment as well as within an interactive environment. The same software can be used by the general astronomical community to reprocess any standard HIFI observation. The pipeline also recorded the consistency of processing results and provided automated quality reports. Many pipeline modules had been in use since the HIFI pre-launch instrument-level testing. Conclusions. Processing in steps facilitated data analysis to discover and address instrument artefacts and uncertainties. The availability of the same pipeline components from pre-launch throughout the mission made for well-understood, tested, and stable processing. A smooth transition from one phase to the next significantly enhanced processing reliability and robustness.
APA, Harvard, Vancouver, ISO, and other styles
7

Kostyuk, Yu, A. Tertishnik, and S. Nesterenko. "ENERGY SAVING TECHNOLOGIES IN THE IMPLEMENTATION OF CATHODIC PROTECTION OF PIPELINES AND TANKS." Municipal economy of cities 3, no. 163 (June 29, 2021): 109–16. http://dx.doi.org/10.33042/2522-1809-2021-3-163-109-116.

Full text
Abstract:
Data on the introduction of new energy-saving cathodic protection technologies are considered: the installation of magnetite ground electrodes, the use of new activators based on coke breeze, and a pulse converter for the automatic control of cathodic protection objects. Practical results show that magnetite anodes maintain a high permissible current density and are therefore suitable for widespread use in various soils and in seawater. The dissolution rate of magnetite is 0.02 kg/(A·year). Magnetite anodes are also successfully used for the repair of GAZ wells (deep earthing conductors made of metal pipes). For this type of work, a typical project has been developed that allows the operability of deep anode grounding to be restored at minimal cost and without expensive drilling operations. The use of activated coke breeze significantly reduces the transition resistance of the anode grounding. It has been shown in practice that, when a coke-mineral activator is used, the transition resistance is significantly reduced due to an increase in the electrical conductivity of the filler in the anode space, the geometric dimensions and current of the diverting object increase, and the anode-to-ground transition resistance is stabilized. LLC "Elmet" has developed the IPAU pulse converter for automatic control, designed to convert alternating current into rectified direct current with the possibility of automatic adjustment of several parameters. The converters are based on a high-frequency transistor inverter developed on the basis of the latest achievements in power electronics. The use of IPAU-type stations with telemetry reduces the labor costs of their maintenance in accordance with clause R.6.1 of DSTU B V.2.5-29:2006 "Gas supply system. Underground steel gas pipelines" and clause 8.9 of DSTU 4219:2003 "Steel main pipelines", which will make it possible to deploy the freed-up personnel in other areas.
APA, Harvard, Vancouver, ISO, and other styles
8

Suenaga, Tsuyoshi, Kentaro Takemura, Jun Takamatsu, and Tsukasa Ogasawara. "Data Communication Support for Reusability of RT-Components – Converter Classification and Prototype Supporting Tool –." Journal of Robotics and Mechatronics 24, no. 1 (February 20, 2012): 64–70. http://dx.doi.org/10.20965/jrm.2012.p0064.

Full text
Abstract:
In RT-Middleware, a data-centric communication pipeline between RT-Components, called the Data Port, is designed to improve software reusability. OMG standardization efforts such as the Robotic Localization Service are also being promoted. However, the actual I/O specification differs from developer to developer. In this paper, we first enumerate all connection patterns of possible communication obtained by adjusting protocols. We then propose a supporting tool for flexible data communication. The proposed tool generates source code based on information about the desired conversion. We implement a prototype of the automatic source code generator and evaluate it.
APA, Harvard, Vancouver, ISO, and other styles
9

Kroupin, Pavel, Victoria Kuznetsova, Dmitry Romanov, Alina Kocheshkova, Gennady Karlov, Thi Xuan Dang, Thi Mai L. Khuat, et al. "Pipeline for the Rapid Development of Cytogenetic Markers Using Genomic Data of Related Species." Genes 10, no. 2 (February 1, 2019): 113. http://dx.doi.org/10.3390/genes10020113.

Full text
Abstract:
Repetitive DNA, including tandem repeats (TRs), is a significant part of most eukaryotic genomes. TRs include rapidly evolving satellite DNA (satDNA) that can be shared by closely related species; their abundance may be associated with evolutionary divergence, and they have been widely used for chromosome karyotyping using fluorescence in situ hybridization (FISH). Recent progress in whole-genome sequencing and bioinformatics tools enables rapid and cost-effective searches for TRs, including satDNA, that can be converted into molecular cytogenetic markers. In the case of closely related taxa, the genome sequence of one species (donor) can be used as a base for the development of chromosome markers for related species or genomes (target). Here, we present a pipeline for rapid and high-throughput screening for new satDNA TRs in whole-genome sequencing data of the donor genome and the development of chromosome markers based on them that can be applied in the target genome. One of the main peculiarities of the developed pipeline is the preliminary estimation of TR abundance using qPCR and the ranking of the TRs found according to their copy number in the target genome, which facilitates the selection of the most promising (most abundant) TRs that can be converted into cytogenetic markers. Another feature of our pipeline is the preparation of FISH probes by PCR, with primers designed on the aligned TR unit sequences and the genomic DNA of a target species as a template, which enables amplification of the whole pool of monomers inherent in the chromosomes of the target species. We demonstrate the efficiency of the developed pipeline with the example of FISH probes developed for the A, B, and R subgenome chromosomes of hexaploid triticale (BBAARR) based on a bioinformatics analysis of the whole-genome sequence of the Aegilops tauschii D genome (DD). Our pipeline can be used to develop chromosome markers in closely related species for comparative cytogenetics in evolutionary and breeding studies.
APA, Harvard, Vancouver, ISO, and other styles
10

Akrami, Y., F. Argüeso, M. Ashdown, J. Aumont, C. Baccigalupi, M. Ballardini, A. J. Banday, et al. "Planck 2018 results." Astronomy & Astrophysics 641 (September 2020): A2. http://dx.doi.org/10.1051/0004-6361/201833293.

Full text
Abstract:
We present a final description of the data-processing pipeline for the Planck Low Frequency Instrument (LFI), implemented for the 2018 data release. Several improvements have been made with respect to the previous release, especially in the calibration process and in the correction of instrumental features such as the effects of nonlinearity in the response of the analogue-to-digital converters. We provide a brief pedagogical introduction to the complete pipeline, as well as a detailed description of the important changes implemented. Self-consistency of the pipeline is demonstrated using dedicated simulations and null tests. We present the final version of the LFI full-sky maps at 30, 44, and 70 GHz, both in temperature and polarization, together with a refined estimate of the solar dipole and a final assessment of the main LFI instrumental parameters.
APA, Harvard, Vancouver, ISO, and other styles
11

Shin, Dong Mun, Mi Yeong Hwang, Bong-Jo Kim, Keun Ho Ryu, and Young Jin Kim. "GEN2VCF: a converter for human genome imputation output format to VCF format." Genes & Genomics 42, no. 10 (August 16, 2020): 1163–68. http://dx.doi.org/10.1007/s13258-020-00982-0.

Full text
Abstract:
Background: For a genome-wide association study in humans, genotype imputation is an essential analysis tool for improving association mapping power. When IMPUTE software is used for imputation analysis, the imputation output (GEN format) should be converted to variant call format (VCF) with imputed genotype dosage for association analysis. However, the conversion requires multiple software packages in a pipeline with a large amount of processing time. Objective: We developed GEN2VCF, a fast and convenient GEN-to-VCF conversion tool with dosage support. Methods: The performance of GEN2VCF was compared to BCFtools, QCTOOL, and Oncofunco. The test data set was a 1 Mb GEN-formatted file of 5000 samples. To determine the performance at various sample sizes, tests were performed from 1000 to 5000 samples with a step size of 1000. Runtime and memory usage were used as performance measures. Results: GEN2VCF showed drastically better performance with respect to runtime and memory usage; its runtime and memory usage were at least 1.4- and 7.4-fold lower, respectively, compared to the other methods. Conclusions: GEN2VCF provides users with efficient conversion from GEN format to VCF with the best-guessed genotype, genotype posterior probabilities, and genotype dosage, as well as great flexibility in implementation with other software packages in a pipeline.
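To illustrate the kind of conversion such a tool performs, here is a minimal sketch (not the GEN2VCF implementation) that turns one IMPUTE-style GEN line into a VCF data line with best-guess genotype (GT), genotype probabilities (GP), and dosage (DS); the example input values are hypothetical.

```python
# Minimal sketch: one IMPUTE-style GEN line -> one VCF data line with GT, GP, and DS.
# Not the GEN2VCF implementation; field layout follows the standard GEN convention.
def gen_line_to_vcf(line: str, chrom: str) -> str:
    fields = line.split()
    snp_id, rsid, pos, a0, a1 = fields[:5]          # snp_id kept for clarity, unused below
    probs = list(map(float, fields[5:]))
    samples = []
    for i in range(0, len(probs), 3):
        p_aa, p_ab, p_bb = probs[i:i + 3]
        dosage = p_ab + 2.0 * p_bb                  # expected count of the B allele
        gt = ("0/0", "0/1", "1/1")[max(range(3), key=(p_aa, p_ab, p_bb).__getitem__)]
        samples.append(f"{gt}:{p_aa:.3f},{p_ab:.3f},{p_bb:.3f}:{dosage:.3f}")
    return "\t".join([chrom, pos, rsid, a0, a1, ".", "PASS", ".", "GT:GP:DS"] + samples)

print(gen_line_to_vcf("snp1 rs123 10583 A G 0.1 0.7 0.2", chrom="1"))
```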
APA, Harvard, Vancouver, ISO, and other styles
12

Nieto-Londoño, César, Carlos Andrés Bustamante-Chaverra, Jhon Anderson Buendía-García, Luz Angela Novoa, Joao Alexander García-Lázaro, and Geoffrey Viviescas-Ibarra. "Thermohydraulic modeling in transient state for evaluation of pipeline shutdown and restart procedures." CT&F - Ciencia, Tecnología y Futuro 9, no. 2 (November 11, 2019): 53–60. http://dx.doi.org/10.29047/01225383.179.

Full text
Abstract:
In order to study shutdown and re-start in heavy crude oil pipelines, a model was developed. It simulates, in a transient state, the behavior of pressure, flow and temperature variables, averaged over the cross-sectional area and as a function of time and the axial coordinate. The model was validated with actual operational data from a test case. Results obtained for different operating points, stopping time, crude properties, topographies and lengths are presented. Additionally, the governing equations are converted to dimensionless expressions in order to obtain the dimensionless numbers relevant to the re-start operation for crude oil pipelines.
APA, Harvard, Vancouver, ISO, and other styles
13

Bærentzen, Andreas, and Eva Rotenberg. "Skeletonization via Local Separators." ACM Transactions on Graphics 40, no. 5 (October 31, 2021): 1–18. http://dx.doi.org/10.1145/3459233.

Full text
Abstract:
We propose a new algorithm for curve skeleton computation that differs from previous algorithms by being based on the notion of local separators. The main benefits of this approach are that it is able to capture relatively fine details and that it works robustly on a range of shape representations. Specifically, our method works on shape representations that can be construed as spatially embedded graphs. Such representations include meshes, volumetric shapes, and graphs computed from point clouds. We describe a simple pipeline where geometric data are initially converted to a graph, optionally simplified, local separators are computed and selected, and finally a skeleton is constructed. We test our pipeline on polygonal meshes, volumetric shapes, and point clouds. Finally, we compare our results to other methods for skeletonization according to performance and quality.
APA, Harvard, Vancouver, ISO, and other styles
14

Wardhani, Veronica Indriati Sri, Henky Poedjo Rahardjo, and Rasito Tursinah. "ROUTING DESIGN ON THE PRIMARY COOLING PIPING SYSTEM IN PLATE-TYPE CONVERTED TRIGA 2000 REACTOR BANDUNG." JURNAL TEKNOLOGI REAKTOR NUKLIR TRI DASA MEGA 21, no. 3 (November 5, 2019): 107. http://dx.doi.org/10.17146/tdm.2019.21.3.5603.

Full text
Abstract:
In 2015, research activities to modify the TRIGA 2000 Reactor Bandung fuel elements from cylindrical to plate-type were initiated. With plate-type fuel elements, the core cooling process will be altered due to a different generated heat distribution. The direction of cooling flow is changed from bottom-to-top natural convection to top-to-bottom forced convection. This change of flow direction requires adjustment of the cooling piping system in order to produce a simple, economical, and safe piping route. This paper discusses the design of a suitable piping routing based on pipe stress and N-16 radioactivity. The design process was carried out in several stages, starting from the thermal-hydraulic data of the reactor core to determine the process variables, followed by modeling various pipeline routes. Based on the available space and ease of manufacture, four possible alternative routings were determined. The four routings were produced and analyzed to minimize the amount of N-16 radioactivity on the surface of the reactor tank by prolonging the cooling fluid travel time to at least five times the N-16 half-life. Subsequent pipe stress analysis using the CAESAR II software was conducted to ensure that the piping system will be able to withstand various loads such as the working fluid load and pipe weight, along with the working temperature and pressure. The results showed that the occurring stresses were still below the safety limit required by the ASME B31.1 Code, indicating that the designed and selected pipeline routing of the primary cooling system in the plate-type converted TRIGA 2000 Reactor Bandung meets the safety standards. Keywords: TRIGA reactor, Cooling system modification, Pipeline routing design, Pipe stress analysis, N-16 radioactivity
APA, Harvard, Vancouver, ISO, and other styles
15

Atwood, Robert C., Andrew J. Bodey, Stephen W. T. Price, Mark Basham, and Michael Drakopoulos. "A high-throughput system for high-quality tomographic reconstruction of large datasets at Diamond Light Source." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 373, no. 2043 (June 13, 2015): 20140398. http://dx.doi.org/10.1098/rsta.2014.0398.

Full text
Abstract:
Tomographic datasets collected at synchrotrons are becoming very large and complex, and, therefore, need to be managed efficiently. Raw images may have high pixel counts, and each pixel can be multidimensional and associated with additional data such as those derived from spectroscopy. In time-resolved studies, hundreds of tomographic datasets can be collected in sequence, yielding terabytes of data. Users of tomographic beamlines are drawn from various scientific disciplines, and many are keen to use tomographic reconstruction software that does not require a deep understanding of reconstruction principles. We have developed Savu, a reconstruction pipeline that enables users to rapidly reconstruct data to consistently create high-quality results. Savu is designed to work in an ‘orthogonal’ fashion, meaning that data can be converted between projection and sinogram space throughout the processing workflow as required. The Savu pipeline is modular and allows processing strategies to be optimized for users' purposes. In addition to the reconstruction algorithms themselves, it can include modules for identification of experimental problems, artefact correction, general image processing and data quality assessment. Savu is open source, open licensed and ‘facility-independent’: it can run on standard cluster infrastructure at any institution.
APA, Harvard, Vancouver, ISO, and other styles
16

Bai, Hai Cheng, Hong Ji Meng, and Zhi Xie. "Development of an Embedded High-Temperature Field Measuring Instrument." Advanced Materials Research 508 (April 2012): 151–54. http://dx.doi.org/10.4028/www.scientific.net/amr.508.151.

Full text
Abstract:
This paper describes the development of an embedded high-temperature measuring instrument, which is composed of a lens, a photoelectric converter based on an area-array CCD, and a data acquisition, processing, and Ethernet communication module based on a DSP. The device effectively combines imaging spectroscopy, CCD imaging technology, digital image processing methods, and Ethernet communication. The advantages of this approach are: first, the networked measuring platform provides the possibility of process parameter optimization; second, direct digital signal communication improves the anti-interference ability; third, employing a 4-stage pipelined data processing mechanism greatly improves real-time performance; fourth, through the constitution of an application-layer protocol, the reliability of high-speed data transmission via Ethernet is guaranteed. An experiment with a blackbody furnace in the laboratory shows that the maximum absolute error is 3.2 ºC and the maximum relative error is 0.40%.
APA, Harvard, Vancouver, ISO, and other styles
17

Zong, Nansu, Andrew Wen, Daniel J. Stone, Deepak K. Sharma, Chen Wang, Yue Yu, Hongfang Liu, Qian Shi, and Guoqian Jiang. "Developing an FHIR-Based Computational Pipeline for Automatic Population of Case Report Forms for Colorectal Cancer Clinical Trials Using Electronic Health Records." JCO Clinical Cancer Informatics, no. 4 (September 2020): 201–9. http://dx.doi.org/10.1200/cci.19.00116.

Full text
Abstract:
PURPOSE: The Fast Healthcare Interoperability Resources (FHIR) standard is emerging as a next-generation standards framework developed by HL7 for exchanging electronic health care data. The modeling capability of FHIR for standardizing cancer data has been gaining increasing attention in the cancer research informatics community. However, few studies have examined the capability of FHIR in electronic data capture (EDC) applications for effective cancer clinical trials. The objective of this study was to design, develop, and evaluate an FHIR-based method that enables automated population of case report forms (CRFs) for cancer clinical trials using real-world electronic health records (EHRs). MATERIALS AND METHODS: We developed an FHIR-based computational pipeline for EDC with a case study on modeling colorectal cancer trials. We first leveraged an existing FHIR-based cancer profile to represent the EHR data of patients with colorectal cancer, and we then used the FHIR Questionnaire and QuestionnaireResponse resources to represent the CRFs and their data population. To test the accuracy and overall quality of the computational pipeline, we used synoptic reports of 287 Mayo Clinic patients with colorectal cancer from 2013 to 2019 with standard measures of precision, recall, and F1 score. RESULTS: Using the computational pipeline, a total of 1,037 synoptic reports were successfully converted into instances of the FHIR-based cancer profile. The average accuracy for converting all data elements (excluding tumor perforation) of the cancer profile was 0.99, using 200 randomly selected records. The average F1 score for populating nine questions of the CRFs in a real-world colorectal cancer trial was 0.95, using 100 randomly selected records. CONCLUSION: We demonstrated that it is feasible to populate CRFs with EHR data in an automated manner with satisfactory performance. The outcome of the study provides helpful insight into future directions for implementing FHIR-based EDC applications for modern cancer clinical trials.
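For readers unfamiliar with the FHIR resources mentioned above, a minimal QuestionnaireResponse payload might look like the following sketch. Resource and field names follow the public FHIR R4 specification; the questionnaire URL, patient reference, linkId, and answer value are hypothetical placeholders, not values from the study.

```python
# Minimal sketch of an FHIR R4 QuestionnaireResponse for a single CRF item, as a plain dict.
# Field names follow the public FHIR specification; concrete values are hypothetical.
import json

questionnaire_response = {
    "resourceType": "QuestionnaireResponse",
    "status": "completed",
    "questionnaire": "http://example.org/fhir/Questionnaire/colorectal-crf",  # hypothetical
    "subject": {"reference": "Patient/example"},                              # hypothetical
    "item": [
        {
            "linkId": "tumor-site",                    # hypothetical CRF question id
            "text": "Primary tumor site",
            "answer": [{"valueString": "Ascending colon"}],
        }
    ],
}

print(json.dumps(questionnaire_response, indent=2))
```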
APA, Harvard, Vancouver, ISO, and other styles
18

Fiorella, David, Daniel Hsu, Henry H. Woo, Robert W. Tarr, and Peter Kim Nelson. "Very Late Thrombosis of a Pipeline Embolization Device Construct." Operative Neurosurgery 67, no. 3 (September 1, 2010): onsE313—onsE314. http://dx.doi.org/10.1227/01.neu.0000383875.08681.23.

Full text
Abstract:
BACKGROUND: The Pipeline embolization device (PED) is a new endoluminal construct designed to exclude aneurysms from the parent cerebrovasculature. We report the very late (>1 year) thrombosis of a PED construct placed for the treatment of a left vertebral aneurysm. CLINICAL PRESENTATION: A patient with an occluded right vertebral artery and a large, fusiform intracranial left vertebral artery aneurysm was treated with PED and coil reconstruction. A durable, complete occlusion of the aneurysm was confirmed with control angiography at 1 year. The patient remained neurologically normal for 23 months until he experienced a transient visual disturbance followed weeks later by a minor brainstem stroke. INTERVENTION: Imaging evaluation showed thrombosis of the PED construct with complete occlusion of the left vertebral artery. After this stroke, he was initially treated with dual antiplatelet therapy and was then converted to warfarin. The patient remained neurologically stable for 5 months until he experienced progressive basilar thrombosis that ultimately resulted in a fatal stroke. CONCLUSION: The PED represents a promising new endovascular technology for the treatment of cerebral aneurysms; however, as an investigational device, long-term follow-up data are sparse at this point. The etiology of the very late thrombosis of the PED construct in this case remains unknown; however, this report underscores the need for continued, careful systematic evaluation and close long-term follow-up of treated patients.
APA, Harvard, Vancouver, ISO, and other styles
19

Audu, Henry A. P., and U. Ukeme. "Geo-Spatial Information for the Location and Maintenance Management of Water Service Pipelines." Advanced Materials Research 824 (September 2013): 635–42. http://dx.doi.org/10.4028/www.scientific.net/amr.824.635.

Full text
Abstract:
This study examined the geo-spatial information needed for locating and maintaining water service pipelines. Geo-spatial information about the components of water service pipelines is a sine qua non for locating them above, on, or under the surface of the earth, as well as for effective preventive maintenance of the entire water service pipeline network. The Faculty of Engineering water service network at the University of Benin, Benin City, was used as a case study. The geo-spatial data of the components of the water service pipelines were acquired using Global Positioning System (GPS) receivers, while the attribute data were acquired using a standard checklist. Thereafter, the geo-spatial (GPS) data were processed, and all GPS data acquired in geographic coordinates were converted to plane rectangular coordinates, based on the Minna Datum of Nigeria, using the INCA GeoMATRIX software. The rectangular coordinates were exported into the Autodesk Land Development software, and a vector model of the water service infrastructure was produced in digital format. A database was designed and created for the geo-spatial and attribute information of the various components of the water service infrastructure (WSI). The study revealed the locations of burst pipes and malfunctioning control valves, where water pumped from the ground-level storage reservoir oozed out and caused artificial water scarcity in the Faculty of Engineering. The burst pipes were replaced with unplasticised polyvinyl chloride (uPVC) pipes, since these are nowadays most preferred for water supply piping, and the control valves were also replaced with new ones. The GPS coordinates obtained in this study were used to determine the locations of the various components of the WSI in the study area. The geo-spatial and attribute information on the hydraulic characteristics of the WSI has facilitated the planning, updating, operation, and maintenance management of the water service scheme in the study area. Furthermore, the study has provided a dynamic water service scheme of the Faculty of Engineering in digital format, which supersedes the existing static water service scheme, which is in analogue format and difficult to update and maintain. The dynamic water service scheme, as well as the geo-spatial information of the WSI, can be updated and maintained regularly.
APA, Harvard, Vancouver, ISO, and other styles
20

Liu, Yang, Yong Tie, Shun Na, and Dong Li. "Acoustic Signal Acquisition and Analysis System Based on Digital Signal Processor." Advanced Materials Research 518-523 (May 2012): 1436–41. http://dx.doi.org/10.4028/www.scientific.net/amr.518-523.1436.

Full text
Abstract:
Leak detection and calibration of pipe internal roughness in a water distribution network are significant issues related to environmental pollution around the world. In recent years, the problem of leak detection in pipelines, tanks, and process vessels has been the focus of many man-hours of effort. Acquiring the acoustic signal around the leakage to determine the location and size of leaks is emerging as an important tool. A dual acoustic data acquisition and signal processing system based on the TI digital signal processor TMS320VC5410 and the analog-to-digital converter TLC320AD50C is presented in this paper. The system design is introduced, with emphasis on the digital signal processor minimal system and the TMS320VC5410 interface circuit, which consists of two TLC320AD50C A/D chips. The software development for data acquisition and signal analysis is also introduced. The system can be used in applications requiring real-time acoustic signal acquisition and leak location.
APA, Harvard, Vancouver, ISO, and other styles
21

Kusuma, Hollanda Arief, Rady Purbakawaca, Irwan Rudy Pamungkas, Luthfy Nizarul Fikry, and Sonny Seftian Maulizar. "Design and Implementation of IoT-Based Water Pipe Pressure Monitoring Instrument." Jurnal Elektronika dan Telekomunikasi 21, no. 1 (August 31, 2021): 41. http://dx.doi.org/10.14203/jet.v21.41-44.

Full text
Abstract:
A water pressure monitoring system for PDAM pipeline networks has been successfully developed to support real-time operation and maintenance against water leaks. This research aims to design a water pressure monitoring system for operational piping networks that identifies anomalies as early as possible. The system is built using a microcontroller, a 1.2 MPa fluid pressure sensor, and a control system equipped with a GSM wireless communication module, an analog-to-digital converter module with 16-bit resolution, a real-time clock peripheral, a 128x64 OLED display, and a micro SD card. The developed system was tested over a pressure range of 0.200-0.800 bar with 30 repetitions, yielding an RMSE of 0.058 bar. The system has a coefficient of determination of 0.885 against a standard manometer. The system implemented in the field successfully sends data to the server with a success rate of 96.0%. Data are displayed on a monitoring dashboard that can be accessed via a computer or smartphone.
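As an illustration of the sensing chain described above, the following sketch (not the authors' firmware) converts 16-bit ADC codes from a ratiometric pressure transducer into bar and computes an RMSE against reference readings; the 0.5-4.5 V transfer function, 5 V full scale, and the sample values are assumptions.

```python
# Illustrative sketch only: 16-bit ADC code -> pressure in bar, plus RMSE vs. a reference.
# The transducer transfer function, ADC full scale, and sample values are assumptions.
import math

V_FS, ADC_MAX = 5.0, 65535      # assumed ADC full-scale voltage and 16-bit code range
P_FS_BAR = 12.0                 # 1.2 MPa sensor span expressed in bar

def code_to_bar(code: int) -> float:
    volts = code * V_FS / ADC_MAX
    return (volts - 0.5) / 4.0 * P_FS_BAR      # assumed 0.5 V = 0 bar, 4.5 V = full scale

raw_codes = [7450, 8700, 10100]                # hypothetical raw ADC readings
reference = [0.200, 0.500, 0.800]              # reference manometer values, bar
errors = [code_to_bar(c) - r for c, r in zip(raw_codes, reference)]
rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
print(f"RMSE = {rmse:.3f} bar")
```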
APA, Harvard, Vancouver, ISO, and other styles
22

Nenna, Vanessa, and Adam Pidlisecky. "The use of wavelet transforms for improved interpretation of airborne transient electromagnetic data." GEOPHYSICS 78, no. 3 (May 1, 2013): E117—E123. http://dx.doi.org/10.1190/geo2012-0363.1.

Full text
Abstract:
The continuous wavelet transform (CWT) is used to create maps of dominant spatial scales in airborne transient electromagnetic (ATEM) data sets to identify cultural noise and topographic features. The introduced approach is applied directly to ATEM data and does not require that the measurements be inverted, though it can easily be applied to an inverted model. For this survey, we apply the CWT spatially to B-field and dB/dt ATEM data collected in the Edmonton-Calgary Corridor of southern Alberta. The average wavelet power is binned over four ranges of spatial scale and converted to 2D maps of normalized power within each bin. The analysis of the approximately 2 million soundings that make up the survey can be run in minutes on a 2.4 GHz Intel processor. We perform the same CWT analysis on maps of surface and bedrock topography and also compare the ATEM results to maps of infrastructure in the region. We find that linear features identified on power maps that differ significantly between B-field and dB/dt data are well correlated with a high density of infrastructure. Features that are well correlated with topography tend to be consistent in the power maps for both types of data. For this data set, use of the CWT reveals that topographic features and cultural noise from high-pressure oil and gas pipelines affect a significant portion of the survey region. The identification of cultural noise and surface features in the raw ATEM data through CWT analysis provides a means of focusing and speeding up processing prior to inversion, though the magnitude of this effect on ATEM signals is not assessed.
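A minimal sketch of the spatial CWT-and-binning step, in the spirit of the analysis described above, applied to a single synthetic flight-line profile with PyWavelets; the Morlet wavelet, the four scale bins, and the synthetic data are assumptions rather than the survey parameters.

```python
# Spatial CWT of one synthetic flight-line profile, with wavelet power binned over scales.
# Illustrative only; wavelet choice, scale bins, and data are assumptions.
import numpy as np
import pywt

x = np.arange(2000)                                                   # sounding index
profile = np.sin(2 * np.pi * x / 400) + 0.2 * np.random.default_rng(0).standard_normal(x.size)

scales = np.arange(1, 129)
coeffs, _ = pywt.cwt(profile, scales, "morl")      # shape: (n_scales, n_soundings)
power = np.abs(coeffs) ** 2

# Average wavelet power within four scale bins; each band could be normalized and mapped.
for lo, hi in [(1, 8), (8, 32), (32, 64), (64, 128)]:
    band = power[(scales >= lo) & (scales < hi)].mean(axis=0)
    print(f"scales {lo}-{hi}: mean power {band.mean():.3f}")
```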
APA, Harvard, Vancouver, ISO, and other styles
23

Voltz, Thomas John, and Thomas Grischek. "Microturbines at Drinking Water Tanks Fed by Gravity Pipelines: A Method and Excel Tool for Maximizing Annual Energy Generation Based on Historical Tank Outflow Data." Water 11, no. 7 (July 9, 2019): 1403. http://dx.doi.org/10.3390/w11071403.

Full text
Abstract:
Wherever the flow of water in a gravity pipeline is regulated by a pressure control valve, hydraulic energy in the form of water pressure can instead be converted into useful mechanical and electrical energy via a turbine. Two classes of potential turbine sites exist—those with (class 1, “buffered”) and those without (class 2, “non-buffered”) a storage tank that decouples inflow from outflow, allowing the inflow regime to be modified to better suit turbine operation. A new method and Excel tool (freely downloadable, at no cost) were developed for determining the optimal hydraulic parameters of a turbine at class 1 sites that maximize annual energy generation. The method assumes a single microturbine with a narrow operating range and determines the optimal design flow rate based on the characteristic site curve and a historical time series of outflow data from the tank, simulating tank operation with a numerical model as it creates a new inflow regime. While no direct alternative methods could be found in the scientific literature or on the internet, three hypothetically applicable methods were gleaned from the German guidelines (published by the German Technical and Scientific Association for Gas and Water (DVGW)) and used as a basis of comparison. The tool and alternative methods were tested for nine sites in Germany.
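The core idea, selecting the constant design flow that maximizes annual energy given a historical tank outflow series, can be sketched as follows. This is a heavily simplified illustration, not the published Excel tool: the constant net head, efficiency, tank volume, on/off operating rule, and synthetic demand series are all assumptions.

```python
# Heavily simplified sketch of choosing a microturbine design flow from historical tank
# outflows. Not the published method or tool; all parameters below are assumptions.
RHO_G = 1000 * 9.81          # water density times gravity, N/m^3
HEAD_M = 60.0                # assumed constant net head, m
ETA = 0.65                   # assumed turbine/generator efficiency
V_TANK = 500.0               # assumed tank volume, m^3
DT_H = 1.0                   # time step, h

def annual_energy_kwh(q_design: float, outflows: list[float], v0: float = 250.0) -> float:
    volume, energy_j = v0, 0.0
    for q_out in outflows:                                   # outflows (demand) in m^3/h
        run = volume + (q_design - q_out) * DT_H <= V_TANK   # run only if the tank won't overflow
        q_in = q_design if run else 0.0
        volume = min(max(volume + (q_in - q_out) * DT_H, 0.0), V_TANK)
        if run:
            energy_j += ETA * RHO_G * HEAD_M * (q_in / 3600.0) * DT_H * 3600.0
    return energy_j / 3.6e6                                  # J -> kWh

outflow_series = [20.0 + 10.0 * (i % 24 > 6) for i in range(8760)]   # synthetic hourly demand
best = max((annual_energy_kwh(q, outflow_series), q) for q in range(5, 60, 5))
print(f"best design flow ~ {best[1]} m^3/h, energy ~ {best[0]:.0f} kWh/a")
```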
APA, Harvard, Vancouver, ISO, and other styles
24

Saltanaeva, Elena Andreevna, and Andrey Vladimirovich Maister. "Optimization of calculations of the effects of spill fires during accidents on linear equipment." E3S Web of Conferences 140 (2019): 07002. http://dx.doi.org/10.1051/e3sconf/201914007002.

Full text
Abstract:
The issue of industrial safety (in particular, fire safety) at hazardous production facilities is considered. A previously obtained optimization method is given for calculating the assessment of the influence of hazardous factors on extended (linear) equipment in the event of possible accidents involving explosions of fuel-air mixtures. The optimization is based on accelerating the calculation of the potential damage probability Qn(M0) using transformed formulas containing a single instead of a double integral. The transformed formulas for calculating the double integral, obtained previously, were used to optimize the calculations for the case of a spill fire, based on the recommended probit function Pr used to estimate the damage to people from thermal radiation. As an example, a rectilinear fragment of a pipeline on a plane, represented as a segment of a straight line, is considered. To assess effectiveness, 1000 sets of source data characterizing various emergency situations were randomly generated. Based on the calculations for these data sets, statistical results are presented that characterize the effectiveness of the proposed optimization method: a graph of the values of the reduction factor in calculation time obtained with the converted formulas; the average value and standard deviation of this reduction factor; and the maximum deviation between the values calculated by the original and converted formulas.
APA, Harvard, Vancouver, ISO, and other styles
25

Sullivan, Timothy. "2063. Using Twitter Data and Machine Learning to Identify Outpatient Antibiotic Misuse: A Proof-of-Concept Study." Open Forum Infectious Diseases 6, Supplement_2 (October 2019): S695. http://dx.doi.org/10.1093/ofid/ofz360.1743.

Full text
Abstract:
Background: Outpatient antibiotic misuse is common, yet it is difficult to identify and prevent. Novel methods are needed to better identify unnecessary antibiotic use in the outpatient setting. Methods: The Twitter developer platform was accessed to identify Tweets describing outpatient antibiotic use in the United States between November 2018 and March 2019. Unique English-language Tweets reporting recent antibiotic use were aggregated, reviewed, and labeled as describing possible misuse or not describing misuse. Possible misuse was defined as antibiotic use for a diagnosis or symptoms for which antibiotics are not indicated based on national guidelines, or the use of antibiotics without evaluation by a healthcare provider (Figure 1). Tweets were randomly divided into training and testing sets consisting of 80% and 20% of the data, respectively. Training set Tweets were preprocessed via a natural language processing pipeline, converted into numerical vectors, and used to generate a logistic regression algorithm to predict misuse in the testing set. Analyses were performed in Python using the scikit-learn and nltk libraries. Results: 4000 Tweets were included, of which 1028 were labeled as describing possible outpatient antibiotic misuse. The algorithm correctly identified Tweets describing possible antibiotic misuse in the testing set with specificity = 94%, sensitivity = 55%, PPV = 75%, NPV = 87%, and area under the ROC curve = 0.91 (Figure 2). Conclusion: A machine learning algorithm using Twitter data identified episodes of self-reported antibiotic misuse with good test performance, as defined by the area under the ROC curve. Analysis of Twitter data captured some episodes of antibiotic misuse, such as the use of non-prescribed antibiotics, that are not easily identified by other methods. This approach could be used to generate novel insights into the causes and extent of antibiotic misuse in the United States, and to monitor antibiotic misuse in real time. Disclosures: All authors: No reported disclosures.
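A minimal sketch of this kind of text-classification pipeline, using the scikit-learn components named in the abstract (vectorized text features, logistic regression, an 80/20 split); the preprocessing choices and the toy tweets and labels below are assumptions, not the authors' data or exact implementation.

```python
# Toy sketch of a tweet-misuse classifier: TF-IDF features + logistic regression, 80/20 split.
# Data and feature choices are illustrative assumptions, not the study's implementation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

tweets = ["took leftover antibiotics for my cold",
          "finished my prescribed amoxicillin course",
          "taking a friend's antibiotics for this cough",
          "doctor prescribed antibiotics for my UTI"]
labels = [1, 0, 1, 0]   # 1 = possible misuse, 0 = no misuse (toy labels)

X_train, X_test, y_train, y_test = train_test_split(tweets, labels, test_size=0.2, random_state=0)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print("predicted misuse probability:", model.predict_proba(X_test)[:, 1])
```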
APA, Harvard, Vancouver, ISO, and other styles
26

Dietrich, Georg, Jonathan Krebs, Georg Fette, Maximilian Ertl, Mathias Kaspar, Stefan Störk, and Frank Puppe. "Ad Hoc Information Extraction for Clinical Data Warehouses." Methods of Information in Medicine 57, S 01 (May 2018): e22-e29. http://dx.doi.org/10.3414/me17-02-0010.

Full text
Abstract:
Background: Clinical Data Warehouses (CDW) reuse electronic health records (EHR) to make their data retrievable for research purposes or patient recruitment for clinical trials. However, much information is hidden in unstructured data such as discharge letters. These can be preprocessed and converted to structured data via information extraction (IE), which is unfortunately a laborious task and therefore usually not available for most of the text data in a CDW. Objectives: The goal of our work is to provide an ad hoc IE service that allows users to query text data ad hoc in a manner similar to querying structured data in a CDW. While search engines just return text snippets, our system also returns frequencies (e.g., how many patients exist with "heart failure", including textual synonyms, or how many patients have an LVEF < 45) based on the content of discharge letters or textual reports for special investigations such as heart echo. Three subtasks are addressed: (1) recognizing and excluding negations and their scopes, (2) extracting concepts, i.e., Boolean values, and (3) extracting numerical values. Methods: We implemented an extended version of the NegEx algorithm for German texts that detects negations and determines their scope. Furthermore, our document-oriented CDW PaDaWaN was extended with query functions, e.g., context-sensitive queries and regex queries, and an extraction mode for computing the frequencies of Boolean and numerical values. Results: Evaluations in chest X-ray reports and in discharge letters showed high F1 scores for the three subtasks: detection of negated concepts in chest X-ray reports with an F1 score of 0.99 and in discharge letters with 0.97; of Boolean values in chest X-ray reports about 0.99; and of numerical values in chest X-ray reports and discharge letters also around 0.99, with the exception of the concept age. Discussion: The advantages of ad hoc IE over standard IE are the low development effort (just entering the concept with its variants), the promptness of the results, and the adaptability by the user to his or her particular question. Disadvantages are usually lower accuracy and confidence. This ad hoc information extraction approach is novel and exceeds existing systems: Roogle [1] extracts predefined concepts from texts at preprocessing time and makes them retrievable at runtime. Dr. Warehouse [2] applies negation detection and indexes the produced subtexts which include affirmed findings. Our approach combines negation detection and the extraction of concepts, but the extraction does not take place during preprocessing; it happens at runtime. This provides ad hoc, dynamic, interactive, and adjustable information extraction of arbitrary concepts, and even their values, on the fly at runtime. Conclusions: We developed an ad hoc information extraction query feature for Boolean and numerical values within a CDW with high recall and precision, based on a pipeline that detects and removes negations and their scope in clinical texts.
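For intuition, the NegEx idea of trigger terms opening a limited negation scope can be sketched as follows; this toy example is not the authors' extended German implementation, and the trigger list, scope window, and sample sentence are assumptions.

```python
# Toy sketch of NegEx-style negation detection: trigger terms open a scope window in which
# concept mentions count as negated; punctuation closes the scope. Illustrative only.
import re

TRIGGERS = {"kein", "keine", "ohne", "nicht"}        # assumed German negation triggers
SCOPE = 5                                            # assumed scope window, in tokens

def negated_concepts(text: str, concepts: list[str]) -> dict[str, bool]:
    tokens = re.findall(r"\w+|[,.;]", text.lower())
    negated_positions = set()
    for i, tok in enumerate(tokens):
        if tok in TRIGGERS:
            for j in range(i + 1, min(i + 1 + SCOPE, len(tokens))):
                if tokens[j] in ",.;":               # scope ends at punctuation
                    break
                negated_positions.add(j)
    return {c: any(p in negated_positions
                   for p, tok in enumerate(tokens) if tok == c.lower())
            for c in concepts}

print(negated_concepts("Kein Hinweis auf Pneumothorax, Erguss beidseits.",
                       ["Pneumothorax", "Erguss"]))
# -> {'Pneumothorax': True, 'Erguss': False}
```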
APA, Harvard, Vancouver, ISO, and other styles
27

Lin, Chin-Teng, Chen-Yu Wang, Kuan-Chih Huang, Shi-Jinn Horng, and Lun-De Liao. "Wearable, Multimodal, Biosignal Acquisition System for Potential Critical and Emergency Applications." Emergency Medicine International 2021 (June 10, 2021): 1–10. http://dx.doi.org/10.1155/2021/9954669.

Full text
Abstract:
For emergency or intensive-care units (ICUs), patients with unclear consciousness or unstable hemodynamics often require aggressive monitoring by multiple monitors. Complicated pipelines or lines increase the burden on patients and inconvenience for medical personnel. Currently, many commercial devices provide related functionalities. However, most devices measure only one biological signal, which can increase the budget for users and cause difficulty in remote integration. In this study, we develop a wearable device that integrates electrocardiography (ECG), electroencephalography (EEG), and blood oxygen machines for medical applications with the hope that it can be applied in the future. We develop an integrated multiple-biosignal recording system based on a modular design. The developed system monitors and records EEG, ECG, and peripheral oxygen saturation (SpO2) signals for health purposes simultaneously in a single setting. We use a logic level converter to connect the developed EEG module (BR8), ECG module, and SpO2 module to a microcontroller (Arduino). The modular data are then smoothly encoded and decoded through consistent overhead byte stuffing (COBS). This developed system has passed simulation tests and exhibited proper functioning of all modules and subsystems. In the future, the functionalities of the proposed system can be expanded with additional modules to support various emergency or ICU applications.
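COBS, mentioned above, is a standard framing scheme that removes zero bytes from a payload so that 0x00 can be used as a packet delimiter between modules; a generic encoder sketch (a reference-style implementation, not the authors' firmware) is shown below.

```python
# Generic sketch of COBS (consistent overhead byte stuffing) encoding.
# Standard reference-style implementation, not the authors' firmware.
def cobs_encode(data: bytes) -> bytes:
    out, block = bytearray(), bytearray()
    for byte in data:
        if byte == 0:
            out.append(len(block) + 1)     # code byte = distance to the next zero
            out += block
            block.clear()
        else:
            block.append(byte)
            if len(block) == 254:          # maximum run of non-zero bytes per group
                out.append(255)
                out += block
                block.clear()
    out.append(len(block) + 1)             # final group
    out += block
    return bytes(out)

print(cobs_encode(b"\x11\x22\x00\x33").hex())   # -> 0311220233
```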
APA, Harvard, Vancouver, ISO, and other styles
28

Chatterjee, Aniruddha, Euan J. Rodger, Peter A. Stockwell, Robert J. Weeks, and Ian M. Morison. "Technical Considerations for Reduced Representation Bisulfite Sequencing with Multiplexed Libraries." Journal of Biomedicine and Biotechnology 2012 (2012): 1–8. http://dx.doi.org/10.1155/2012/741542.

Full text
Abstract:
Reduced representation bisulfite sequencing (RRBS), which couples bisulfite conversion and next-generation sequencing, is an innovative method that specifically enriches genomic regions with a high density of potential methylation sites and enables investigation of DNA methylation at single-nucleotide resolution. Recent advances in the Illumina DNA sample preparation protocol and sequencing technology have vastly improved sequencing throughput capacity. Although the new Illumina technology is now widely used, the unique challenges associated with multiplexed RRBS libraries on this platform have not been previously described. We have made modifications to the RRBS library preparation protocol to sequence multiplexed libraries on a single flow cell lane of the Illumina HiSeq 2000. Furthermore, our analysis incorporates a bioinformatics pipeline specifically designed to process bisulfite-converted sequencing reads and evaluate the output and quality of the sequencing data generated from the multiplexed libraries. We obtained an average of 42 million paired-end reads per sample for each flow cell lane, with a high unique mapping efficiency to the reference human genome. Here we provide a roadmap of the modifications, strategies, and troubleshooting approaches we implemented to optimize sequencing of multiplexed libraries on an RRBS background.
APA, Harvard, Vancouver, ISO, and other styles
29

Susanti, Vita, Agus Hartanto, Ridwan Arief Subekti, Hendri Maja Saputra, Estiko Rijanto, and Abdul Hapid. "Pengurangan Subsidi BBM dan Polusi Udara Melalui Kebijakan Program Konversi dari BBM ke BBG Untuk Kendaraan di Propinsi Jawa Barat." Journal of Mechatronics, Electrical Power, and Vehicular Technology 1, no. 2 (March 9, 2012): 43–52. http://dx.doi.org/10.14203/j.mev.2010.v1.43-52.

Full text
Abstract:
The number of vehicles that use oil-based fuel (BBM) is increasing every year in Indonesia, while the national oil reserve is shrinking, so oil has to be imported. The impacts of using oil-based fuel are increasing subsidies and air pollution. It is therefore becoming important to replace oil with another, environmentally friendly energy source, one of which is gas (BBG). Based on the number of vehicles and the gas pipeline infrastructure, parts of northern West Java can potentially be chosen for implementation of the conversion program to gas (BBG). The number of vehicles in potential regions such as Depok, Cibinong, Bogor, Bekasi, Cikarang, Karawang, Purwakarta, Cirebon, and Bandung is around 875,505 units. From these data, we simulated the potential profit to be gained each year by converting 10% of vehicles in the first year and increasing the share by 5% every subsequent year. By investing 3.16 trillion in conversion, 14.9 trillion can be achieved in the form of fuel subsidy savings. In addition, the emission reduction converted into CDM (clean development mechanism) credits can become local revenue. The total CDM value generated over 5 years is predicted to be US$772,385. From this study, it can be concluded that converting from oil (BBM) to gas (BBG) is highly beneficial.
APA, Harvard, Vancouver, ISO, and other styles
30

Winkler, Robert. "An evolving computational platform for biological mass spectrometry: workflows, statistics and data mining with MASSyPup64." PeerJ 3 (November 17, 2015): e1401. http://dx.doi.org/10.7717/peerj.1401.

Full text
Abstract:
In biological mass spectrometry, crude instrumental data need to be converted into meaningful theoretical models. Several data processing and data evaluation steps are required to arrive at the final results. These operations are often difficult to reproduce because of overly specific computing platforms. This effect, known as ‘workflow decay’, can be diminished by using a standardized informatic infrastructure. Thus, we compiled an integrated platform, which contains ready-to-use tools and workflows for mass spectrometry data analysis. Apart from general unit operations, such as peak picking and identification of proteins and metabolites, we put a strong emphasis on the statistical validation of results and Data Mining. MASSyPup64 includes, e.g., the OpenMS/TOPPAS framework, the Trans-Proteomic-Pipeline programs, the ProteoWizard tools, X!Tandem, Comet and SpiderMass. The statistical computing language R is installed with packages for MS data analyses, such as XCMS/metaXCMS and MetabR. The R package Rattle provides user-friendly access to multiple Data Mining methods. Further, we added the non-conventional spreadsheet program teapot for editing large data sets and a command line tool for transposing large matrices. Individual programs, console commands and modules can be integrated using the Workflow Management System (WMS) taverna. We explain the useful combination of the tools by practical examples: (1) a workflow for protein identification and validation, with subsequent Association Analysis of peptides, (2) cluster analysis and Data Mining in targeted Metabolomics, and (3) raw data processing, Data Mining and identification of metabolites in untargeted Metabolomics. Association Analyses reveal relationships between variables across different sample sets. We present its application for finding co-occurring peptides, which can be used for targeted proteomics, the discovery of alternative biomarkers and protein–protein interactions. Data Mining derived models displayed a higher robustness and accuracy for classifying sample groups in targeted Metabolomics than cluster analyses. Random Forest models do not only provide predictive models, which can be deployed for new data sets, but also the variable importance. We demonstrate that the latter is especially useful for tracking down significant signals and affected pathways in untargeted Metabolomics. Thus, Random Forest modeling supports the unbiased search for relevant biological features in Metabolomics. Our results clearly manifest the importance of Data Mining methods to disclose non-obvious information in biological mass spectrometry. The application of a Workflow Management System and the integration of all required programs and data in a consistent platform makes the presented data analysis strategies reproducible for non-expert users. The simple remastering process and the Open Source licenses of MASSyPup64 (http://www.bioprocess.org/massypup/) enable the continuous improvement of the system.
APA, Harvard, Vancouver, ISO, and other styles
31

Pan, Chao-Yu, Wei-Ting Kuo, Chien-Yuan Chiu, and Wen-chang Lin. "Visual Display of 5p-arm and 3p-arm miRNA Expression with a Mobile Application." BioMed Research International 2017 (2017): 1–7. http://dx.doi.org/10.1155/2017/6037168.

Full text
Abstract:
MicroRNAs (miRNAs) play important roles in human cancers. In previous studies, we demonstrated that both the 5p-arm and the 3p-arm of mature miRNAs can be expressed from the same precursor, and we further interrogated 5p-arm and 3p-arm miRNA expression with a comprehensive arm-feature annotation list. To assist biologists in visualizing the differential 5p-arm and 3p-arm miRNA expression patterns, we utilized a user-friendly mobile App to display The Cancer Genome Atlas (TCGA) miRNA-Seq expression information. We have collected over 4,500 miRNA-Seq datasets from 15 TCGA cancer types and further processed them with the 5p-arm and 3p-arm annotation analysis pipeline. In order to be displayed with the RNA-Seq Viewer App, the annotated 5p-arm and 3p-arm miRNA expression information and the miRNA gene loci information were converted into SQLite tables. In this distinct application, for any given miRNA gene, the 5p-arm miRNA is illustrated on top of the chromosome ideogram and the 3p-arm miRNA is illustrated on the bottom of the chromosome ideogram. Users can then easily interrogate the differentially expressed 5p-arm/3p-arm miRNAs with their mobile devices. This study demonstrates the feasibility and utility of the RNA-Seq Viewer App in addition to mRNA-Seq data visualization.
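As an illustration of how such arm-level expression data might be stored for a viewer, here is a minimal SQLite sketch; the schema, column names, and sample row are illustrative assumptions, not the tables actually shipped with the RNA-Seq Viewer App.

```python
# Minimal sketch of an SQLite table for 5p-arm/3p-arm miRNA expression.
# Schema and sample row are illustrative assumptions only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE mirna_arm_expression (
        mirna_gene   TEXT,     -- precursor name, e.g. hsa-mir-21
        arm          TEXT,     -- '5p' or '3p'
        chrom        TEXT,
        start_pos    INTEGER,
        end_pos      INTEGER,
        cancer_type  TEXT,     -- TCGA study abbreviation
        rpm          REAL      -- reads per million
    )
""")
conn.execute("INSERT INTO mirna_arm_expression VALUES (?, ?, ?, ?, ?, ?, ?)",
             ("hsa-mir-21", "5p", "chr17", 1000, 1022, "BRCA", 12345.6))   # illustrative row
for row in conn.execute("SELECT arm, cancer_type, rpm FROM mirna_arm_expression"):
    print(row)
```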
APA, Harvard, Vancouver, ISO, and other styles
32

Sakamoto, Yoshitaka, Suzuko Zaha, Satoi Nagasawa, Shuhei Miyake, Yasuyuki Kojima, Ayako Suzuki, Yutaka Suzuki, and Masahide Seki. "Long-read whole-genome methylation patterning using enzymatic base conversion and nanopore sequencing." Nucleic Acids Research 49, no. 14 (May 21, 2021): e81-e81. http://dx.doi.org/10.1093/nar/gkab397.

Full text
Abstract:
Long-read whole-genome sequencing analysis of DNA methylation would provide useful information on the chromosomal context of gene expression regulation. Here we describe the development of a method that improves the read length obtained with the bisulfite-sequencing-based approach. In this method, we combined recently developed enzymatic base conversion, in which an unmethylated cytosine (C) is converted to thymine (T), with nanopore sequencing. After methylation-sensitive base conversion, the sequencing library was constructed using long-range polymerase chain reaction. This type of analysis is possible using a minimum of 1 ng genomic DNA, and an N50 read length of 3.4–7.6 kb is achieved. To analyze the produced data, which contained a substantial number of base mismatches due to sequence conversion and the inaccurate base reads of nanopore sequencing, a new analytical pipeline was constructed. To demonstrate the performance of long-read methylation sequencing, breast cancer cell lines and clinical specimens were subjected to analysis, which revealed the chromosomal methylation context of key cancer-related genes, allele-specific methylated genes, and repetitive or deletion regions. This method should turn specimens that were previously intractable because of limited amounts of available genomic DNA into tractable targets.
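The core of methylation-sensitive base conversion is that only unmethylated cytosines read as thymine, so mismatches against the reference encode the methylation state. A toy in-silico illustration is given below; the sequence and the set of methylated positions are invented for the example and do not come from the study.

```python
# Toy illustration of methylation-sensitive base conversion: unmethylated
# cytosines are read as T, methylated cytosines stay C. Sequence and the
# methylated positions are made up for the example.
def convert(seq: str, methylated: set[int]) -> str:
    return "".join(
        "T" if base == "C" and i not in methylated else base
        for i, base in enumerate(seq)
    )

reference = "ACGTCCGATCGA"
methylated_positions = {4}          # only the C at index 4 is protected
converted = convert(reference, methylated_positions)
print(converted)                    # "ATGTCTGATTGA": C->T mismatches mark unmethylated sites
```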
APA, Harvard, Vancouver, ISO, and other styles
33

Kim, Do-Yeop, and Ju-Yong Chang. "Attention-Based 3D Human Pose Sequence Refinement Network." Sensors 21, no. 13 (July 3, 2021): 4572. http://dx.doi.org/10.3390/s21134572.

Full text
Abstract:
Three-dimensional human mesh reconstruction from a single video has made much progress in recent years due to the advances in deep learning. However, previous methods still often reconstruct temporally noisy pose and mesh sequences given in-the-wild video data. To address this problem, we propose a human pose refinement network (HPR-Net) based on a non-local attention mechanism. The pipeline of the proposed framework consists of a weight-regression module, a weighted-averaging module, and a skinned multi-person linear (SMPL) module. First, the weight-regression module creates pose affinity weights from a 3D human pose sequence represented in a unit quaternion form. Next, the weighted-averaging module generates a refined 3D pose sequence by performing temporal weighted averaging using the generated affinity weights. Finally, the refined pose sequence is converted into a human mesh sequence using the SMPL module. HPR-Net is a simple but effective post-processing network that can substantially improve the accuracy and temporal smoothness of 3D human mesh sequences obtained from an input video by existing human mesh reconstruction methods. Our experiments show that the noisy results of the existing methods are consistently improved using the proposed method on various real datasets. Notably, our proposed method reduces the pose and acceleration errors of VIBE, the existing state-of-the-art human mesh reconstruction method, by 1.4% and 66.5%, respectively, on the 3DPW dataset.
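The temporal weighted-averaging step described here can be sketched in a few lines. The affinity weights below come from a simple hand-crafted similarity kernel between frames, whereas HPR-Net learns them with its weight-regression module, so this is an illustration of the averaging idea only, not the paper's network.

```python
# Sketch of temporally smoothing a unit-quaternion rotation sequence by
# weighted averaging. Weights here are a simple similarity kernel, not the
# learned pose-affinity weights of HPR-Net.
import numpy as np

def smooth_quaternions(q: np.ndarray) -> np.ndarray:
    """q: (T, 4) unit quaternions for one joint over T frames."""
    affinity = np.abs(q @ q.T)                              # frames with similar rotations get high weight
    weights = affinity / affinity.sum(axis=1, keepdims=True)
    refined = weights @ q                                   # per-frame weighted average
    return refined / np.linalg.norm(refined, axis=1, keepdims=True)  # back to unit quaternions

T = 30
noisy = np.tile([1.0, 0.0, 0.0, 0.0], (T, 1)) + 0.05 * np.random.randn(T, 4)
noisy /= np.linalg.norm(noisy, axis=1, keepdims=True)
print(smooth_quaternions(noisy)[:3])
```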
APA, Harvard, Vancouver, ISO, and other styles
34

Lin, Yi, Xianlong Tan, Bo Yang, Kai Yang, Jianwei Zhang, and Jing Yu. "Real-time Controlling Dynamics Sensing in Air Traffic System." Sensors 19, no. 3 (February 7, 2019): 679. http://dx.doi.org/10.3390/s19030679.

Full text
Abstract:
In order to obtain real-time controlling dynamics in air traffic system, a framework is proposed to introduce and process air traffic control (ATC) speech via radiotelephony communication. An automatic speech recognition (ASR) and controlling instruction understanding (CIU)-based pipeline is designed to convert the ATC speech into ATC related elements, i.e., controlling intent and parameters. A correction procedure is also proposed to improve the reliability of the information obtained by the proposed framework. In the ASR model, acoustic model (AM), pronunciation model (PM), and phoneme- and word-based language model (LM) are proposed to unify multilingual ASR into one model. In this work, based on their tasks, the AM and PM are defined as speech recognition and machine translation problems respectively. Two-dimensional convolution and average-pooling layers are designed to solve special challenges of ASR in ATC. An encoder–decoder architecture-based neural network is proposed to translate phoneme labels into word labels, which achieves the purpose of ASR. In the CIU model, a recurrent neural network-based joint model is proposed to detect the controlling intent and label the controlling parameters, in which the two tasks are solved in one network to enhance the performance with each other based on ATC communication rules. The ATC speech is now converted into ATC related elements by the proposed ASR and CIU model. To further improve the accuracy of the sensing framework, a correction procedure is proposed to revise minor mistakes in ASR decoding results based on the flight information, such as flight plan, ADS-B. The proposed models are trained using real operating data and applied to a civil aviation airport in China to evaluate their performance. Experimental results show that the proposed framework can obtain real-time controlling dynamics with high performance, only 4% word-error rate. Meanwhile, the decoding efficiency can also meet the requirement of real-time applications, i.e., an average 0.147 real time factor. With the proposed framework and obtained traffic dynamics, current ATC applications can be accomplished with higher accuracy. In addition, the proposed ASR pipeline has high reusability, which allows us to apply it to other controlling scenes and languages with minor changes.
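The two figures of merit quoted in the abstract, word-error rate and real-time factor, are simple to compute; a small worked example follows. The ATC-style utterances are invented for illustration and the timing numbers are placeholders consistent with the reported 0.147 real-time factor.

```python
# Word error rate (WER) via word-level edit distance, and real-time factor (RTF),
# computed on toy data. Utterances and timings are illustrative only.
def wer(reference: list[str], hypothesis: list[str]) -> float:
    d = [[0] * (len(hypothesis) + 1) for _ in range(len(reference) + 1)]
    for i in range(len(reference) + 1):
        d[i][0] = i
    for j in range(len(hypothesis) + 1):
        d[0][j] = j
    for i in range(1, len(reference) + 1):
        for j in range(1, len(hypothesis) + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[-1][-1] / len(reference)

ref = "climb and maintain flight level three two zero".split()
hyp = "climb and maintain flight level three too zero".split()
print(f"WER = {wer(ref, hyp):.2%}")            # one substitution out of eight words

# RTF = decoding time / audio duration; 0.147 means decoding ~6.8x faster than real time.
print(f"RTF = {1.47 / 10.0:.3f}")              # e.g. 1.47 s to decode 10 s of speech
```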
APA, Harvard, Vancouver, ISO, and other styles
35

Jeremiah Uriah Richard and Godwill Tamunobiekiri Pepple. "Selecting optimal site for solar photovoltaic plant in Ikwerre L.G.A., Rivers State, Nigeria." Global Journal of Engineering and Technology Advances 5, no. 2 (November 30, 2020): 071–85. http://dx.doi.org/10.30574/gjeta.2020.5.2.0102.

Full text
Abstract:
Erratic power supply is a serious problem in most parts of Rivers State, Nigeria in general and Ikwerre Local Government Area in particular. This situation not only halts the social and economic development of the area but has also given rise to social vices such as armed robbery, kidnapping, and other criminal activities. Renewable energy is an alternative form of energy aimed at alleviating the problems of erratic power supply. It is generally considered the cleanest form of energy. Solar photovoltaics is a type of renewable energy that derives its energy from the sun. The construction of a solar plant requires the selection of a suitable location for the generation of optimal energy. The purpose of the study is to determine suitable locations in Ikwerre Local Government Area, Rivers State to site a solar photovoltaic plant using multi-criteria analysis (MCA) in ESRI’s ArcGIS. The datasets used for the determination of the optimal sites include solar radiation and slope maps produced from a digital terrain model (DTM), pipeline, road network, land use/cover map, soil map, and settlement. The datasets were converted to raster and reclassed into six classes for the purpose of data integration. The datasets were weighted according to their relative importance in the weighted overlay tool. Solar radiation has the highest percentage influence (40), followed by proximity to pipeline and road network at 15 each. The model produced four suitability classes ranging from poorly suitable to highly suitable. The highly suitable class has an area of 10,139.87 ha with 548 polygons, representing 15.78% of the study area. Further analysis was carried out using the highly suitable class and the settlement layer, and three (3) optimal sites were identified as most suitable for siting a solar plant. The three polygons are located in the region with very high solar radiation, accessible to roads and away from built-up areas. These results suggest the usefulness of GIS in site selection, particularly in siting solar photovoltaic plants. It is recommended that further studies include the transmission line, which was omitted in this analysis owing to the inability to obtain the shapefile from the ministry of power.
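The weighted-overlay step at the heart of this analysis can be sketched numerically. The weights for solar radiation (40%) and proximity to pipeline and road (15% each) follow the abstract; the split of the remaining 30% across the other layers and the random rasters are assumptions for illustration, not values from the study.

```python
# Sketch of a weighted-overlay suitability model over reclassified rasters (1-6).
# Solar radiation 40%, pipeline and road proximity 15% each (from the abstract);
# the remaining 30% split and the raster values are assumed.
import numpy as np

rng = np.random.default_rng(1)
shape = (100, 100)
layers = {
    "solar_radiation":     (0.40, rng.integers(1, 7, shape)),
    "pipeline_proximity":  (0.15, rng.integers(1, 7, shape)),
    "road_proximity":      (0.15, rng.integers(1, 7, shape)),
    "land_cover":          (0.10, rng.integers(1, 7, shape)),
    "soil":                (0.10, rng.integers(1, 7, shape)),
    "slope":               (0.10, rng.integers(1, 7, shape)),
}

suitability = sum(w * raster for w, raster in layers.values())
# Reclass the continuous score into four suitability classes (poorly to highly suitable).
classes = np.digitize(suitability, bins=np.quantile(suitability, [0.25, 0.50, 0.85]))
print("highly suitable cells:", int((classes == 3).sum()))
```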
APA, Harvard, Vancouver, ISO, and other styles
36

Zhang, Yong Yong. "A Design Scheme of Device for Producing Fresh Water." Advanced Materials Research 912-914 (April 2014): 559–62. http://dx.doi.org/10.4028/www.scientific.net/amr.912-914.559.

Full text
Abstract:
This paper presents the design of a new device for producing fresh water, intended to address freshwater shortages on offshore drilling platforms and barren islands. The device first converts wave energy into gravitational potential energy of a floater. The potential energy of the floater is then converted into mechanical energy by an intermediate transmission device, and the mechanical energy is converted into pressure energy by a vacuum pump. The system pressure is thereby reduced, the saturation temperature of water falls, and the heat required to evaporate seawater decreases, so the device saves energy. A steady-state mathematical model of the device was first established. Analysis of the model shows that the water production can meet the freshwater requirements of offshore drilling platforms and that the device is economical and has a good energy-saving effect. The research proceeds as follows. First, by consulting published data and using sampling methods, the daily freshwater demand of an offshore drilling platform and the minimum required water production capacity are determined, together with the structural framework of the platform and the weather-related natural factors of the surrounding sea area in each season (for example, wind speed, wave height, and wave period). Second, based on the analysis of the survey results and data, the dilution tank capacity and the minimum pressure it must withstand are calculated; the shape, size, and materials of the floater are designed and selected; and the installation of the negative-pressure freshwater conversion device on the drilling platform is reasonably arranged. Third, taking the lower limit, the tank capacity, the size of the floater, and possible errors into account, the working volume of the vacuum pump is determined, materials are selected, the pipelines are designed, and the size of the solar panels and the electricity they generate are calculated. Fourth, the minimum driving force, the average working distance of the floater, and the power of the pump are calculated from the data. Fifth, all of the data are considered to verify, through calculation, whether the design of the device is reasonable; according to the minimum driving force and the buoyancy of the floater, the parameters of the intermediate transmission mechanism are calculated, an efficient transmission device is designed, and the proper placement of the solar energy device is considered. The research process is summarized in Fig. 1.
APA, Harvard, Vancouver, ISO, and other styles
37

Nielsen, Tove, Anders Mathiesen, and Malene Bryde-Auken. "Base Quaternary in the Danish parts of the North Sea and Skagerrak." Geological Survey of Denmark and Greenland (GEUS) Bulletin 15 (July 10, 2008): 37–40. http://dx.doi.org/10.34194/geusb.v15.5038.

Full text
Abstract:
Over the years, several maps of the base Quaternary surface of the Danish area have been published. However, the maps have either been local in character (e.g. Håkansson & Pedersen 1992; Huuse et al. 2001) or have concentrated on special topics such as tunnel valleys (e.g. Huuse & Lykke-Andersen 2000) or glaciotectonic features (e.g. Klint & Pedersen 1995; Andersen et al. 2005). The only published map of a more regional character is that of Binzer & Stockmarr (1994), which covers onshore Denmark and eastern Danish waters. Here we present for the first time a regional map of the base Quaternary surface for the entire Danish sector of the North Sea and Skagerrak based on interpretations of reflection seismic data at the Geological Survey of Denmark and Greenland (GEUS) (Fig. 1). The new map has been depth-converted and merged with the onshore map of Binzer & Stockmarr (1994), and thus the first map covering the entire Danish land and sea areas has been compiled. The definition of the base Quaternary is a current issue of debate. In this article, we follow Gradstein et al. (2004) who place the base Quaternary at base Gelasian, which is dated to 2.59 Ma. In parts of the studied area, glacial tectonic features in the form of thrust complexes can be seen on the seismic data. Here the base Quaternary surface has been placed at the base of the dislocated thrust units, corresponding to the basal décollement horizon. The base Quaternary surface is of both academic and practical interest. The depth to the base Quaternary surface and its morphology are of interest to the understanding of the Quaternary development of the region, but are also important in relation to offshore constructions such as oil and gas platforms, pipelines and windmills.
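For readers unfamiliar with depth conversion of reflection seismic horizons, the basic step is to turn two-way traveltime into depth with a velocity model; a minimal sketch with a single assumed average velocity is shown below (real depth conversion, as used for this map, relies on laterally varying velocities).

```python
# Minimal sketch of depth-converting a two-way traveltime (TWT) pick, assuming
# one constant average velocity; the velocity value here is an assumption.
def twt_to_depth(twt_ms: float, v_avg_m_s: float = 1800.0) -> float:
    """Depth in metres below datum from two-way traveltime in milliseconds."""
    return v_avg_m_s * (twt_ms / 1000.0) / 2.0

print(twt_to_depth(250.0))   # 250 ms TWT -> 225 m at 1800 m/s
```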
APA, Harvard, Vancouver, ISO, and other styles
38

Ilyakhinskii, A. V., V. M. Rodyushkin, D. A. Ryabov, A. A. Khlybov, and V. I. Erofeev. "STUDY ACOUSTIC EMISSION SIGNALS AT TENSION STEEL 20." Problems of strenght and plasticity 83, no. 2 (2021): 188–97. http://dx.doi.org/10.32326/1814-9146-2021-83-2-188-197.

Full text
Abstract:
An investigation was made of acoustic emission signals during uniaxial tensile testing of flat specimens of steel 20, a steel used for parts of welded structures with a large volume of welding, as well as for pipelines, collectors and other parts operating at temperatures from –40 to 450 °C under pressure. Tensile testing with simultaneous registration of acoustic emission was carried out on a universal testing machine manufactured by Tinius Olsen Ltd, model H100KU, at an active-gripper speed of 0.05 meters per minute. Registration of AE signals was carried out using wideband GT350 sensors from GlobalTest and a National Instruments 6363X analog-to-digital converter, with subsequent storage of the registration results in the form of a time series in the computer memory. A comparative analysis of the amplitude distribution of the AE signal for the yield-plateau region and the fracture region was carried out in terms of the information entropy, the fractal dimension, and the self-organization parameter. It was found that the self-organization parameter of the amplitude distribution of the signal is the most informative in describing the processes associated with acoustic emission. As additional information, it is advisable to use data on the structure of the self-organization parameter. The results obtained indicate the possibility of using the statistical model of the Dirichlet distribution as a model of processes associated with the appearance of acoustic emission signals from sources of incipient and developing defects during routine tests of products made of high-quality structural carbon steels with a pearlite-ferrite structure. The paper also presents a version of the model and modeling algorithms for FE-modeling of corrosion cracking processes in structural elements loaded by pressure and exposed to aggressive corrosion media. To assess the effectiveness of the presented models and algorithms, the failure process of a thin-walled tubular specimen partly submerged in a chlorine-containing liquid and loaded by axial tension is numerically modeled.
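One of the descriptors compared in the study, the information entropy of the AE amplitude distribution, is straightforward to compute; a sketch follows. The synthetic amplitude series below merely stand in for the recorded yield-plateau and pre-fracture signals.

```python
# Sketch: Shannon entropy of an acoustic-emission amplitude distribution.
# The synthetic amplitude arrays are placeholders for the recorded AE time series.
import numpy as np

def amplitude_entropy(amplitudes: np.ndarray, bins: int = 64) -> float:
    hist, _ = np.histogram(amplitudes, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())        # entropy in bits

rng = np.random.default_rng(2)
yield_stage = rng.normal(0.0, 1.0, 50_000)       # hypothetical yield-plateau signal
fracture_stage = rng.normal(0.0, 3.0, 50_000)    # hypothetical pre-fracture signal
print(amplitude_entropy(yield_stage), amplitude_entropy(fracture_stage))
```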
APA, Harvard, Vancouver, ISO, and other styles
39

Shcherbakova, D. V., and O. E. Ignashin. "Innovations in Housing and Communal Services to Increase Real Incomes of the Russian Population." Administrative Consulting, no. 5 (July 23, 2021): 146–57. http://dx.doi.org/10.22394/1726-1139-2021-5-146-157.

Full text
Abstract:
The article analyzes innovative technologies for heating residential buildings as an opportunity to solve the problems of housing and communal services in the country and a way to increase the real income of the population. The methods of statistical data analysis, the logical method, and the method of mathematical modeling are used. The problem under study is that the Russian economy has been experiencing a decline in real incomes for a long period of time. Coronavirus restrictions have exacerbated the existing trends. At the same time, a significant share of the expenses of Russians is the payment for utilities. Over the past 10 years, the cost of heating has risen by 80%. The lag of the Russian energy sector behind world indicators is due to several reasons: the low energy efficiency class of houses, significant wear and tear of heating networks, functional shortcomings of centralized heating, the lack of necessary federal and regional legislative acts, and the lack of private investment. At the same time, the centralized heating system has a number of unresolved problems related to the monopoly position of the industry: significant wear of heat-network pipelines and heat-generating equipment; limitations of the maximum temperature in the cold period; systematic exceedance of the temperature-schedule values in the warm period; poor quality of hot water supply in the winter period; late start of heating in the early cold; and an obsolete and extremely dangerous method of testing heat networks. It becomes obvious that there is a need for a radical modernization of the housing and communal sector with the introduction of fundamentally new heating systems and the development of energy-saving technologies in the design, construction, and major repairs of residential buildings. The economic calculation of the use of an “Electro-converter heating system” on the example of a panel house of the 507 series showed that the annual savings in heating costs will be 79.5%. The payback period of the project is 11 years. The use of innovative systems of this type in the construction of new homes will pay off the investment much faster. The most acceptable mechanism for implementing such a project may be a public-private partnership. The use of public-private partnerships in the form of concession agreements in the construction of energy-efficient residential buildings and the introduction of innovative heating systems will create favorable conditions for the large-scale introduction of energy-saving technologies, which will have a positive impact on cost savings when paying for heating services and increase real incomes of the population.
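A quick back-of-the-envelope check ties the reported 79.5% savings and 11-year payback together: the implied investment is roughly 8.7 times the current annual heating bill. The absolute heating cost in the snippet is a placeholder, not a figure from the study.

```python
# Payback arithmetic implied by the abstract: 79.5% annual savings, 11-year payback.
# The absolute annual heating cost is an assumed placeholder.
annual_heating_cost = 100_000.0              # currency units per building, assumed
savings_rate = 0.795
annual_savings = savings_rate * annual_heating_cost
investment = 11 * annual_savings             # implied by the 11-year payback period
print(f"implied investment = {investment / annual_heating_cost:.2f} x annual heating cost")
```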
APA, Harvard, Vancouver, ISO, and other styles
40

Carpenter, Chris. "Remote Monitoring Digitizes Asset-Integrity Management." Journal of Petroleum Technology 73, no. 01 (January 1, 2021): 65–66. http://dx.doi.org/10.2118/0121-0065-jpt.

Full text
Abstract:
This article, written by JPT Technology Editor Chris Carpenter, contains highlights of paper SPE 197168, “Digitalize Asset-Integrity Management by Remote Monitoring,” by Mohamed Sahid, ADNOC, prepared for the 2019 Abu Dhabi International Petroleum Exhibition and Conference, Abu Dhabi, 11-14 November. The paper has not been peer reviewed. Monitoring of corrosion in process pipelines has always been of paramount importance in ensuring plant-asset integrity. Similarly, steam traps play an important role in ensuring steam quality and, thus, the integrity of critical assets in the plant. The complete paper discusses these two aspects of monitoring asset integrity - real-time corrosion monitoring and real-time steam-trap monitoring - as implemented by the operator. The authors highlight the importance of digitization by means of implementing wireless technology and making data available in remote work stations in real time. Real-Time Corrosion-Monitoring System Corrosion coupons and electrical resistance probes are among the most-tried and -tested methods to monitor corrosion, but the authors detail shortcomings of these systems, focusing their efforts on the option of using nonintrusive ultrasonic sensors for corrosion monitoring. Fixed ultrasonic thickness (UT) monitoring systems measure a localized thickness of vessel wall or pipe through the use of sound waves. They are the fastest method to measure wall thickness and wall loss reliably. The wall thickness is calculated from the reflection of the ultrasonic signal at both external and internal surfaces. UT systems normally include a transducer and a pulser/receiver. The type of transducer used for this application is the ultrasonic transducer, which can be either piezoelectric or variable-capacitive. The pulser generates short electric pulses of energy at a constant rate, which are converted by the transducer into short, high-frequency ultrasonic sound pulses. These pulses are then directed into the material. Any discontinuation or impurity in the path of the ultrasonic sound wave will be reflected and received by the transducer, transformed into an electric signal, and amplified by the receiver to be projected onto the display (in the case of portable UT instruments). Depending on the intensity shown on the display, information about the impurity or discontinuity, such as size, orientation, and location, can be derived accurately. The shortcomings of using portable UT sensors have been overcome by the introduction of permanent UT sensors, which provide wall-thickness measurement continuously at one location in real time. Because these sensors remain fixed at one location for years, it is possible to analyze corrosion at a single point over time, thus detecting early corrosion onset. Real-Time UT Gauging. The operator installed the real-time corrosion-monitoring system in its offshore associated gas (OAG) unit. A UK-based vendor provided UT sensors along with data-management and -viewing software to support data interpretation. Twenty locations were identified in various plants of the OAG unit on the basis of criticality and previously recorded corrosion levels.
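The wall-thickness calculation behind fixed UT monitoring is simply half the round-trip time of the back-wall echo multiplied by the sound velocity in the material; a minimal sketch follows. The velocity value is a typical figure for longitudinal waves in carbon steel and the echo time is an invented example.

```python
# Ultrasonic thickness gauging: thickness = velocity * round-trip time / 2.
# ~5900 m/s is a typical longitudinal-wave velocity in carbon steel (assumed here).
def wall_thickness_mm(round_trip_us: float, velocity_m_s: float = 5900.0) -> float:
    return velocity_m_s * (round_trip_us * 1e-6) / 2.0 * 1000.0

print(f"{wall_thickness_mm(4.2):.2f} mm")   # a 4.2 us echo delay -> ~12.4 mm remaining wall
```

Tracking this single-point thickness continuously over months is what lets a permanent sensor resolve early corrosion onset that spot checks with a portable gauge would miss.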
APA, Harvard, Vancouver, ISO, and other styles
41

Fornasier, S., V. H. Hoang, P. H. Hasselmann, C. Feller, M. A. Barucci, J. D. P. Deshapriya, H. Sierks, et al. "Linking surface morphology, composition, and activity on the nucleus of 67P/Churyumov-Gerasimenko." Astronomy & Astrophysics 630 (September 20, 2019): A7. http://dx.doi.org/10.1051/0004-6361/201833803.

Full text
Abstract:
Aims. The Rosetta space probe accompanied comet 67P/Churyumov-Gerasimenko for more than two years, obtaining an unprecedented amount of unique data on the comet nucleus and inner coma. This has enabled us to study its activity almost continuously from 4 au inbound to 3.6 au outbound, including the perihelion passage at 1.24 au. This work focuses on identifying the source regions of faint jets and outbursts and on studying the spectrophotometric properties of some outbursts. We use observations acquired with the OSIRIS/NAC camera during July–October 2015, that is, close to perihelion. Methods. We analyzed more than 2000 images from NAC color sequences acquired with 7–11 filters covering the 250–1000 nm wavelength range. The OSIRIS images were processed with the OSIRIS standard pipeline up to level 3, that is, converted into radiance factor and then corrected for the illumination conditions. For each color sequence, color cubes were produced by stacking registered and illumination-corrected images. Results. More than 200 jets of different intensities were identified directly on the nucleus. Some of the more intense outbursts appear spectrally bluer than the comet dark terrain in the visible-to-near-infrared region. We attribute this spectral behavior to icy grains mixed with the ejected dust. Some of the jets have an extremely short lifetime. They appear on the cometary surface during the color sequence observations, and vanish within a few minutes after reaching their peak. We also report a resolved dust plume observed in May 2016 at a resolution of 55 cm pixel−1, which allowed us to estimate an optical depth of ~0.65 and an ejected mass of ~2200 kg, assuming a grain bulk density of ~800 kg m−3. We present the results on the location, duration, and colors of active sources on the nucleus of 67P from the medium-resolution (i.e., 6–10 m pixel−1) images acquired close to perihelion passage. The observed jets are mainly located close to boundaries between different morphological regions. Some of these active areas were observed and investigated at higher resolution (up to a few decimeters per pixel) during the last months of operations of the Rosetta mission. Conclusions. These observations allow us to investigate the link between morphology, composition, and activity of cometary nuclei. Jets depart not only from cliffs, but also from smooth and dust-covered areas, from fractures, pits, or cavities that cast shadows and favor the recondensation of volatiles. This study shows that faint jets or outbursts continuously contribute to the cometary activity close to perihelion passage, and that these events are triggered by illumination conditions. Faint jets or outbursts are not associated with a particular terrain type or morphology.
APA, Harvard, Vancouver, ISO, and other styles
42

Walker, Brian A., Mehmet K. Samur, Konstantinos Mavrommatis, Cody Ashby, Christopher P. Wardell, Maria Ortiz, Fadi Towfic, et al. "The Multiple Myeloma Genome Project: Development of a Molecular Segmentation Strategy for the Clinical Classification of Multiple Myeloma." Blood 128, no. 22 (December 2, 2016): 196. http://dx.doi.org/10.1182/blood.v128.22.196.196.

Full text
Abstract:
Abstract Introduction Segmenting multiple myeloma (MM) into subgroups with distinct pathogenesis and clinical behavior is important in order to move forward with advancements in therapy and implement a targeted therapy approach. Current technologies have elucidated five major translocation groups, which have a varying effect on prognosis: t(4;14), t(6;14), t(11;14), t(14;16) and t(14;20) along with recurrent copy number changes including deletion of CDKN2C (1p32.3) and TP53 (17p13.1) as well as gain or amplification of 1q21. However, minor translocation and mutational groups are poorly described because sample numbers are limited in small datasets. The availability of multiple sets of high quality mutation data associated with clinical outcomes has provided a unique opportunity in MM whereby clustering mutational data with chromosomal aberrations in the context of gene expression we can develop a molecular classification system to segment the disease into therapeutically meaningful subgroups. The Multiple Myeloma Genome Project (MGP) is a global collaborative initiative that aims to develop a molecular segmentation strategy for MM to develop clinically relevant tests that could improve diagnosis, prognosis, and treatment of patients with MM. Materials and methods We have established a set of 2161 patients for which whole exome sequencing (WES; n=1436), Whole Genome Sequencing (WGS; n=708), targeted panel sequencing (n=993) and expression data from RNA-Seq and Gene Expression arrays (n=1497) were available. These data were derived from the Myeloma XI trial (UK), Intergroupe Francophone du Myeloma/Dana-Faber Cancer Institute (MA), The Myeloma Institute (AR) and the Multiple Myeloma Research Foundation (IA1 - IA8). We assembled all data on a secure site and analyzed it using a streamlined and consistent pipeline using state of the art tools. First, BAM were converted to FASTQ using Picard tools v2.1.1 to extract read sequences and base quality scores. Next, all reads were realigned to the human genome assembly hg19 using BWA-mem. Duplicate marking and sorting was performed using Picard tools v2.1.1. For QAQC we use FASTQC and Picard tools. We identified somatic single nucleotide variants and indels with Mutect2 using default parameters. Translocations and large chromosomal aberrations were identified using MANTA and breakdancer and inferred copy number abnormalities and homozygous deletions using Sequenza v2.1.2 and ControlFreeC. Results We have begun to integrate these diverse large genomic datasets with various correlates. Samples were stratified by RNA-seq expression values and WES/WGS to identify the main cytogenetic groups with high concordance. In addition to the main translocation groups, translocations into MAFA, t(8;14), were detected in 1.2% of samples by both RNA-seq and WES/WGS. RNA-seq also detected fusion transcripts, including the known Ig-WHSC1 transcript in t(4;14). However, a proportion of identified in-frame fusion genes involved kinase domains consistent with activation of the Ras/MAPK pathway, which may be clinical targets for therapy. The main recurrent mutations included KRAS and NRAS, and negative regulators of the NF-κB pathway. In addition we identified recurrent copy number abnormalities and examined the interaction of these with mutations. This highlighted the interaction of the recurrent changes at 1p, 13q, and 17p with mutation of genes located within these regions, specifically indicating bi-allelic inactivation of CDKN2C, RB1 and TP53. 
Using WGS and RNA-Seq data we identified recurrent translocations and fusion genes that can be used to instruct therapy. Based on these data and the presence of homogeneous inactivation of key tumor expressed genes we will present clinically relevant clusters of MM that can form the basis of future risk and molecular targeted trials. Interaction of mutation with expression patterns has identified distinct expression signatures associated with mutational groups. Conclusions We have established the largest repository of molecular profiling data in MM along with associated clinical outcome data. Integrated analyses of these are enabling generation of clinically meaningful disease segments associated with differing risk. The MGP intends to build a global network by expanding collaboration with leading MM centers around the world and incorporating additional datasets through current and new collaborations. Disclosures Mavrommatis: Discitis DX: Membership on an entity's Board of Directors or advisory committees; Celgene Corporation: Employment, Equity Ownership. Ashby:University of Arkansas for Medical Sciences: Employment. Ortiz:Celgene: Employment. Towfic:Celgene: Employment, Equity Ownership; Immuneering Corp: Equity Ownership. Amatangelo:Celgene: Employment, Equity Ownership. Yu:Celgene: Employment, Equity Ownership. Avet-Loiseau:celgene: Consultancy; janssen: Consultancy; sanofi: Consultancy; amgen: Consultancy. Jackson:Janssen: Consultancy, Honoraria, Speakers Bureau; Celgene: Consultancy, Honoraria, Other: Travel support, Research Funding, Speakers Bureau; MSD: Consultancy, Honoraria, Speakers Bureau; Roche: Consultancy, Honoraria, Speakers Bureau; Takeda: Consultancy, Honoraria, Other: Travel support, Research Funding, Speakers Bureau; Amgen: Consultancy, Honoraria, Speakers Bureau. Thakurta:Celgene: Employment, Equity Ownership. Munshi:Takeda: Consultancy; Amgen: Consultancy; Janssen: Consultancy; Celgene: Consultancy; Merck: Consultancy; Pfizer: Consultancy; Oncopep: Patents & Royalties. Morgan:Univ of AR for Medical Sciences: Employment; Janssen: Research Funding; Celgene: Consultancy, Honoraria, Research Funding; Takeda: Consultancy, Honoraria; Bristol Meyers: Consultancy, Honoraria.
APA, Harvard, Vancouver, ISO, and other styles
43

Rahmat, Mahshid, Nicholas Haradhvala, Romanos Sklavenitis-Pistofidis, Jihye Park, Daisy Huynh, Mark Bustoros, Brianna Berrios, et al. "Dissecting the Epigenetic Landscape of Smoldering, Newly Diagnosed and Relapsed Multiple Myeloma Revealed IRAK3 As a Marker of Disease Progression." Blood 132, Supplement 1 (November 29, 2018): 3896. http://dx.doi.org/10.1182/blood-2018-99-115046.

Full text
Abstract:
Abstract Introduction. Multiple myeloma (MM) is a complex and heterogeneous malignancy of plasma cells that has two precursor states: monoclonal gammopathy of undetermined significance (MGUS) and smoldering multiple myeloma (SMM). MGUS and SMM are asymptomatic states that eventually give rise to overt MM, with some patients progressing, while others do not. Recent studies in MM pathobiology have highlighted epigenetic alterations that contribute to the onset, progression and heterogeneity of MM. Global hypomethylation of DNA, including tumor suppressor genes, and hypermethylation of B-cell specific enhancers, abnormal histone methylation patterns due to the overexpression of histone methyltransferases such as MMSET, and deregulation of non-coding RNAs along with mutations in different classes of chromatin modulators underline a potential for epigenetic biomarkers in disease prognosis and treatment. This study aimed to define epigenetic pathways that lead to the dynamic regulation of gene expression in MM pathogenesis. Methods. We performed ATAC-seq (Assay for Transposase-Accessible Chromatin using sequencing) and RNA-seq on 10 MM cell lines and CD138+ plasma cells isolated from bone marrow aspirates of 3 healthy donors, 9 SMM, 8 newly diagnosed MM (NDMM) and 9 relapsed (RRMM) patients. ATAC-seq reads were trimmed of adapters, aligned to hg19 using bowtie2, and filtered for mapping quality >=Q30 using the ENCODE ATAC-seq pipeline. Reads mapping to promoter regions, defined as -400 to +250 bases from a refseq transcription start site, were counted using bedtools for each sample. Promoter read counts were then normalized by the total number of reads in promoters in the sample, scaled to 1 million total reads, and converted to log10(x+1) space. Results. To characterize the epigenetic contribution to disease progression in MM, we first identified accessible promoter regions in normal plasma cells (NPC), SMM, NDMM and RRMM patients and found regions displaying differential accessibility in MM progression. Next, we intersected the list of differential accessible regions (DARs) with matched transcriptome data and observed two main clusters: genes with unaltered transcription profiles and genes in which the dynamics of open chromatin regions (OCRs) correlated with gene expression. Transcriptomic analysis revealed that a large portion of the differentially expressed (DE) genes in SMM remain DE in NDMM as compared to NPCs (882 genes out of 1642 and 1150 DE genes in SMM and NDMM, respectively). Those genes were significantly enriched for pathways like epithelial mesenchymal transition, cell cycle checkpoints and mitosis, KRAS signaling and interleukin-JAK-STAT pathways. To investigate the genes that behaved differently among the stages of disease, we looked at differential accessibility and expression in NDMM and SMM samples, and integrated them with Whole-Genome Bisulfite-Sequencing and 450K DNA-methylation data from MM patients and healthy donors (BLUEPRINT). This analysis led to the identification of novel genes in MM progression, such as the transcriptional repressor ZNF254 and IRAK3, a negative regulator of the TLR/IL1R signaling pathway. Although gene expression data for these genes showed comparable mRNA levels in SMM and NPCs, followed by a significant decrease in NDMM/ RRMM, ATAC-seq revealed a striking drop in promoter accessibility in SMM, NDMM and RRMM cases. 
Comparison of ATAC-seq peaks to DNA methylation and ChIP-seq data revealed that the altered OCR of IRAK3 is actually hypermethylated in MM patients and marked by H3K4me3, a marker of active promoters, in MM cell lines. Hypermethylation of IRAK3 has been described in hepatocellular carcinoma, where it is associated with poor prognosis. Together, our data suggest that the identified IRAK3 OCR may act as a bivalent domain that loses accessibility in the precursor states and gains DNA methylation in MM progression. Hence, IRAK3 methylation could be a novel prognostic marker in MM. Conclusion. We have generated a global epigenetic map of primary tumors from patients at the smoldering, newly diagnosed and relapsed/refractory stage of multiple myeloma. Integrative analysis of ATAC-seq data with DNA methylome, transcriptome and whole-genome map of active and repressive histone marks in our study led to the identification of IRAK3 as a novel epigenetic biomarker of disease progression. Disclosures Licht: Celgene: Research Funding. Ghobrial:Takeda: Consultancy; BMS: Consultancy; Celgene: Consultancy; Janssen: Consultancy.
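The promoter-count normalization described in the methods (scale each sample to one million promoter reads, then log10(x+1)-transform) takes only a few lines; the small count matrix below is a made-up example, not data from the study.

```python
# Promoter-count normalization as described: counts scaled to 1e6 total promoter
# reads per sample, then log10(x+1)-transformed. The count matrix is illustrative.
import numpy as np

counts = np.array([[120,  0, 45],        # rows: promoters, columns: samples
                   [300, 10, 80],
                   [ 60,  5, 20]], dtype=float)

per_sample_total = counts.sum(axis=0, keepdims=True)
scaled = counts / per_sample_total * 1_000_000.0
log_norm = np.log10(scaled + 1.0)
print(log_norm.round(2))
```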
APA, Harvard, Vancouver, ISO, and other styles
44

Guest, Erin, Byunggil Yoo, Rumen Kostadinov, Midhat S. Farooqi, Emily Farrow, Margaret Gibson, Neil Miller, Karina Shah, Tomi Pastinen, and Patrick A. Brown. "Single Cell Sequencing Reveals Heterogeneity of Gene Expression in KMT2A Rearranged Infant ALL at Relapse Compared to Diagnosis." Blood 134, Supplement_1 (November 13, 2019): 2756. http://dx.doi.org/10.1182/blood-2019-131999.

Full text
Abstract:
Introduction Infant acute lymphoblastic leukemia (ALL) with KMT2A rearrangement (KMT2A-r) is associated with a very poor prognosis. Disease free survival from the date of diagnosis is approximately 20% to 40%, depending on age, white blood cell count, and response to induction therapy. Refractory and relapsed infant ALL is often resistant to attempts at re-induction, and second remission is difficult to both achieve and maintain. Genomic sequencing studies of infant KMT2A-r ALL clinical samples have demonstrated an average of fewer than 3 additional non-silent somatic mutations per case at diagnosis, most commonly sub-clonal variants in RAS pathway genes. We previously reported relapse-associated gains in somatic variants associated with signaling, adhesion, and B-cell development pathways (Blood 2016 128:1735). We hypothesized that relapsed infant ALL is characterized by recurrent, altered patterns of gene expression. In this analysis, we utilized single cell RNA sequencing (scRNAseq) to identify candidate genes with differential expression in diagnostic vs. relapse leukemia specimens from 3 infants with KMT2A-r ALL. Methods Cryopreserved blood or bone marrow specimens from 3 infants enrolled in the Children's Oncology Group AALL0631 trial were selected for analysis. Samples from both diagnosis (DX) and relapse (RL) time points were thawed and checked for viability (>90% of cells viable) using trypan blue staining. Samples were multiplexed and processed for single cell RNA sequencing using the Chromium Single Cell 3' Library Kit (v2) and 10x Genomics Chromium controller per manufacturer's instructions (10x Genomics, Pleasanton, CA). Single cell libraries were converted to cDNA, amplified, and sequenced on an Illumina NovaSeq instrument. Two technical replicates were performed. Samples were de-multiplexed using genotype information acquired from previous whole exome sequencing (WES) and demuxlet software. Transcript alignment and counting were performed using the Cell Ranger pipeline (10x Genomics, default settings, Version 2.2.0, GRCh37 reference). Quality control, normalization, gene expression analysis, and unsupervised clustering were performed using the Seurat R package (Version 3.0). Dimensionality reduction and visualization were performed with the UMAP algorithm. Analyses were restricted to leukemia blasts with CD19 expression by scRNAseq. Results The clinical features for each case are shown in Table 1. Cells from the 3 infant ALL samples clustered together, distinct from cells of non-infant B-ALL, T-ALL, and mixed lineage acute leukemia biospecimens in the Children's Mercy scRNAseq database, but largely did not overlap with one another. For each of the 3 infant cases, cells from DX and RL time points could be distinguished by differential patterns of gene expression (Figure 1). Individual genes with statistically significant (p<0.05) log-fold change values were examined. Figure 2 summarizes the number of genes with up-regulation of expression by scRNAseq at RL compared to DX. Only 6 genes, DYNLL1, HMGB2, HMGN2, JUN, STMN1, and TUBA1B, were significantly increased at RL across all 3 cases. We repeated this analysis, restricting to leukemia blasts with CD79A expression, and identified these same 6 genes, and 4 additional genes: H2AFZ, NUCKS1, PRDX1, and TUBB, as consistently up-regulated in RL clusters. We examined the expression of candidate genes of interest, including clinically targetable genes, to compare the distribution of expression at DX and RL (Table 2). 
Conclusion Genomic factors underlying the aggressive, refractory clinical phenotype of relapsed infant ALL have yet to be defined. Each of these 3 cases demonstrates unique expression patterns at relapse, readily distinguishable from both the paired diagnostic sample and the other 2 relapse samples. Thus, scRNAseq is a powerful tool to identify heterogeneity in gene expression, with the potential to discover recurrent genomic drivers within resistant disease sub-clones. Ongoing analyses include scRNAseq in additional infant ALL samples, relative quantification of transcript expression in single cells, and comparison with bulk RNAseq data. Disclosures No relevant conflicts of interest to declare.
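The per-gene comparison between diagnosis and relapse clusters reduces to a log-fold change plus a significance test on normalized expression; a compact stand-in is shown below. The expression values are synthetic, and the study itself performed this step with the Seurat R package rather than the code sketched here.

```python
# Compact stand-in for a per-gene diagnosis-vs-relapse comparison: log2 fold
# change and a rank-sum test. Expression values are synthetic placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
dx_cells = rng.gamma(2.0, 1.0, 400)      # one gene, cells at diagnosis (DX)
rl_cells = rng.gamma(2.0, 1.6, 350)      # same gene, cells at relapse (RL)

log_fc = np.log2(rl_cells.mean() + 1e-9) - np.log2(dx_cells.mean() + 1e-9)
p_value = stats.mannwhitneyu(rl_cells, dx_cells).pvalue
print(f"log2FC = {log_fc:.2f}, p = {p_value:.2e}")
```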
APA, Harvard, Vancouver, ISO, and other styles
45

Alam, AHM Zahirul. "Editorial." IIUM Engineering Journal 19, no. 1 (June 1, 2018): i—iv. http://dx.doi.org/10.31436/iiumej.v19i1.917.

Full text
Abstract:
IIUM ENGINEERING JOURNAL CHIEF EDITOR Ahmad Faris Ismail, IIUM, Malaysia TECHNICAL EDITOR Erry Yulian Triblas Adesta, IIUM, Malaysia EXECUTIVE EDITOR AHM Zahirul Alam, IIUM, Malaysia ASSOCIATE EDITOR Anis Nurashikin Nordin, IIUM, Malaysia LANGUAGE EDITOR Lynn Mason, Malaysia COPY EDITOR Hamzah Mohd. Salleh, IIUM, Malaysia EDITORIAL BOARD MEMBERS Abdullah Al-Mamun, IIUM, Malaysia Abdumalik Rakhimov, IIUM, Malaysia Amir Akramin Shafie, IIUM, Malaysia Erwin Sulaeman, IIUM, Malaysia Hanafy Omar, Saudi Arabia Hazleen Anuar, IIUM, Malaysia Konstantin Khanin, University of Toronto, Canada Ma'an Al-Khatib, IIUM, Malaysia Md Zahangir Alam, IIUM, Malaysia Meftah Hrairi, IIUM, Malaysia Mohamed B. Trabia, United States Mohammad S. Alam, Texas A&M University-Kingsville, United States Muataz Hazza Faizi Al Hazza, IIUM, Malaysia Mustafizur Rahman, National University Singapore, Singapore Nor Farahidah Binti Za'bah, IIUM, Malaysia Ossama Abdulkhalik, Michigan Technological University, United States Rosminazuin AB. Rahim, IIUM, Malaysia Waqar Asrar, IIUM, Malaysia AIMS & SCOPE OF IIUM ENGINEERING JOURNAL The IIUM Engineering Journal, published biannually, is a carefully refereed international publication of International Islamic University Malaysia (IIUM). Contributions of high technical merit within the span of engineering disciplines; covering the main areas of engineering: Electrical and Computer Engineering; Mechanical and Manufacturing Engineering; Automation and Mechatronics Engineering; Material and Chemical Engineering; Environmental and Civil Engineering; Biotechnology and Bioengineering; Engineering Mathematics and Physics; and Computer Science and Information Technology are considered for publication in this journal. Contributions from other areas of Engineering and Applied Science are also welcomed. The IIUM Engineering Journal publishes contributions under Regular papers, Invited review papers, Short communications, Technical notes, and Letters to the editor (no page charge). Book reviews, reports of and/or call for papers of conferences, symposia and meetings, and advances in research equipment could also be published in IIUM Engineering Journal with minimum charges. REFEREES’ NETWORK All papers submitted to IIUM Engineering Journal will be subjected to a rigorous reviewing process through a worldwide network of specialized and competent referees. Each accepted paper should have at least two positive referees’ assessments. SUBMISSION OF A MANUSCRIPT A manuscript should be submitted online to the IIUM-Engineering Journal website: http://journals.iium.edu.my/ejournal. Further correspondence on the status of the paper could be done through the journal website and the e-mail addresses of the Executive Editor: zahirulalam@iium.edu.my Faculty of Engineering, International Islamic University Malaysia (IIUM), Jalan Gombak, 53100, Kuala Lumpur, Malaysia. Phone: (603) 6196 4529, Fax: (603) 6196 4488. INTERNATIONAL ADVISORY COMMITTEE A. Anwar, United States Abdul Latif Bin Ahmad, Malaysia Farzad Ismail, USM, Pulau Pinang, Malaysia Hanafy Omar, Saudi Arabia Hany Ammar, United States Idris Mohammed Bugaje, Nigeria K.B. Ramachandran, India Kunzu Abdella, Canada Luis Le Moyne, ISAT, University of Burgundy, France M Mujtaba, United Kingdom Mohamed AI-Rubei, Ireland Mohamed B Trabia, United States Mohammad S.
Alam, Texas A&M University-Kingsville, United States Nazmul Karim Ossama Abdulkhalik, Michigan Technological University, United States Razi Nalim, IUPUI, Indianapolis, Indiana, United States Syed Kamrul Islam, United States Tibor Czigany, Budapest University of Technology and Economics, Hungary Yiu-Wing Mai, The University of Sydney, Australia. Published by: IIUM Press, International Islamic University Malaysia Jalan Gombak, 53100 Kuala Lumpur, Malaysia Phone (+603) 6196-5014, Fax: (+603) 6196-6298 Website: http://iiumpress.iium.edu.my/bookshop Whilst every effort is made by the publisher and editorial board to see that no inaccurate or misleading data, opinion or statement appears in this Journal, they wish to make it clear that the data and opinions appearing in the articles and advertisement herein are the responsibility of the contributor or advertiser concerned. Accordingly, the publisher and the editorial committee accept no liability whatsoever for the consequence of any such inaccurate or misleading data, opinion or statement. IIUM Engineering Journal ISSN: 1511-788X E-ISSN: 2289-7860 Volume 19, Issue 1, June 2018 https://doi.org/10.31436/iiumej.v19i1 Table of Content CHEMICAL AND BIOTECHNOLOGY ENGINEERING ADSORPTION OF HEAVY METALS AND RESIDUAL OIL FROM PALM OIL MILL EFFLUENT USING A NOVEL ADSORBENT OF ALGINATE AND MANGROVE COMPOSITE BEADS COATED WITH CHITOSAN IN A PACKED BED COLUMN... 1 Rana Jaafar Jawad, Mohd Halim Shah Ismail, Shamsul Izhar Siajam INVESTIGATION OF BIOFLOCCULANT AS DEWATERING AID IN SLUDGE TREATMENT........................................ 15 Mohammed Saedi Jami, Maizirwan Mel, Aysha Ralliya Mohd Ariff, Qabas Marwan Abdulazeez HYDROGEN PRODUCTION FROM ETHANOL DRY REFORMING OVER LANTHANIA-PROMOTED CO/AL2O3 CATALYST............................. 24 Fahim Fayaz, Nguyen Thi Anh Nga, Thong Le Minh Pham, Huong Thi Danh, Bawadi Abdullah, Herma Dina Setiabudi, Dai-Viet Nguyen Vo OPTIMIZATION OF RED PIGMENT PRODUCTION BY MONASCUS PURPUREUS FTC 5356 USING RESPONSE SURFACE METHODOLOGY......................................................... 34 Nor Farhana Hamid And Farhan Mohd Said PRODUCTION AND STABILITY OF MYCO-FLOCCULANTS FROM LENTINUS SQUARROSULUS RWF5 AND SIMPLICILLIUM OBCLAVATUM RWF6 FOR REDUCTION OF WATER TURBIDITY.............................................................................. 48 Nessa Jebun, Md. Zahangir Alam, Abdullah Al-Mamun, Raha Ahmad Raus ROLE OF SUBSTRATE BINDING ON THE PROTEIN DYNAMICS OF AN ENDOGLUCANASE FROM FUSARIUM OXYSPORUM AT DIFFERENT TEMPERATURES .............................................................307 Abdul Aziz Ahmad, Ibrahim Ali Noorbatcha, Hamzah Mohd. Salleh CIVIL AND ENVIRONMENTAL ENGINEERING DIMINISHING SEISMIC EFFECT ON BUILDINGS USING BEARING ISOLATION....................................................... 59 A. B. M. Saiful Islam ELECTRICAL, COMPUTER AND COMMUNICATIONS ENGINEERING A DISTRIBUTED ENERGY EFFICIENT CLUSTERING ALGORITHM FOR DATA AGGREGATION IN WIRELESS SENSOR NETWORKS.................................................................................. 72 Seyed Mohammad Bagher Musavi Shirazi, Maryam Sabet, Mohammad Reza Pajoohan POWER QUALITY IMPROVEMENT WITH CASCADED MULTILEVEL CONVERTER BASED STATCOM................. 91 Mahdi Heidari, Abdonnabi Kovsarian, S. Ghodratollah Seifossadat THE EFFECTS OF CABLE CHARACTERISTICS ON MAXIMUM OVERVOLTAGE IN COMBINED OVERHEAD/CABLE LINES PROTECTED BY SURGE ARRESTERS.............................................................................. 
104 Reza Alizadeh, Mohammad Mirzaie SMART PORTABLE CRYOTHERAPY SYSTEM REPHRASED I.E. WITH CONTROLLED THERMOELECTRIC COOLING MODULES FOR MEDICAL APPLICATIONS................................................................................................ 117 Abbas Rahmani, Reza Hassanzadeh Pack Rezaee, Naser Kordani STATIC PIPELINE NETWORK PERFORMANCE OPTIMISATION USING DUAL INTERLEAVE ROUTING ALGORITHM 129 Siva Kumar Subramaniam1, Shariq Mahmood Khan, Anhar Titik, Rajagopal Nilavalan A MODIFIED MODEL BASED ON FLOWER POLLINATION ALGORITHM AND K-NEAREST NEIGHBOR FOR DIAGNOSING DISEASES........................................................................ 144 Mehdi Zekriyapanah Gashti A SINGLE LC TANK BASED ACTIVE VOLTAGE BALANCING CIRCUIT FOR BATTERY MANAGEMENT SYSTEM .158 A K M Ahasan Habib, S. M. A. Motakabber, Muhammad Ibn. Ibrahimy, A. H. M. Zahirul Alam ENGINEERING MATHEMATICS AND APPLIED SCIENCE ON THE CONTROL OF HEAT CONDUCTION.......................................... 168 Fayziev Yusuf Ergashevich MATERIALS AND MANUFACTURING ENGINEERING GREEN SYNTHESIS OF SILVER NANOPARTICLES USING SAGO (METROXYLON SAGU) VIA AUTOCLAVING METHOD......178 Aliyah Jamaludin, Che Ku Mohammad Faizal EFFECT OF ALKALINE TREATMENT ON PROPERTIES OF RATTAN WASTE AND FABRICATED BINDERLESS PARTICLEBOARD....185 Zuraida Ahmad, Maisarah Tajuddin, Nurul Farhana Fatin Salim, Zahurin Halim AMORPHOUS STRUCTURE IN CU-ZN-V-AL OXIDE COMPOSITE CATALYST FOR METHANOL REFORMING..... 197 Mohd Sabri Mahmud, Zahira Yaakob, Abu Bakar Mohamad, Wan Ramli Wan Daud, Vo Nguyen Dai Viet PERFORMANCE OF ELECTRICAL DISCHARGE MACHINING (EDM) WITH NICKEL ADDED DIELECTRIC FLUID....215 Ahsan Ali Khan, Muataz Hazza Faizi Al Hazza, A K M Mohiuddin, Nurfatihah Abdul Fattah, Mohd Radzi Che Daud ENVIRONMENTAL DEGRADATION OF DURIAN SKIN NANOFIBRE BIOCOMPOSITE.......................................... 233 Siti Nur E’zzati Mohd Apandi, Hazleen Anuar, Siti Munirah Salimah Abdul Rashid MECHANICAL AND AEROSPACE ENGINEERING A REVIEW ON RHEOLOGY OF NON-NEWTONIAN PROPERTIES OF BLOOD....................................................... 237 Esmaeel Fatahian, Naser Kordani, Hossein Fatahian NUMERICAL STUDY OF THERMAL CHARACTERISTICS OF FUEL OIL-ALUMINA AND WATER-.......................... 250 Hossein Fatahian, Hesamoddin Salarian, Majid Eshagh Nimvari, Esmaeel Fatahian A PARAMETRIC STUDY ON CONTROL OF FLOW SEPARATION OVER AN AIRFOIL IN INCOMPRESSIBLE REGIME....270 Lakshmanan Prabhu, Jonnalagadda Srinivas OPTIMIZATION OF BOX TYPE GIRDER WITH AND WITHOUT INDUSTRIAL CONSTRAINTS................................ 289 Muhammad Abid, Shahbaz Mahmood Khan, Hafiz Abdul Wajid
APA, Harvard, Vancouver, ISO, and other styles
46

Rezapour, Arash, Mohammad Bagher Tavakoli, and Farbod Setoudeh. "ANALYSIS AND DESIGN OF A NEW STRUCTURE FOR 10-BIT 350MS/S PIPELINE ANALOG TO DIGITAL CONVERTER." Gênero & Direito 8, no. 3 (August 30, 2019). http://dx.doi.org/10.22478/ufpb.2179-7137.2019v8n3.47576.

Full text
Abstract:
A 10-bit pipelined analog-to-digital converter implemented in 0.18 µm TSMC technology is proposed in this paper. A new structure is proposed to increase the speed of the pipeline analog-to-digital converter: in the first stage, no amplifier is used and a buffer instead transfers the data to the second stage. The speed of this converter is 350 MS/s. An open-loop amplifier circuit with an accurate gain of 6 and a highly accurate unity-gain buffer circuit, both with a new structure, were used. In this converter, the first 3 bits are extracted simultaneously with sampling. The proposed analog-to-digital converter was designed with a total power consumption of 75 mW from a 1.8 V supply.
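For readers new to the topic of this bibliography, the stage-by-stage bit extraction of a pipelined ADC can be illustrated behaviorally. The sketch below is a generic ideal model (k bits and a residue gain of 2^k per stage); it is not the paper's architecture, which replaces the first-stage amplifier with a buffer and uses an open-loop gain-of-6 amplifier.

```python
# Behavioral sketch of an ideal pipelined ADC: each stage quantizes a few bits,
# subtracts the corresponding DAC level, and amplifies the residue for the next
# stage. Generic model for intuition only, not the paper's buffer-based first stage.
def pipeline_adc(vin: float, vref: float = 1.0, bits_per_stage=(3, 3, 2, 2)) -> int:
    code = 0
    residue = vin
    for k in bits_per_stage:
        levels = 2 ** k
        d = min(int((residue / vref) * levels), levels - 1)    # ideal k-bit sub-ADC
        code = (code << k) | d                                  # append the stage bits
        residue = (residue - d * vref / levels) * levels        # amplified residue in [0, vref)
    return code                                                 # 10-bit code for 3+3+2+2 stages

for v in (0.0, 0.25, 0.4999, 0.75, 0.999):
    print(f"{v:6.4f} V -> code {pipeline_adc(v):4d}")
```

Because each stage only needs to resolve a few bits before handing the residue onward, all stages work concurrently on successive samples, which is what gives the pipeline architecture its throughput.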
APA, Harvard, Vancouver, ISO, and other styles
47

Vaquerizo-Hdez, Daniel, Pablo Muñoz, David F. Barrero, and Maria D. R-Moreno. "Continuous energy consumption measure approach using a DMA double-buffering technique." EURASIP Journal on Wireless Communications and Networking 2021, no. 1 (August 28, 2021). http://dx.doi.org/10.1186/s13638-021-02043-w.

Full text
Abstract:
Measuring the consumption of electronic devices is a difficult and sensitive task. Data acquisition (DAQ) systems are often used to determine such consumption. In theory, measuring energy consumption is straightforward: just by acquiring current and voltage signals we can determine the consumption. However, a number of issues arise when a fine analysis is required. The main problem is that sampling frequencies have to be high enough to detect variations in the assessed signals over time. In that regard, some popular DAQ systems are based on RISC ARM processors for microcontrollers combined with analog-to-digital converters to meet high-frequency acquisition requirements. The efficient use of direct memory access (DMA) modules combined with pipelined processing in a microcontroller makes it possible to improve the sample rate, overcoming the limitations of processing time and of the internal communication protocol. This paper presents a novel approach for high-frequency energy measurement composed of a DMA rate improvement (data acquisition logic), data processing logic and low-cost hardware. The contribution of the paper is the combination of a double-buffered signal acquisition mechanism and an algorithm that computes the device’s energy consumption using parallel data processing. The combination of these elements enables a high-frequency (continuous) energy consumption measurement of an electronic device, improving the accuracy and reducing the cost of existing systems. We have validated our approach by measuring the energy consumed by elemental circuits and wireless sensor network (WSN) motes. The results indicate that the energy measurement error is less than 5% and that the proposed method is suitable for measuring WSN motes even during sleep cycles, enabling a better characterization of their consumption profile.
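The essence of the double-buffered scheme is that the CPU integrates v·i·Δt over one completed buffer while the DMA engine fills the other; a rough sketch follows. The buffer size, sample rate, and signal values are assumptions for illustration, not parameters from the paper.

```python
# Sketch of the double-buffered energy computation: while the DMA engine fills one
# half-buffer with (voltage, current) samples, the CPU integrates v*i*dt over the
# half that just completed. Buffer size, sample rate, and signals are assumed.
import numpy as np

FS = 200_000                 # samples per second (assumed)
BUF = 4096                   # samples per half-buffer (assumed)
DT = 1.0 / FS

def energy_of_buffer(volts: np.ndarray, amps: np.ndarray) -> float:
    """Energy in joules accumulated over one completed DMA half-buffer."""
    return float(np.sum(volts * amps) * DT)

total_energy = 0.0
rng = np.random.default_rng(4)
for _ in range(10):                              # ten completed half-buffers
    v = 3.3 + 0.01 * rng.standard_normal(BUF)
    i = 0.020 + 0.002 * rng.standard_normal(BUF)
    total_energy += energy_of_buffer(v, i)       # would run while DMA fills the other half

print(f"energy over {10 * BUF / FS:.3f} s: {total_energy * 1000:.2f} mJ")
```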
APA, Harvard, Vancouver, ISO, and other styles
48

Xin-Rong Qin, Yu-Bao Ma,. "Management ofCommunication System in Pipeline IndustryBased on OLP Technology." CONVERTER, July 10, 2021, 680–86. http://dx.doi.org/10.17762/converter.98.

Full text
Abstract:
In order to effectively improve the fault self-healing ability of pipeline communication systems and reduce communication interruptions caused by fiber breaks or performance degradation of the optical line, this paper introduces the basic principle, system composition, working mode, and technical requirements of OLP technology. The OLP system is first introduced into the relay section of a long-distance pipeline to build a standby route out of the spare optical fibers. When the in-service route is broken or the optical fiber performance degrades, the OLP system can automatically switch to the standby route within 50 ms, so that the communication service is not interrupted and, in many cases, not even an error-code alarm is raised. Engineering tests in the pipeline industry verify the operability of OLP technology applied to pipeline communication systems. Using the real-time monitoring data of the optical cable collected through the OLP system, the evolution of optical cable line quality can be analyzed, preventive action can be taken in advance, and the level of communication management can be improved.
APA, Harvard, Vancouver, ISO, and other styles
49

Royse, Sarah K., Davneet S. Minhas, Brian J. Lopresti, Alice Murphy, Tyler Ward, Robert A. Koeppe, Santiago Bullich, Susan DeSanti, William J. Jagust, and Susan M. Landau. "Validation of amyloid PET positivity thresholds in centiloids: a multisite PET study approach." Alzheimer's Research & Therapy 13, no. 1 (May 10, 2021). http://dx.doi.org/10.1186/s13195-021-00836-1.

Full text
Abstract:
Background Inconsistent positivity thresholds, image analysis pipelines, and quantitative outcomes are key challenges of multisite studies using more than one β-amyloid (Aβ) radiotracer in positron emission tomography (PET). Variability related to these factors contributes to disagreement and lack of replicability in research and clinical trials. To address these problems and promote Aβ PET harmonization, we used [18F]florbetaben (FBB) and [18F]florbetapir (FBP) data from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) to derive (1) standardized Centiloid (CL) transformations and (2) internally consistent positivity thresholds based on separate young control samples. Methods We analyzed Aβ PET data using a native-space, automated image processing pipeline that is used for PET quantification in many large, multisite AD studies and trials and made available to the research community. With this pipeline, we derived SUVR-to-CL transformations using the Global Alzheimer’s Association Interactive Network data; we used reference regions for cross-sectional (whole cerebellum) and longitudinal (subcortical white matter, brain stem, whole cerebellum) analyses. Finally, we developed a FBB positivity threshold using an independent young control sample (N=62) with methods parallel to our existing FBP positivity threshold and validated the FBB threshold using a data-driven approach in ADNI participants (N=295). Results The FBB threshold based on the young sample (1.08; 18 CL) was consistent with that of the data-driven approach (1.10; 21 CL), and the existing FBP threshold converted to CL with the derived transformation (1.11; 20 CL). The following equations can be used to convert whole cerebellum- (cross-sectional) and composite- (longitudinal) normalized FBB and FBP data quantified with the native-space pipeline to CL units:
[18F]FBB: CL_whole cerebellum = 157.15 × SUVR_FBB − 151.87; threshold = 1.08 SUVR, 18 CL
[18F]FBP: CL_whole cerebellum = 188.22 × SUVR_FBP − 189.16; threshold = 1.11 SUVR, 20 CL
[18F]FBB: CL_composite = 244.20 × SUVR_FBB − 170.80
[18F]FBP: CL_composite = 300.66 × SUVR_FBP − 208.84
Conclusions FBB and FBP positivity thresholds derived from independent young control samples and quantified using an automated, native-space approach result in similar CL values. These findings are applicable to thousands of available and anticipated outcomes analyzed using this pipeline and shared with the scientific community. This work demonstrates the feasibility of harmonized PET acquisition and analysis in multisite PET studies and internal consistency of positivity thresholds in standardized units.
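Applying the cross-sectional (whole-cerebellum) transformations and thresholds reported above is a one-line linear conversion per tracer; a small sketch follows, with the coefficients taken from the abstract and arbitrary example SUVR values.

```python
# Converting whole-cerebellum-normalized SUVR to Centiloids and applying the
# reported positivity thresholds. Coefficients are from the abstract; the
# example SUVR values are arbitrary.
TRANSFORMS = {                    # tracer: (slope, intercept, SUVR positivity threshold)
    "FBB": (157.15, -151.87, 1.08),
    "FBP": (188.22, -189.16, 1.11),
}

def to_centiloid(tracer: str, suvr: float) -> tuple[float, bool]:
    slope, intercept, threshold = TRANSFORMS[tracer]
    return slope * suvr + intercept, suvr >= threshold

for tracer, suvr in (("FBB", 1.02), ("FBB", 1.25), ("FBP", 1.15)):
    cl, positive = to_centiloid(tracer, suvr)
    print(f"{tracer} SUVR {suvr:.2f} -> {cl:5.1f} CL, amyloid-positive: {positive}")
```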
APA, Harvard, Vancouver, ISO, and other styles
50

Routier, Alexandre, Ninon Burgos, Mauricio Díaz, Michael Bacci, Simona Bottani, Omar El-Rifai, Sabrina Fontanella, et al. "Clinica: An Open-Source Software Platform for Reproducible Clinical Neuroscience Studies." Frontiers in Neuroinformatics 15 (August 13, 2021). http://dx.doi.org/10.3389/fninf.2021.689675.

Full text
Abstract:
We present Clinica (www.clinica.run), an open-source software platform designed to make clinical neuroscience studies easier and more reproducible. Clinica aims for researchers to (i) spend less time on data management and processing, (ii) perform reproducible evaluations of their methods, and (iii) easily share data and results within their institution and with external collaborators. The core of Clinica is a set of automatic pipelines for processing and analysis of multimodal neuroimaging data (currently, T1-weighted MRI, diffusion MRI, and PET data), as well as tools for statistics, machine learning, and deep learning. It relies on the brain imaging data structure (BIDS) for the organization of raw neuroimaging datasets and on established tools written by the community to build its pipelines. It also provides converters of public neuroimaging datasets to BIDS (currently ADNI, AIBL, OASIS, and NIFD). Processed data include image-valued scalar fields (e.g., tissue probability maps), meshes, surface-based scalar fields (e.g., cortical thickness maps), or scalar outputs (e.g., regional averages). These data follow the ClinicA Processed Structure (CAPS) format which shares the same philosophy as BIDS. Consistent organization of raw and processed neuroimaging files facilitates the execution of single pipelines and of sequences of pipelines, as well as the integration of processed data into statistics or machine learning frameworks. The target audience of Clinica is neuroscientists or clinicians conducting clinical neuroscience studies involving multimodal imaging, and researchers developing advanced machine learning algorithms applied to neuroimaging data.
APA, Harvard, Vancouver, ISO, and other styles
