Academic literature on the topic 'Data processing pipeline'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Data processing pipeline.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Data processing pipeline"

1

Curcoll, R. Firpo, M. Delfino, C. Neissner, I. Reichardt, J. Rico, P. Tallada, and N. Tonello. "The MAGIC data processing pipeline." Journal of Physics: Conference Series 331, no. 3 (December 23, 2011): 032040. http://dx.doi.org/10.1088/1742-6596/331/3/032040.

2

Weilbacher, Peter M., Ralf Palsa, Ole Streicher, Roland Bacon, Tanya Urrutia, Lutz Wisotzki, Simon Conseil, et al. "The data processing pipeline for the MUSE instrument." Astronomy & Astrophysics 641 (September 2020): A28. http://dx.doi.org/10.1051/0004-6361/202037855.

Abstract:
The processing of raw data from modern astronomical instruments is often carried out nowadays using dedicated software, known as pipelines, largely run in automated operation. In this paper we describe the data reduction pipeline of the Multi Unit Spectroscopic Explorer (MUSE) integral field spectrograph operated at the ESO Paranal Observatory. This spectrograph is a complex machine: it records data of 1152 separate spatial elements on detectors in its 24 integral field units. Efficiently handling such data requires sophisticated software with a high degree of automation and parallelization. We describe the algorithms of all processing steps that operate on calibrations and science data in detail, and explain how the raw science data is transformed into calibrated datacubes. We finally check the quality of selected procedures and output data products, and demonstrate that the pipeline provides datacubes ready for scientific analysis.
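The automated, parallelized sequence of calibration steps this abstract describes can be illustrated with a generic sketch in Python. This is not the MUSE pipeline itself: the step names, the frame dictionary layout, and the one-frame-per-IFU workload are invented for illustration.

from multiprocessing import Pool

# Each reduction step is a pure function on a frame; independent exposures
# are reduced in parallel, mirroring the automation and parallelization the
# abstract emphasizes. Step names are illustrative, not the MUSE recipes.

def subtract_bias(frame):
    return {**frame, "data": [v - frame["bias"] for v in frame["data"]]}

def flat_field(frame):
    return {**frame, "data": [v / frame["flat"] for v in frame["data"]]}

def calibrate_wavelength(frame):
    return {**frame, "calibrated": True}

STEPS = [subtract_bias, flat_field, calibrate_wavelength]

def reduce_exposure(frame):
    for step in STEPS:
        frame = step(frame)
    return frame

if __name__ == "__main__":
    # hypothetical workload: one small frame per integral field unit
    exposures = [{"data": [100.0, 101.0], "bias": 2.0, "flat": 0.9}
                 for _ in range(24)]
    with Pool() as pool:
        reduced = pool.map(reduce_exposure, exposures)
    print(len(reduced), reduced[0]["calibrated"])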
3

Shen, Hong, and Nobuyoshi Numata. "Instruction Scheduling on a Pipelined Processor for Mechanical Measurements." Key Engineering Materials 381-382 (June 2008): 647–48. http://dx.doi.org/10.4028/www.scientific.net/kem.381-382.647.

Abstract:
Pipeline processing provides an effective way to enhance processing speed at low hardware cost. However, pipeline hazards are obstacles to the smooth pipelined execution of instructions. This paper analyzes the pipeline hazards that occur in a pipelined processor designed for data processing in mechanical measurements. Instruction scheduling and register renaming are performed to eliminate hazards. Simulation experiments are performed, and the effectiveness is confirmed.
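The read-after-write hazards and stalls that instruction scheduling tries to eliminate can be shown with a toy model. A minimal sketch, assuming a single-issue pipeline and a fixed, hypothetical result latency; the instruction tuples are invented and do not model the processor studied in the paper.

# Instructions are (dest, src1, src2, ...) register tuples. An instruction
# must wait until its source operands are available, so back-to-back
# dependences create stall cycles that a scheduler would try to hide.

def count_raw_stalls(instructions, result_latency=2):
    ready_at = {}            # register -> cycle when its value is available
    cycle = 0
    stalls = 0
    for dest, *sources in instructions:
        start = cycle
        for src in sources:
            start = max(start, ready_at.get(src, 0))   # wait for operands
        stalls += start - cycle
        cycle = start + 1                # issue one instruction per cycle
        ready_at[dest] = cycle + result_latency - 1
    return stalls

prog = [("r1", "r0"), ("r2", "r1"), ("r3", "r1", "r2")]
print(count_raw_stalls(prog))            # 2 stall cycles for this chain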
4

Leroy, Adam K., Annie Hughes, Daizhong Liu, Jérôme Pety, Erik Rosolowsky, Toshiki Saito, Eva Schinnerer, et al. "PHANGS–ALMA Data Processing and Pipeline." Astrophysical Journal Supplement Series 255, no. 1 (July 1, 2021): 19. http://dx.doi.org/10.3847/1538-4365/abec80.

5

Andrews, Peter, Charles Baltay, Anne Bauer, Nancy Ellman, Jonathan Jerke, Rochelle Lauer, David Rabinowitz, and Julia Silge. "The QUEST Data Processing Software Pipeline." Publications of the Astronomical Society of the Pacific 120, no. 868 (June 2008): 703–14. http://dx.doi.org/10.1086/588828.

6

Zuo, S., J. Li, Y. Li, S. Das, A. Stebbins, K. W. Masui, R. Shaw, J. Zhang, F. Wu, and X. Chen. "Data processing pipeline for Tianlai experiment." Astronomy and Computing 34 (January 2021): 100439. http://dx.doi.org/10.1016/j.ascom.2020.100439.

7

Shipman, R. F., S. F. Beaulieu, D. Teyssier, P. Morris, M. Rengel, C. McCoey, K. Edwards, et al. "Data processing pipeline for Herschel HIFI." Astronomy & Astrophysics 608 (December 2017): A49. http://dx.doi.org/10.1051/0004-6361/201731385.

Abstract:
Context. The HIFI instrument on the Herschel Space Observatory performed over 9100 astronomical observations, almost 900 of which were calibration observations, in the course of the nearly four-year Herschel mission. The data from each observation had to be converted from raw telemetry into calibrated products and were included in the Herschel Science Archive. Aims. The HIFI pipeline was designed to provide robust conversion from raw telemetry into calibrated data throughout all phases of the HIFI mission. Pre-launch laboratory testing was supported, as were routine mission operations. Methods. A modular software design allowed components to be easily added, removed, amended, and/or extended as the understanding of the HIFI data developed during and after mission operations. Results. The HIFI pipeline processed data from all HIFI observing modes within the Herschel automated processing environment as well as within an interactive environment. The same software can be used by the general astronomical community to reprocess any standard HIFI observation. The pipeline also recorded the consistency of processing results and provided automated quality reports. Many pipeline modules had been in use since HIFI pre-launch instrument-level testing. Conclusions. Processing in steps facilitated data analysis to discover and address instrument artefacts and uncertainties. The availability of the same pipeline components from pre-launch throughout the mission made for well-understood, tested, and stable processing. A smooth transition from one phase to the next significantly enhanced processing reliability and robustness.
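The modular design the abstract emphasizes, with steps that can be added, removed, or amended, and with per-step quality recording, can be sketched as a simple step registry. A sketch under stated assumptions: the step names, the data, and the quality-report format are illustrative, not the HIFI modules.

# Steps register themselves into an ordered pipeline; running the pipeline
# records a crude per-step quality entry, echoing the automated quality
# reports mentioned in the abstract.

PIPELINE = []

def step(name):
    def register(fn):
        PIPELINE.append((name, fn))
        return fn
    return register

@step("despike")
def despike(data):
    return [min(v, 10.0) for v in data]      # clip hypothetical spikes

@step("baseline")
def baseline(data):
    mean = sum(data) / len(data)
    return [v - mean for v in data]          # remove a constant baseline

def run(data, quality_report):
    for name, fn in PIPELINE:
        data = fn(data)
        quality_report[name] = "ok"
    return data

report = {}
print(run([1.0, 2.0, 30.0], report), report)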
8

Brumer, Irène, Dominik F. Bauer, Lothar R. Schad, and Frank G. Zöllner. "Synthetic Arterial Spin Labeling MRI of the Kidneys for Evaluation of Data Processing Pipeline." Diagnostics 12, no. 8 (July 31, 2022): 1854. http://dx.doi.org/10.3390/diagnostics12081854.

Abstract:
Accurate quantification of perfusion is crucial for the diagnosis and monitoring of kidney function. Arterial spin labeling (ASL), a completely non-invasive magnetic resonance imaging technique, is a promising method for this application. However, differences in acquisition (e.g., ASL parameters, readout) and processing (e.g., registration, segmentation) between studies impede the comparison of results. To alleviate challenges arising solely from differences in processing pipelines, synthetic data are of great value. In this work, synthetic renal ASL data were generated using body models from the XCAT phantom, and perfusion was added using the general kinetic model. Our in-house developed processing pipeline was then evaluated in terms of registration, quantification, and segmentation using the synthetic data. Registration performance was evaluated qualitatively with line profiles and quantitatively with mean structural similarity index measures (MSSIMs). Perfusion values obtained from the pipeline were compared to the values assumed when generating the synthetic data. Segmentation masks obtained by the semi-automated procedure of the processing pipeline were compared to the original XCAT organ masks using the Dice index. Overall, the pipeline evaluation yielded good results. After registration, line profiles were smoother and, on average, MSSIMs increased by 25%. Mean perfusion values for cortex and medulla were close to the assumed perfusion of 250 mL/100 g/min and 50 mL/100 g/min, respectively. Dice indices ranged from 0.80 to 0.93, 0.78 to 0.89, and 0.64 to 0.84 for the whole kidney, cortex, and medulla, respectively. The generation of synthetic ASL data allows a flexible choice of parameters, and the generated data are well suited for the evaluation of processing pipelines.
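The Dice index used for the segmentation comparison is simple to compute from two binary masks: twice the overlap divided by the sum of the mask sizes. A minimal sketch on flat 0/1 lists; the example masks are invented.

def dice(mask_a, mask_b):
    # masks are equal-length sequences of 0/1 values
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0

cortex_pred = [1, 1, 0, 1, 0, 0]
cortex_true = [1, 0, 0, 1, 1, 0]
print(round(dice(cortex_pred, cortex_true), 3))   # 0.667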
9

Chen, Rongxin, Zongyue Wang, and Yuling Hong. "Pipelined XPath Query Based on Cost Optimization." Scientific Programming 2021 (May 27, 2021): 1–16. http://dx.doi.org/10.1155/2021/5559941.

Abstract:
XPath query is a key part of XML data processing, and its performance is usually critical for XML applications. In the process of an XPath query, there is inherent seriality between query steps, which makes it difficult to parallelize the query effectively as a whole. On the other hand, although XPath query has the characteristics of data stream processing and is suitable for pipeline processing, the data flow of each query step usually varies considerably, which results in limited performance under multithreaded conditions. In this paper, we propose a pipelined XPath query method (PXQ) based on cost optimization. This method uses pipelined query primitives to process query steps based on a relation index. During pipeline construction, a cost estimation model based on XML statistics is proposed to estimate the cost of each query primitive and to guide the creation of pipeline stages through the partition of the query primitive sequence. The pipeline construction technique makes full use of available worker threads and optimizes the load balance between pipeline stages. The experimental results show that our method can adapt to multithreaded environments and stream processing scenarios of XPath queries, and that its performance is better than existing typical query methods based on data parallelism.
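The cost-guided partition of a primitive sequence into balanced pipeline stages can be illustrated with a classic contiguous-partition sketch that binary-searches the bottleneck stage cost. This is a generic illustration, not the PXQ algorithm; the per-primitive costs are invented.

# Split a sequence of estimated primitive costs into at most k contiguous
# stages while minimizing the most expensive stage (the pipeline bottleneck).

def min_bottleneck(costs, k):
    def stages_needed(limit):
        stages, load = 1, 0
        for c in costs:
            if load + c > limit:
                stages, load = stages + 1, c
            else:
                load += c
        return stages

    lo, hi = max(costs), sum(costs)
    while lo < hi:                         # binary search on the bottleneck
        mid = (lo + hi) // 2
        if stages_needed(mid) <= k:
            hi = mid
        else:
            lo = mid + 1
    return lo

costs = [3, 1, 4, 1, 5, 9, 2]              # estimated cost per primitive
print(min_bottleneck(costs, 3))            # 11: best 3-stage bottleneck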
10

Alblehai, Fahad. "A Caching-Based Pipelining Model for Improving the Input/Output Performance of Distributed Data Storage Systems." Journal of Nanoelectronics and Optoelectronics 17, no. 6 (June 1, 2022): 946–57. http://dx.doi.org/10.1166/jno.2022.3269.

Abstract:
Distributed data storage requires swift input/output (I/O) processing features to prevent pipelines from balancing requests and responses. Unpredictable data streams and fetching intervals congest the data retrieval from distributed systems. To address this issue, in this article, a Coordinated Pipeline Caching Model (CPCM) is proposed. The proposed model distinguishes request and response pipelines for different intervals of time by reallocating them. The reallocation is performed using storage and service demand analysis; in the analysis, edge-assisted federated learning is utilized. The shared pipelining process is fetched from the connected edge devices to prevent input and output congestion. In pipeline allocation and storage management, the current data state and I/O responses are augmented by distributed edges. This prevents pipeline delays and aids storage optimization through replication mitigation. Therefore, the proposed model reduces the congestion rate (57.60%), replication ratio (59.90%), and waiting time (54.95%) and improves the response ratio (5.16%) and processing rate (74.25%) for different requests.

Dissertations / Theses on the topic "Data processing pipeline"

1

Jakubiuk, Wiktor. "High performance data processing pipeline for connectome segmentation." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/106122.

Abstract:
Thesis: M. Eng. in Computer Science and Engineering, Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, February 2016.
"December 2015." Cataloged from PDF version of thesis.
Includes bibliographical references (pages 83-88).
By investigating neural connections, neuroscientists try to understand the brain and reconstruct its connectome. Automated connectome reconstruction from high-resolution electron microscopy is a challenging problem, as all neurons and synapses in a volume have to be detected. A cubic millimeter of high-resolution brain tissue takes roughly a petabyte of space, which state-of-the-art pipelines are unable to process to date. A high-performance, fully automated image processing pipeline is proposed. Using a combination of image processing and machine learning algorithms (convolutional neural networks and random forests), the pipeline constructs a 3-dimensional connectome from 2-dimensional cross-sections of a mammal's brain. The proposed system achieves a low error rate (comparable with the state of the art) and is capable of processing volumes hundreds of gigabytes in size. The main contributions of this thesis are multiple algorithmic techniques for 2-dimensional pixel classification with varying accuracy and speed trade-offs, as well as a fast object segmentation algorithm. The majority of the system is parallelized for multi-core machines, and with minor additional modifications it is expected to work in a distributed setting.
2

Nakane, Takanori. "Data processing pipeline for serial femtosecond crystallography at SACLA." Kyoto University, 2017. http://hdl.handle.net/2433/217997.

3

Gu, Wenyu. "Improving the performance of stream processing pipeline for vehicle data." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-284547.

Abstract:
The growing amount of position-dependent data (containing both geo-position data (i.e., latitude and longitude) and vehicle/driver-related information) collected from sensors on vehicles poses a challenge to the computer programs that must process the aggregate data from many vehicles. While handling this growing amount of data, these programs need to exhibit low latency and high throughput, as otherwise the value of the results of this processing is reduced. As a solution, big data and cloud computing technologies have been widely adopted by industry. This thesis examines a cloud-based processing pipeline that processes vehicle location data. The system receives real-time vehicle data and processes the data in a streaming fashion. The goal is to improve the performance of this streaming pipeline, mainly with respect to latency and cost. The work began by looking at the current solution using AWS Kinesis and AWS Lambda. A benchmarking environment was created and used to measure the current system's performance. Additionally, a literature study was conducted to find a processing framework that best meets both industrial and academic requirements. After a comparison, Flink was chosen as the new framework, and a new solution was designed around it. The performance of the current solution and the new Flink solution were then compared in the same benchmarking environment. The conclusion is that the new Flink solution has 86.2% lower latency while supporting triple the throughput of the current system at almost the same cost.
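The latency side of a benchmark like the one described can be sketched by stamping each record on ingest and reporting percentiles on egress. A minimal sketch: the process function is a stand-in for the real Kinesis/Lambda or Flink stage, and its sleep-based workload is invented.

import random
import statistics
import time

def process(record):
    time.sleep(random.uniform(0.001, 0.005))   # stand-in for real work
    return record

latencies = []
for seq in range(200):
    record = {"seq": seq, "ingest_ts": time.perf_counter()}
    record = process(record)
    latencies.append(time.perf_counter() - record["ingest_ts"])

# one-percent quantiles; index 49 is the median, index 98 the 99th percentile
q = statistics.quantiles(latencies, n=100)
print(f"p50={q[49] * 1000:.1f} ms  p99={q[98] * 1000:.1f} ms")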
4

González, Alejandro. "A Swedish Natural Language Processing Pipeline For Building Knowledge Graphs." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254363.

Abstract:
The concept of knowledge is proper only to the human being, thanks to the faculty of understanding. The immaterial concepts, independent of the material causes of experience, constitute evident proof of the existence of the rational soul that makes the human being a spiritual being, in a way independent of the material. Nowadays, research efforts in the field of Artificial Intelligence are trying to mimic this human capacity using computers, by means of "teaching" them how to read and understand human language using Machine Learning techniques related to the processing of human language. However, there are still a significant number of challenges, such as how to represent this knowledge so it can be used by a machine to infer conclusions or provide answers. This thesis presents a Natural Language Processing pipeline that is capable of building a knowledge representation of the information contained in Swedish human-generated text. The result is a system that, given Swedish text in its raw format, builds a representation, in the form of a Knowledge Graph, of the knowledge or information contained in that text.
5

SHARMA, DIVYA. "APPLICATION OF ML TO MAKE SENCE OF BIOLOGICAL BIG DATA IN DRUG DISCOVERY PROCESS." Thesis, DELHI TECHNOLOGICAL UNIVERSITY, 2021. http://dspace.dtu.ac.in:8080/jspui/handle/repository/18378.

Abstract:
Scientists have been working for years to assemble and accumulate data from biological sources to find solutions to many principal questions. Since a tremendous amount of data has been collected over the past decades and is still increasing at an exponential rate, it has become unachievable for a human being alone to handle or analyze this data. Most data collection and maintenance is now done in digitized format, and hence an organization requires better data management and analysis to convert this vast data resource into insights that serve its objectives. The continuous explosion of information from both biomedical and healthcare sources calls for urgent solutions. Healthcare data needs to be closely combined with biomedical research data to make it more effective in providing personalized medicine and better treatment procedures. Therefore, big data analytics would help in integrating large data sets for proper management, decision-making, and cost-effectiveness in any medical/healthcare organization. The scope of the thesis is to highlight the need for big data analytics in healthcare, explain the data processing pipeline, and describe the machine learning used to analyze big data.
6

Patuzzi, Ilaria. "16S rRNA gene sequencing sparse count matrices: a count data simulator and optimal pre-processing pipelines." Doctoral thesis, Università degli studi di Padova, 2018. http://hdl.handle.net/11577/3426369.

Abstract:
The study of microbial communities has deeply changed since it was first introduced in the 17th century. In the late 1970s, a breakthrough in the way bacterial communities were studied was brought about by the discovery that ribosomal RNA (rRNA) genes could be used as molecular markers to classify organisms. Some decades later, the advent of DNA sequencing technology revolutionized the study of microbial communities, permitting a culture-independent view of the overall community contained within a sample. Today, one of the most widely used approaches for microbial community profiling is based on sequencing the gene that codes for the 16S subunit of the prokaryotic ribosome (16S rRNA gene), which, being ubiquitous to all bacteria but having an exact DNA sequence unique to each species, is used as a sort of molecular fingerprint for assigning a taxonomic characterization to each community member. The advent of Next-Generation Sequencing (NGS) platforms ensured 16S rRNA gene sequencing (16S rDNA-Seq) increasing growth as the preferred methodology for microbiome studies. Despite this, the continuous development of both experimental and computational procedures for 16S rDNA-Seq caused an unavoidable lack of standardization in sequencing output data treatment and analysis. This is further complicated by the very peculiar characteristics of the matrix in which sample information is summarized after sequencing. In fact, the instrumental limit on the maximum number of obtainable sequences makes 16S rDNA-Seq data compositional, i.e., data in which the detected abundance of each bacterial species depends on the level of presence of other populations in the sample. Additionally, 16S rDNA-Seq-derived matrices are typically highly sparse (70-95% of null values). These peculiarities make the commonly adopted loan of bulk RNA sequencing tools and approaches inappropriate for 16S rDNA-Seq count matrix analyses. In particular, unspecific pre-processing steps, such as normalization, risk introducing biases in the case of highly sparse matrices. The main objective of this thesis was to identify optimal pipelines that fill the above gaps in order to ensure solid and reliable conclusions from 16S rDNA-Seq data analyses. Among all the analysis steps included in a typical pipeline, this project focused on the pre-processing of count data matrices obtained from 16S rDNA-Seq experiments. This task was carried out in several steps. First, state-of-the-art methods for 16S rDNA-Seq count data pre-processing were identified by performing a thorough literature search, which revealed a minimal availability of specific tools and the complete lack, in the usual 16S rDNA-Seq analysis pipeline, of a pre-processing step in which the information loss due to sequencing is recovered (zero-imputation). At the same time, the literature search highlighted that no specific simulators were available to directly obtain synthetic 16S rDNA-Seq count data on which to perform the analysis needed to identify optimal pre-processing pipelines. Thus, a simulator of 16S rDNA-Seq sparse count matrices that considers the compositional nature of these data was developed. Then, a comprehensive benchmark analysis of forty-nine pre-processing pipelines was designed and performed to assess the performance of currently used and most recent pre-processing approaches and to test the appropriateness of including a zero-imputation step in the 16S rDNA-Seq analysis framework.
Overall, this thesis considers the 16S rDNA-Seq data pre-processing problem and provides a useful guide for robust data pre-processing when performing a 16S rDNA-Seq analysis. Additionally, the simulator proposed in this work could be a spur and a valuable tool for researchers involved in developing and testing bioinformatics methods, thus helping to fill the lack of specific tools for 16S rDNA-Seq data.
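The compositional effect at the heart of the thesis, a fixed sequencing depth turning true abundances into relative, zero-inflated counts, can be sketched with multinomial-style sampling. A minimal illustration, not the simulator developed in the thesis; the taxa abundances and the depth are invented.

import random

random.seed(42)
true_abundance = [1000, 500, 200, 10, 5, 1]    # six taxa, skewed abundances
taxa = range(len(true_abundance))
depth = 300                                    # fixed reads per sample

def sequence_sample(abundance, depth):
    # drawing a fixed number of reads makes the counts compositional:
    # each taxon's count depends on the abundance of all the others
    counts = [0] * len(abundance)
    for taxon in random.choices(taxa, weights=abundance, k=depth):
        counts[taxon] += 1
    return counts

for _ in range(5):
    print(sequence_sample(true_abundance, depth))   # rare taxa often hit 0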
7

NIGRI, ANNA. "Quality data assessment and improvement in pre-processing pipeline to minimize impact of spurious signals in functional magnetic imaging (fMRI)." Doctoral thesis, Politecnico di Torino, 2017. http://hdl.handle.net/11583/2911412.

Abstract:
In recent years, the field of quality data assessment and signal denoising in functional magnetic resonance imaging (fMRI) has been rapidly evolving, and the identification and reduction of spurious signals in the pre-processing pipeline is one of the most discussed topics. In particular, subject motion and physiological signals, such as respiratory and/or cardiac pulsatility, were shown to introduce false-positive activations in subsequent statistical analyses. Different measures for evaluating the impact of motion-related artefacts, such as frame-wise displacement and the root mean square of movement parameters, and different approaches for reducing these artefacts, such as linear regression of nuisance signals and scrubbing or censoring procedures, were introduced. However, we identify two main drawbacks: i) the different measures used for the evaluation of motion artefacts were based on user-dependent thresholds, and ii) each study described and applied its own pre-processing pipeline. Few studies have analysed the effect of these different pipelines on subsequent analysis methods in task-based fMRI. The first aim of the study is to obtain a tool for motion fMRI data assessment, based on auto-calibrated procedures, to detect outlier subjects and outlier volumes, targeted to each investigated sample to ensure homogeneity of data with respect to motion. The second aim is to compare the impact of different pre-processing pipelines on task-based fMRI using the GLM, based on recent advances in resting-state fMRI pre-processing pipelines. Different output measures based on signal variability and task strength were used for the assessment.
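Frame-wise displacement, one of the motion measures the thesis builds its auto-calibrated thresholds on, has a widely used definition (Power et al.): the summed absolute frame-to-frame change of the six realignment parameters, with rotations projected onto a 50 mm sphere. A minimal sketch; the motion parameters below are invented.

def framewise_displacement(params, radius=50.0):
    # params: per-volume (x, y, z) translations in mm and three rotations
    # in radians; rotations are converted to mm via the sphere radius
    fd = [0.0]
    for prev, cur in zip(params, params[1:]):
        trans = sum(abs(c - p) for p, c in zip(prev[:3], cur[:3]))
        rot = sum(abs(c - p) * radius for p, c in zip(prev[3:], cur[3:]))
        fd.append(trans + rot)
    return fd

motion = [(0.0, 0.0, 0.0, 0.0, 0.0, 0.0),
          (0.1, 0.0, 0.05, 0.001, 0.0, 0.0),
          (0.1, 0.2, 0.05, 0.001, 0.002, 0.0)]
print([round(v, 3) for v in framewise_displacement(motion)])  # [0.0, 0.2, 0.3]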
8

Torkler, Phillipp [Verfasser], and Johannes [Akademischer Betreuer] Söding. "STAMMP : A statistical model and processing pipeline for PAR-CLIP data reveals transcriptome maps of mRNP biogenesis factors / Phillipp Torkler. Betreuer: Johannes Söding." München : Universitätsbibliothek der Ludwig-Maximilians-Universität, 2015. http://d-nb.info/1072376628/34.

9

Maarouf, Marwan Younes. "XML Integrated Environment For Service-Oriented Data Management." Wright State University / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=wright1180450288.

10

Severini, Nicola. "Analysis, Development and Experimentation of a Cognitive Discovery Pipeline for the Generation of Insights from Informal Knowledge." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/21013/.

Abstract:
The purpose of this thesis project is to bring the application of Cognitive Discovery to an informal type of knowledge. Cognitive Discovery is a term coined by IBM Research to indicate a series of Information Extraction (IE) processes used to build a knowledge graph capable of representing knowledge from highly unstructured data such as text. Cognitive Discovery is typically applied to formal knowledge, i.e., documented text such as academic papers, business reports, patents, etc. Informal knowledge is provided, for example, by the recording of a conversation within a meeting or by a PowerPoint presentation, and is therefore a type of knowledge not formally defined. The idea behind the project is the same as that of the original Cognitive Discovery project: processing natural language in order to build a knowledge graph that can be interrogated in different ways. This knowledge graph has an architecture that depends on the use case, but tends to be a network of entity nodes connected to each other through semantic relationships and to nodes containing structural data such as a paragraph, an image, or a slide from a presentation. The creation of this graph requires a series of steps: a data processing pipeline that, starting from the raw data (in the prototype, the audio file of a conversation), extracts and processes features such as entities, semantic relationships between entities, and main concepts. Once the graph has been created, it is necessary to define an engine for querying and/or generating insights from the knowledge graph. In general, the graph database infrastructure also provides a language for querying the graph; however, to make the application usable even for those who lack the technical knowledge to learn the query language, a component has been defined that processes natural-language queries against the graph.
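The graph structure described, entity nodes joined by semantic relations and linked back to the structural unit they came from, can be sketched as a simple triple store. The triples and the adjacency-list representation are illustrative assumptions, not the graph database used in the thesis.

from collections import defaultdict

graph = defaultdict(list)    # subject -> list of (relation, object, source)

def add_triple(subject, relation, obj, source):
    graph[subject].append((relation, obj, source))

# invented triples, each tied to the structural unit it was extracted from
add_triple("Cognitive Discovery", "coined_by", "IBM Research", "paragraph-1")
add_triple("pipeline", "extracts", "entities", "slide-2")
add_triple("pipeline", "extracts", "semantic relationships", "slide-2")

def query(subject, relation):
    return [(obj, src) for rel, obj, src in graph[subject] if rel == relation]

print(query("pipeline", "extracts"))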

Books on the topic "Data processing pipeline"

1

Engeda, A., American Society of Mechanical Engineers Process Industries Division, and International Mechanical Engineering Congress and Exposition (2000: Orlando, Fla.), eds. Challenges and goals in industrial and pipeline compressors: Presented at the 2000 ASME International Mechanical Engineering Congress and Exposition, November 5-10, 2000, Orlando, Florida. New York, N.Y.: American Society of Mechanical Engineers, 2000.

2

Cheng shi di xia guan xian xin xi hua yan jiu yu shi jian [Research and practice on the informatization of urban underground pipelines]. Beijing: Beijing you dian da xue chu ban she [Beijing University of Posts and Telecommunications Press], 2010.

3

Szczotka, Marek. Metoda sztywnych elementów skończonych w modelowaniu nieliniowych układów w technice morskiej: The rigid finite element method in modeling of nonlinear offshore systems. Gdańsk: Wydawnictwo Politechniki Gdańskiej, 2011.

4

O'Siadhail, Micheal. Simulation and analysis of gas networks. London: E. & F.N. Spon, 1987.

5

Office, General Accounting. General Services Administration: Response to follow-up questions related to building repairs and alterations and courthouse utilization : [report to] the Honorable Bob Franks, chairman, Subcommittee on Economic Development, Public Buildings, Hazardous Materials, and Pipeline Transportation, Committee on Transportation and Infrastructure, House of Representatives. Washington, D.C: The Office, 2000.

6

Langsten, Peter, American Society of Mechanical Engineers Pressure Vessels and Piping Division, and Pressure Vessels and Piping Conference (1994: Minneapolis, Minn.), eds. Advanced computer applications, 1994: Presented at the 1994 Pressure Vessels and Piping Conference, Minneapolis, Minnesota, June 19-23, 1994. New York, N.Y.: American Society of Mechanical Engineers, 1994.

7

Karim-Panahi, K., American Society of Mechanical Engineers Pressure Vessels and Piping Division, and Pressure Vessels and Piping Conference (1997: Orlando, Fla.), eds. Advances in analytical, experimental, and computational technologies in fluids, structures, transients, and natural hazards: Presented at the 1997 ASME Pressure Vessels and Piping Conference, Orlando, Florida, July 27-31, 1997. New York, N.Y.: American Society of Mechanical Engineers, 1997.

8

Psaltis, Andrew. Streaming Data: Understanding the real-time pipeline. Manning Publications, 2017.

9

Pipeline geomatics. New York, NY: ASME, 2009.

10

Pipeline geomatics. New York, NY: ASME, 2009.


Book chapters on the topic "Data processing pipeline"

1

Bajcsy, Peter, Joe Chalfoun, and Mylene Simon. "Functionality of Web Image Processing Pipeline." In Web Microanalysis of Big Image Data, 17–40. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-63360-2_2.

2

Bajcsy, Peter, Joe Chalfoun, and Mylene Simon. "Components of Web Image Processing Pipeline." In Web Microanalysis of Big Image Data, 63–104. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-63360-2_4.

3

Fournier, Fabiana, and Inna Skarbovsky. "Real-Time Data Processing." In Big Data in Bioeconomy, 147–56. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-71069-9_11.

Abstract:
To remain competitive, organizations are increasingly taking advantage of the high volumes of data produced in real time for actionable insights and operational decision-making. In this chapter, we present basic concepts in real-time analytics, their importance in today's organizations, and their applicability to the bioeconomy domains investigated in the DataBio project. We begin by introducing key terminology for event processing and the motivation for the growing use of event processing systems, followed by a market analysis synopsis. Thereafter, we provide a high-level overview of event processing system architectures, with their main characteristics and components, followed by a survey of some of the most prominent commercial and open-source tools. We then describe how we applied this technology in two of the DataBio project domains: agriculture and fishery. The devised generic pipeline for IoT data real-time processing and decision-making was successfully applied to three pilots in the project from the agriculture and fishery domains. This event processing pipeline can be generalized to any use case in which data is collected from IoT sensors and analyzed in real time to provide alerts for operational decision-making.
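The generic IoT pattern the chapter describes, sensor readings analyzed in real time to raise operational alerts, can be sketched as a windowed threshold rule over a stream. A minimal illustration: the sensor schema, window size, and threshold are invented, and a real deployment would use an event processing engine rather than a plain Python loop.

from collections import deque

def detect(events, window=3, threshold=30.0):
    # emit an alert whenever the rolling mean of the last `window`
    # readings exceeds the threshold
    recent = deque(maxlen=window)
    for event in events:
        recent.append(event["temp"])
        if len(recent) == window and sum(recent) / window > threshold:
            yield {"sensor": event["sensor"], "alert": "temp-high",
                   "mean": round(sum(recent) / window, 2)}

stream = [{"sensor": "s1", "temp": t} for t in (28, 29, 31, 33, 35, 27)]
for alert in detect(stream):
    print(alert)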
4

Rengarajan, Krushnaa, and Vijay Krishna Menon. "Generalizing Streaming Pipeline Design for Big Data." In Machine Intelligence and Signal Processing, 149–60. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-1366-4_12.

5

Brown, David M., Adriana Soto-Corominas, Juan Luis Suárez, and Javier de la Rosa. "Overview – The Social Media Data Processing Pipeline." In The SAGE Handbook of Social Media Research Methods, 125–45. London: SAGE Publications Ltd, 2016. http://dx.doi.org/10.4135/9781473983847.n9.

6

Katti, Anantshesh, and M. Sumana. "Pipeline for Pre-processing of Audio Data." In IOT with Smart Systems, 191–98. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-3575-6_21.

7

Lepsien, Arvid, Agnes Koschmider, and Wolfgang Kratsch. "Analytics Pipeline for Process Mining on Video Data." In Lecture Notes in Business Information Processing, 196–213. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-41623-1_12.

8

Ghantasala, Saicharan, Shabarni Gupta, Vimala Ashok Mani, Vineeta Rai, Tumpa Raj Das, Panga Jaipal Reddy, and Veenita Grover Shah. "Omics: Data Processing and Analysis." In Biomarker Discovery in the Developing World: Dissecting the Pipeline for Meeting the Challenges, 19–39. New Delhi: Springer India, 2016. http://dx.doi.org/10.1007/978-81-322-2837-0_3.

9

Ashwini, Akanksha, and Jaerock Kwon. "Image Processing Pipeline for Web-Based Real-Time 3D Visualization of Teravoxel Volumes." In Data Mining and Big Data, 203–12. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-93803-5_19.

10

Li, Zhonghu, Bo Ma, Jinming Wang, Junhong Yan, and Luling Wang. "Design of Pipeline Leak Data Acquisition and Processing System." In Advances in Intelligent Systems and Computing, 355–61. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-00214-5_46.


Conference papers on the topic "Data processing pipeline"

1

Li, Liling, Tyler Danner, Jesse Eickholt, Erin McCann, Kevin Pangle, and Nicholas Johnson. "A distributed pipeline for DIDSON data processing." In 2017 IEEE International Conference on Big Data (Big Data). IEEE, 2017. http://dx.doi.org/10.1109/bigdata.2017.8258458.

2

Krismentari, Ni Kadek Bumi, I. Made Oka Widyantara, Ngurah Indra ER, I. Made Dwi Putra Asana, I. Putu Noven Hartawan, and I. Gede Sudiantara. "Data Pipeline Framework for AIS Data Processing." In 2022 Seventh International Conference on Informatics and Computing (ICIC). IEEE, 2022. http://dx.doi.org/10.1109/icic56845.2022.10006941.

3

Svyatkovskiy, A., K. Imai, M. Kroeger, and Y. Shiraito. "Large-scale text processing pipeline with Apache Spark." In 2016 IEEE International Conference on Big Data (Big Data). IEEE, 2016. http://dx.doi.org/10.1109/bigdata.2016.7841068.

4

Meyers, Bennet E., Elpiniki Apostolaki-Iosifidou, and Laura T. Schelhas. "Solar Data Tools: Automatic Solar Data Processing Pipeline." In 2020 IEEE 47th Photovoltaic Specialists Conference (PVSC). IEEE, 2020. http://dx.doi.org/10.1109/pvsc45281.2020.9300847.

5

Huang, Thomas, and Larry Preheim. "Data Processing Pipeline With Transaction-Oriented Data Sharing." In Space OPS 2004 Conference. Reston, Virginia: American Institute of Aeronautics and Astronautics, 2004. http://dx.doi.org/10.2514/6.2004-445-259.

6

Beard, Andrew, Bruce Cowan, and Andrew Ferayorni. "DKIST visible broadband imager data processing pipeline." In SPIE Astronomical Telescopes + Instrumentation, edited by Gianluca Chiozzi and Nicole M. Radziwill. SPIE, 2014. http://dx.doi.org/10.1117/12.2057122.

7

Mwebaze, Johnson, Danny Boxhoorn, and Edwin Valentijn. "Dynamic Pipeline Changes in Scientific Data Processing." In 2011 IEEE 7th International Conference on E-Science (e-Science). IEEE, 2011. http://dx.doi.org/10.1109/escience.2011.44.

8

Javed, M. Haseeb, Xiaoyi Lu, and Dhabaleswar K. (DK) Panda. "Characterization of Big Data Stream Processing Pipeline." In UCC '17: 10th International Conference on Utility and Cloud Computing. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3148055.3148068.

9

Griffin, Matt, C. Darren Dowell, Tanya Lim, George Bendo, Jamie Bock, Christophe Cara, Nieves Castro-Rodriguez, et al. "The Herschel-SPIRE photometer data processing pipeline." In SPIE Astronomical Telescopes + Instrumentation, edited by Jacobus M. Oschmann, Jr., Mattheus W. M. de Graauw, and Howard A. MacEwen. SPIE, 2008. http://dx.doi.org/10.1117/12.788431.

10

Sukumar, Sushmi Thushara, Chung-Horng Lung, and Marzia Zaman. "Knowledge Graph Generation for Unstructured Data Using Data Processing Pipeline." In 2023 IEEE 47th Annual Computers, Software, and Applications Conference (COMPSAC). IEEE, 2023. http://dx.doi.org/10.1109/compsac57700.2023.00068.


Reports on the topic "Data processing pipeline"

1

Berres, Anne Sabine, Vignesh Adhinarayanan, Terece Turton, Wu Feng, and David Honegger Rogers. A Pipeline for Large Data Processing Using Regular Sampling for Unstructured Grids. Office of Scientific and Technical Information (OSTI), May 2017. http://dx.doi.org/10.2172/1357102.

2

Chambers. PR-348-09602-R01 Determine New Design and Construction Techniques for Transportation of Ethanol. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), March 2013. http://dx.doi.org/10.55274/r0010546.

Abstract:
This report summarizes results of the research study titled "Determine New Design and Construction Techniques for Transportation of Ethanol and Ethanol/Gasoline Blends in New Pipelines" (WP #394 / DTPH56-09-T-000003). It was prepared for the United States Department of Transportation, Pipeline and Hazardous Materials Safety Administration, Office of Pipeline Safety. The technical tasks in this study included activities to characterize the impact of selected metallurgical processing and fabrication variables on ethanol stress corrosion cracking (ethanol SCC) of new pipeline steels, develop a better understanding of conditions that cause susceptibility to ethanol SCC in fuel grade ethanol (FGE) to support better monitoring and control, and develop data and insights to support industry-recognized standards and guidelines that reduce the occurrence of ethanol SCC. This research was approached through a collaboration of Honeywell Process Solutions (Honeywell), the Edison Welding Institute (EWI), and Electricore Inc. (prime contractor), with oversight and co-funding by the Pipeline Research Council International (PRCI) and Colonial Pipeline. The program's tasks were as follows: evaluation of steel microstructure effects on ethanol SCC resistance; effects of welding and residual stress; evaluation of surface treatment effects; evaluation of pipe manufacturing process effects; specification of polymeric materials for new construction; control and monitoring of oxygen uptake; internal corrosion monitoring; standardization of SCC test methods; and a roadmap for industry guidelines for safe and reliable pipeline handling of FGE.
3

Zhao, George, Grang Mei, Bulent Ayhan, Chiman Kwan, and Venu Varma. DTRS57-04-C-10053 Wave Electromagnetic Acoustic Transducer for ILI of Pipelines. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), March 2005. http://dx.doi.org/10.55274/r0012049.

Abstract:
In this project, Intelligent Automation, Incorporated (IAI) and Oak Ridge National Lab (ORNL) propose a novel and integrated approach to inspecting mechanical dents and metal loss in pipelines. It combines the state-of-the-art SH-wave Electromagnetic Acoustic Transducer (EMAT) technique with detailed numerical modeling, data collection instrumentation, and advanced signal processing and pattern classification to detect and characterize mechanical defects in underground pipeline transportation infrastructure. The technique has four components: (1) thorough guided-wave modal analysis; (2) a recently developed three-dimensional (3-D) Boundary Element Method (BEM) for selecting the best operational conditions and extracting defect features; (3) ultrasonic Shear Horizontal (SH) wave EMAT sensor design and data collection; and (4) advanced signal processing algorithms, such as a nonlinear split-spectrum filter, Principal Component Analysis (PCA), and Discriminant Analysis (DA), for signal-to-noise-ratio enhancement, crack signature extraction, and pattern classification. This technology not only addresses the problems with existing methods, i.e., detecting mechanical dents and metal loss in pipelines consistently and reliably, but is also able to determine the defect shape and size to a certain extent.
4

Ruby, Jeffrey, Richard Massaro, John Anderson, and Robert Fischer. Three-dimensional geospatial product generation from tactical sources, co-registration assessment, and considerations. Engineer Research and Development Center (U.S.), February 2023. http://dx.doi.org/10.21079/11681/46442.

Abstract:
According to Army Multi-Domain Operations (MDO) doctrine, generating timely, accurate, and exploitable geospatial products from tactical platforms is a critical capability to meet threats. The US Army Corps of Engineers, Engineer Research and Development Center, Geospatial Research Laboratory (ERDC-GRL) is carrying out 6.2 research to facilitate the creation of three-dimensional (3D) products from tactical sensors, including full-motion video, framing cameras, and sensors integrated on small Unmanned Aerial Systems (sUAS). This report describes an ERDC-GRL processing pipeline comprising custom code, open-source software, and commercial off-the-shelf (COTS) tools to geospatially rectify tactical imagery to authoritative foundation sources. Four datasets from different sensors and locations were processed against National Geospatial-Intelligence Agency-supplied foundation data. Results showed that the co-registration of tactical drone data to reference foundation varied from 0.34 m to 0.75 m, exceeding the accuracy objective of 1 m described in briefings presented to Army Futures Command (AFC) and the Assistant Secretary of the Army for Acquisition, Logistics and Technology (ASA(ALT)). A discussion summarizes the results, describes steps to address processing gaps, and considers future efforts to optimize the pipeline for the generation of geospatial data for specific end-user devices and tactical applications.
5

Weeks and Dash Weeks. L52336 Weld Design Testing and Assessment Procedures for High-strength Pipelines Curved Wide Plate Tests. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), December 2011. http://dx.doi.org/10.55274/r0010452.

Abstract:
A variety of mechanical property tests are performed in the design, construction, and maintenance phases of a pipeline. Most of the tests are performed on small-scale specimens with sizes typically in the range of a few inches to tens of inches (1 in = 25.4 mm). There are numerous test labs capable of performing most small-scale tests. These tests can be performed effectively under a variety of conditions, e.g., test temperature, strain rate, and loading configuration. More importantly, most routine small-scale tests are performed in accordance with national and international standards, ensuring the consistency of testing procedures. To confirm pipeline designs and validate material performance, it is desirable to test girth welds under realistic service conditions. Full-scale tests can incorporate certain realistic features that small-scale specimens cannot. However, these tests can be time-consuming and expensive to conduct. Very few labs can perform the tests, even with months of start-up and preparation time. There are no generally accepted, consistent test procedures among different test labs. The data acquisition and post-processing may differ from lab to lab, creating difficulties in data comparison. Full-scale tests can only be performed under selected conditions as a supplement to small-scale tests. The work described in this report focuses on the development of test procedures and instrumentation requirements for curved-wide-plate (CWP) tests. The results of this work can be used for: developing a test methodology to measure the physical response of a finite-length surface-breaking flaw to axial loads applied to a girth-welded line pipe section; determining the appropriate instrumentation to fully characterize the global stress/strain response of the CWP specimen during loading; evaluating the applicability of the test methodology at sub-ambient temperatures; and developing a standardized test procedure for CWP testing with a wide range of test parameters.
6

Leis. L51845 Database of Mechanical and Toughness Properties of Pipe. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), December 2000. http://dx.doi.org/10.55274/r0010150.

Abstract:
The lower-strength grades of steel used for transmission pipelines into the 1960s were much like those used in other steel construction in that era. These steels gained strength by traditional hardening mechanisms through chemistry changes, largely involving carbon and manganese additions. Improvement of these grades, primarily through control of ingot chemistry and steel processing, became necessary when running brittle fracture was identified as a failure mechanism in gas-transmission pipelines in the late 1950s. Eventually, this avenue to increasing strength was exhausted for pipeline applications because this approach causes increased susceptibility to hydrogen-related cracking mechanisms as strength increases. For this reason, modern steels differ significantly from their predecessors in several ways, with the transition from traditional C-Mn ferrite-pearlite steels beginning in the mid-1960s with the introduction of high-strength low-alloy (HSLA) steels. This report presents the results of projects PR-3-9606 and PR-3-9737, both of which were planned as multi-year projects. The first of these projects was initially conceived to provide a broad evaluation of the fitness-for-service of wrinkle bends, while the second was conceived to generate mechanical and fracture properties data for use in the integrity analysis of both the pipe body and weld seams in modern gas-transmission pipeline systems. As possible duplication between a joint industry project and the PRCI project became apparent, the first project was scaled back to focus on the properties of steels used in construction involving wrinkle bends. Consideration also was given to a more modern steel such as might be found in ripple bends, which are formed in the bending machines that have now become widely used. The second project likewise was reduced in scope, with a focus on only the pipe body. Because both projects ended up being centered on mechanical and fracture properties, both are presented in this combined report.
