To see the other types of publications on this topic, follow the link: Biology – Research – Data processing.

Dissertations / Theses on the topic 'Biology – Research – Data processing'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Biology – Research – Data processing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Shi, H. (Henglin). "A GQM-based open research data technology evaluation method in open research context." Master's thesis, University of Oulu, 2016. http://urn.fi/URN:NBN:fi:oulu-201605221853.

Full text
Abstract:
Open Research Data is gaining popularity, and many research units and individuals are interested in joining this trend. However, because of the variety of Open Research Data technologies, they find it difficult to select the ones that suit their specific requirements. This study therefore develops a method that helps researchers evaluate Open Research Data technologies and select appropriate ones. First, theoretical knowledge of the barriers to sharing and reusing research data is derived from a structured literature review: from 19 primary studies, 96 instances of existing barriers are identified and classified into seven categories, four of which concern research data sharing and the rest data reuse. This knowledge is an important resource for understanding researchers' requirements on Open Research Data technologies, and it is used to develop the technology evaluation method. Next, the Open Research Data Technology Evaluation Method (ORDTEM) is developed on the basis of the Goal/Question/Metric (GQM) approach and the identified sharing and reusing barriers. The GQM approach serves as the skeleton for transforming these barriers into measurable criteria, and the resulting ORDTEM consists of six GQM evaluation questions and 14 metrics that researchers can apply to Open Research Data technologies. Furthermore, to validate the GQM-based ORDTEM, a focus group study is conducted in a workshop: nine researchers who need to participate in Open Research Data activities are recruited to discuss the resulting method. Analysis of the discussion yields 16 critical opinions, which lead to eight improvements, namely one refinement of an existing metric and seven new metrics. Lastly, ORDTEM is applied to evaluate four selected Open Research Data technologies, to test whether it can be used for real-world evaluation tasks. Beyond validation, this experiment also produces material on the usage of ORDTEM that will be useful for future adopters. Besides a solution to the difficulty of selecting technologies for Open Research Data activities, the study makes two additional contributions: the catalogue of sharing and reusing barriers can direct future efforts to promote Open Research Data and Open Science, and the experience of using the GQM approach to transform existing requirements into evaluation criteria can inform the development of other requirement-specific evaluations.
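Because ORDTEM is built on the published Goal/Question/Metric paradigm, the barrier-to-metric transformation the abstract describes can be pictured with a small data structure. A minimal sketch, assuming invented goal, question, and metric names rather than the actual ORDTEM questions and 14 metrics:

```python
# Minimal sketch of a Goal/Question/Metric (GQM) hierarchy. The concrete
# goal, questions, metrics, weights, and scores are illustrative
# placeholders, not the actual ORDTEM content.
gqm = {
    "goal": "Evaluate an Open Research Data technology for a research unit",
    "questions": [
        {
            "text": "How easily can shared data be found by others?",
            "metrics": [
                {"name": "supports_persistent_identifiers", "weight": 2},
                {"name": "metadata_standard_coverage", "weight": 1},
            ],
        },
        {
            "text": "How much effort does reusing the data take?",
            "metrics": [
                {"name": "open_file_formats", "weight": 2},
                {"name": "licence_clarity", "weight": 1},
            ],
        },
    ],
}

def evaluate(gqm_tree, scores):
    """Return a weighted 0-5 score per question from per-metric scores."""
    results = {}
    for question in gqm_tree["questions"]:
        total = weight_sum = 0
        for metric in question["metrics"]:
            total += scores[metric["name"]] * metric["weight"]
            weight_sum += metric["weight"]
        results[question["text"]] = total / weight_sum
    return results

print(evaluate(gqm, {"supports_persistent_identifiers": 4,
                     "metadata_standard_coverage": 3,
                     "open_file_formats": 5,
                     "licence_clarity": 2}))
```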
APA, Harvard, Vancouver, ISO, and other styles
2

Lynch, Kevin John. "Data manipulation in collaborative research systems." Diss., The University of Arizona, 1989. http://hdl.handle.net/10150/184923.

Full text
Abstract:
This dissertation addresses data manipulation in collaborative research systems, including what data should be stored, the operations to be performed on that data, and a programming interface to effect this manipulation. Collaborative research systems are discussed, and requirements for next-generation systems are specified, incorporating a range of emerging technologies including multimedia storage and presentation, expert systems, and object-oriented database management systems. A detailed description of a generic query processor constructed specifically for one collaborative research system is given, and its applicability to next-generation systems and emerging technologies is examined. Chapter 1 discusses the Arizona Analyst Information System (AAIS), a successful collaborative research system being used at the University of Arizona and elsewhere. Chapter 2 describes the generic query processing approach used in the AAIS as an efficient, nonprocedural, high-level programmer interface to databases. Chapter 3 specifies requirements for next-generation collaborative research systems that encompass the entire research cycle for groups of individuals working on related topics over time. These requirements are being used to build a next-generation collaborative research system at the University of Arizona called CARAT, for Computer Assisted Research and Analysis Tool. Chapter 4 addresses the underlying data management systems in terms of the requirements specified in Chapter 3. Chapter 5 revisits the generic query processing approach used in the AAIS in light of the requirements of Chapter 3 and the range of data management solutions described in Chapter 4, and demonstrates that the approach is viable for both. The significance of this research takes several forms. First, Chapters 1 and 3 provide detailed views of a current collaborative research system and of a set of requirements for next-generation systems, based on years of experience both using and building the AAIS. Second, the generic query processor described in Chapters 2 and 5 is shown to be an effective, portable programming-language-to-database interface that ranges across the set of requirements for collaborative research systems as well as a number of underlying data management solutions.
APA, Harvard, Vancouver, ISO, and other styles
3

Haji, Maghsoudi Omid. "Software Development for Neuroscience, Biology, and Biomechanics Applications." Diss., Temple University Libraries, 2018. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/529331.

Full text
Abstract:
Understanding locomotion is an important focus of modern science. Our health and well-being are directly linked to our movement. Animal movement, including human movement, can explain many biological phenomena, and it impacts our ability to treat musculoskeletal injuries and neurological disorders, improve prosthetic limbs, and construct agile legged robots. Two fundamental methods used in locomotion research, especially in neuroscience, are 1) quantification of kinematics from videography, and 2) the creation of stable internal neural interfaces using metal electrodes. With the recent explosion of computer vision algorithms for gathering meaning from video, robotic tools for physical interaction, and a bevy of new genetic tools with which to manipulate the nervous system in intact, freely behaving rodents, there is a need for software that applies these advancements to movement science problems. These tools are especially important now as perturbation-based research, where internal or external perturbations are applied to a moving animal in order to better dissect the mechanisms of movement, becomes more common. To address the first need, we present Python-based software to segment and track landmarks from multiple-view, high-speed video using color and 3D information, producing and analyzing kinematics in 3D. This software produces kinematics from raw multiple-camera video and can furthermore perform joint angle analyses in 2D or 3D, a standard technique in locomotor biomechanics. The software has been evaluated using 20 animals and under different conditions (e.g., intact, spinal cord injured, and aged animals). To address the need in the area of neural interfacing, we present open source Matlab software to acquire, characterize, and model the impedance spectra of metal electrodes in solution. Requiring only Matlab and standard data acquisition hardware, the software measures the magnitude and phase of the interface and fits the most commonly used Randles model. The software was evaluated using five custom-made nerve cuffs. The Randles model parameters, including the constant phase element, were calculated and are in good agreement with the literature. Together, these tools will aid researchers in movement science and related fields.
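The simplified Randles cell named at the end of the abstract has a standard closed form: a solution resistance in series with the parallel combination of a charge-transfer resistance and a constant phase element. The NumPy sketch below illustrates that textbook model only; it is not the thesis's Matlab code, and the parameter values are invented:

```python
import numpy as np

# Simplified Randles model: solution resistance R_s in series with the
# parallel combination of charge-transfer resistance R_ct and a constant
# phase element Z_CPE = 1 / (Q * (j*omega)**n). Parameter values are
# arbitrary illustrations, not fitted nerve-cuff data.
def randles_impedance(freq_hz, R_s=100.0, R_ct=50e3, Q=1e-6, n=0.8):
    omega = 2 * np.pi * np.asarray(freq_hz)
    z_cpe = 1.0 / (Q * (1j * omega) ** n)
    z_parallel = (R_ct * z_cpe) / (R_ct + z_cpe)
    return R_s + z_parallel

freqs = np.logspace(0, 5, 50)           # 1 Hz to 100 kHz
z = randles_impedance(freqs)
magnitude = np.abs(z)                   # |Z| in ohms
phase_deg = np.degrees(np.angle(z))     # phase in degrees
print(magnitude[:3], phase_deg[:3])
```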
APA, Harvard, Vancouver, ISO, and other styles
4

Herzberg, Nico, and Mathias Weske. "Enriching raw events to enable process intelligence : research challenges." Universität Potsdam, 2013. http://opus.kobv.de/ubp/volltexte/2013/6401/.

Full text
Abstract:
Business processes are performed within a company's daily business, and valuable data about process execution is produced along the way. The quantity and quality of these data depend strongly on the process execution environment, which ranges from predominantly manual to fully automated. Process improvement is an essential cornerstone of business process management that ensures companies' competitiveness, and it relies on information about process execution. Especially in manual process environments, data directly related to process execution are sparse and incomplete. In this paper, we present an approach that supports the usage and enrichment of process execution data with context data, data that exists orthogonally to business process data, and with knowledge from the corresponding process models, to provide a high-quality event base for process intelligence, subsuming, among others, process monitoring, process analysis, and process mining. Further, we discuss open issues and challenges that are subject to our future work.
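The enrichment step, joining sparse execution events with context data that exists orthogonally to them, can be sketched in a few lines. A minimal illustration in which the event fields, context attributes, and join key are all hypothetical:

```python
# Minimal sketch of the enrichment idea: raw execution events are joined
# with context data that exists orthogonally to the process, here keyed by
# an order id. All field names are hypothetical.
raw_events = [
    {"order_id": 17, "activity": "ship goods", "timestamp": "2013-02-01T10:15"},
]
context = {
    17: {"customer_segment": "premium", "warehouse": "Potsdam"},
}

# Each event is widened with whatever context is available for its key.
enriched = [dict(event, **context.get(event["order_id"], {}))
            for event in raw_events]
print(enriched[0])
# {'order_id': 17, 'activity': 'ship goods', 'timestamp': '2013-02-01T10:15',
#  'customer_segment': 'premium', 'warehouse': 'Potsdam'}
```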
APA, Harvard, Vancouver, ISO, and other styles
5

Chan, Pui-yee, and 陳沛儀. "A study on predicting gene relationship from a computational perspective." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hub.hku.hk/bib/B30461352.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Yim, Cheuk-hon Terence, and 嚴卓漢. "Approximate string alignment and its application to ESTs, mRNAs and genome mapping." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hub.hku.hk/bib/B31455736.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Phipps, Owen Dudley. "The use of a database to improve higher order thinking skills in secondary school biology: a case study." Thesis, Rhodes University, 1994. http://hdl.handle.net/10962/d1003696.

Full text
Abstract:
The knowledge explosion of the last decade has left education in schools far behind. The emphasis in schools must change if they are to prepare students for their future lives. Tertiary institutions as well as commerce and industry need people who have well-developed cognitive skills. A further requirement is that the school leaver must have skills pertaining to information processing. The skills that are required are those which have been labelled higher order thinking skills. The work of Piaget, Thomas and Bloom has led to a better understanding of what these skills actually are. Resnick sees these skills as being: nonalgorithmic; complex; yielding multiple solutions; involving nuanced judgements; involving the application of multiple criteria; involving uncertainty; involving self-regulation of the thinking process; imposing meaning; and being effortful. How these can be taught, and the implications of doing so, are considered by the researcher. The outcome of this consideration is that higher order thinking entails communication skills, reasoning, problem solving and self management. The study takes the form of an investigation of a particular case: whether a Biology field trip could be used as a source of information, which could be handled by a computer, so that higher order thinking skills could be acquired by students. Students were instructed in the use of a database management system called PARADOX. The students then went on an excursion to a rocky shore habitat to collect data about the biotic and abiotic factors pertaining to that ecosystem. The students worked in groups, sorting data and entering it into the database. Once all the data had been entered, the students developed hypotheses and queried the database to obtain evidence to substantiate or disprove them. Whilst this was in progress, the researcher obtained data by means of observational field notes, tape recordings, evoked documents and interviews. The qualitative data were then arranged into classes to see whether they showed that the students were using any of the higher order thinking skills. The results showed that the students did use the listed higher order thinking skills whilst working on the database.
APA, Harvard, Vancouver, ISO, and other styles
8

Schabort, Willem Petrus Du Toit. "Integration of kinetic models with data from 13C-metabolic flux experiments." Thesis, Link to the online version, 2007. http://hdl.handle.net/10019/707.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

韓永楷 and Wing-kai Hon. "Distance metrics for phylogenies with non-uniform degrees." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B31222687.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Albertus, Yumna. "Critical analysis of techniques for normalising electromyographic data : from laboratory to clinical research." Doctoral thesis, University of Cape Town, 2008. http://hdl.handle.net/11427/3221.

Full text
Abstract:
Measurements of muscle activity derived from surface EMG electrodes are variable due to both intrinsic and extrinsic factors. The intrinsic factors are endogenous in nature (features within the body) and include muscle fiber type, muscle fiber diameter and length, the amount of tissue between muscle and electrode, and the depth and location of the muscle with respect to the placement of the electrodes (24). These biological factors vary between subjects and cannot be controlled. The extrinsic factors are experimental variables which are influenced by the researcher and can be controlled to some extent; examples include the location, area, orientation and shape of the electrodes, and the distance between electrodes (inter-electrode distance). In order to measure biological variation in the EMG signal, which matters in studies where surface EMG is used to gain understanding of physiological regulation, it is important to minimise the variation caused by these factors. This is in part achieved through an appropriate method of normalisation. The isometric maximal voluntary contraction (MVC) has been used as a standard method of normalisation for both static and dynamic exercises. However, researchers have recently improved on this by developing alternative techniques for the measurement of EMG during dynamic activities. By using the same type of movement for normalisation as during the trial, experimental errors can be reduced. An appropriate method of normalisation is defined as one that is repeatable, reliable (low intra-subject variation) and sensitive to changes in EMG amplitude that are due to biological change rather than to experimental factors. The aim of this thesis was to critically analyse alternative methods of EMG normalisation during dynamic exercise. The data should provide guidelines to researchers who are planning studies involving measurement of EMG activity during cycling, running and in clinical populations. Furthermore, the thesis aimed to illustrate that decisions regarding the most appropriate method of normalisation should be based on the study design, the research question (absolute muscle activity or changes in muscle pattern) and the muscles being investigated.
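Normalising to the MVC amounts to expressing a trial's EMG amplitude as a percentage of the amplitude recorded during the reference contraction. A minimal NumPy sketch, assuming RMS amplitude as the measure and synthetic signals in place of recorded EMG:

```python
import numpy as np

def rms(signal):
    """Root-mean-square amplitude of an EMG segment."""
    signal = np.asarray(signal, dtype=float)
    return np.sqrt(np.mean(signal ** 2))

def normalise_to_mvc(trial_emg, mvc_emg):
    """Express trial EMG amplitude as a percentage of MVC amplitude."""
    return 100.0 * rms(trial_emg) / rms(mvc_emg)

# Synthetic stand-ins: a dynamic trial and an isometric MVC recording.
trial = np.random.default_rng(0).normal(0.0, 0.12, 2000)   # volts
mvc = np.random.default_rng(1).normal(0.0, 0.30, 2000)
print(f"{normalise_to_mvc(trial, mvc):.1f} %MVC")
```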
APA, Harvard, Vancouver, ISO, and other styles
11

Hagedorn, Benjamin, Michael Schöbel, Matthias Uflacker, Flavius Copaciu, and Nikola Milanovic. "Proceedings of the fall 2006 workshop of the HPI research school on service-oriented systems engineering." Universität Potsdam, 2007. http://opus.kobv.de/ubp/volltexte/2009/3305/.

Full text
Abstract:
1. Design and Composition of 3D Geoinformation Services (Benjamin Hagedorn)
2. Operating System Abstractions for Service-Based Systems (Michael Schöbel)
3. A Task-oriented Approach to User-centered Design of Service-Based Enterprise Applications (Matthias Uflacker)
4. A Framework for Adaptive Transport in Service-Oriented Systems based on Performance Prediction (Flavius Copaciu)
5. Asynchronicity and Loose Coupling in Service-Oriented Architectures (Nikola Milanovic)
APA, Harvard, Vancouver, ISO, and other styles
12

Olivier, Hannelore. "Musical networks : the case for a neural network methodology in advertisement music research." Thesis, Stellenbosch : University of Stellenbosch, 2005. http://hdl.handle.net/10019.1/16618.

Full text
Abstract:
Scientists have struggled for centuries to find a significant connection between cognition, emotion and reasoning, with the result that even the most basic human cognition is still imperfectly understood. We should accept that it is unlikely that major breakthroughs in the Cognitive Sciences, Psychology, Sociology or the Medical Sciences will elucidate everything about the human brain and behaviour in the very near future. Realising this, it makes sense to turn our attention to what we do know and understand, and to reconsider the power that lies in the integration of results and in an interdisciplinary perspective on research. With the tools at our disposal today, digital tools such as ANNs that did not exist a few decades ago, this is readily viable. This thesis demonstrates that it is possible to break the traditional boundaries that have periodically prevented the Humanities and the Natural Sciences from joining forces towards a greater understanding of human beings. By using ANNs, we are able to merge data from any subfield within the Humanities and Natural Sciences in a single study. The results, interpretations and applications which could develop from such a study would be more inclusive than those derived from research conducted in one or two of these fields in isolation. Sufficient evidence is provided in this dissertation to support a methodology which employs an artificial neural network to assist with decision-making processes related to the choice of advertisement music. The main objective is to establish the feasibility of combining data from many diverse fields in the creation of an ANN that can be helpful in research regarding South African advertisement music. The thesis explores the notion that knowledge from many interdisciplinary study fields ought to play a leading role in the creation and assessment of effective, target-group-specific advertisement music. In pursuing this goal, it examines the probability of producing a computer-based tool which can assist people working in the advertising industry to obtain an educated match between product, consumer and advertisement music. Taking a multidisciplinary point of view, the author suggests a methodology for the design of a digital tool in the form of a musical network model. It is concluded that, by using this musical network, it is indeed possible to produce a functional musically-paired commercial which effectively addresses its target group and has an appropriate emotional effect in support of the marketing goals of the advertising agent. The thesis also demonstrates that it is possible to gain new insights into a fairly unstudied discipline without necessarily conducting new research studies in the specified field: by taking an interdisciplinary approach and by using ANNs, it is possible to attain new data that are scientifically valid, even in an unacknowledged field such as South African advertisement music. Although the scope of the thesis does not provide for the actual implementation of the musical network, the feasibility of the conceptual idea is thoroughly examined, and it is concluded that the theory in its entirety is definitely feasible and can be implemented in a future study.
APA, Harvard, Vancouver, ISO, and other styles
13

Kesterton, Anthony James. "The synthesis of sound with application in a MIDI environment." Thesis, Rhodes University, 1991. http://hdl.handle.net/10962/d1006701.

Full text
Abstract:
The wide range of options for experimentation with the synthesis of sound is usually expensive, difficult to obtain, or limiting for the experimenter. The work described in this thesis shows how the IBM PC and software can be combined to provide a suitable platform for experimentation with different synthesis techniques. This platform is based on the PC, the Musical Instrument Digital Interface (MIDI) and a musical instrument called a digital sampler. The fundamental concepts of sound are described, with reference to digital sound reproduction. A number of synthesis techniques are described and evaluated according to the criteria of generality, efficiency and control. The techniques discussed are additive synthesis, frequency modulation synthesis, subtractive synthesis, granular synthesis, resynthesis, wavetable synthesis, and sampling. Spiral synthesis, physical modelling, waveshaping and spectral interpolation are discussed briefly. The Musical Instrument Digital Interface is a standard method of connecting digital musical instruments together, and it is the MIDI standard and equipment conforming to it that make this implementation of synthesis techniques possible. As a demonstration of the PC platform, additive synthesis, frequency modulation synthesis, granular synthesis and spiral synthesis have been implemented in software. A PC equipped with a MIDI interface card is used to perform the synthesis, and the MIDI protocol is used to transmit the resultant sound to a digital sampler. The INMOS transputer is used as an accelerator, as the calculation of a waveform in software is a computationally intensive process. It is concluded that sound synthesis can be performed successfully using a PC and the appropriate software, utilizing the facilities provided by a MIDI environment including a digital sampler.
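Of the techniques listed, additive synthesis is the simplest to show: a sum of sinusoidal partials, each with its own amplitude. A NumPy sketch under arbitrary choices of fundamental and partial amplitudes (not the thesis's implementation):

```python
import numpy as np

def additive_synth(f0, partial_amps, duration=1.0, rate=44100):
    """Sum harmonically related sinusoids: partial k has frequency k*f0."""
    t = np.arange(int(duration * rate)) / rate
    wave = sum(a * np.sin(2 * np.pi * k * f0 * t)
               for k, a in enumerate(partial_amps, start=1))
    return wave / np.max(np.abs(wave))   # normalise to [-1, 1]

# A 220 Hz tone with four partials of decreasing amplitude (arbitrary).
samples = additive_synth(220.0, [1.0, 0.5, 0.33, 0.25])
print(samples.shape)   # (44100,): one second of audio samples
```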
APA, Harvard, Vancouver, ISO, and other styles
14

Kishore, Annapoorni. "AN INTERNSHIP WITH ENVIRONMENTAL SYSTEMS RESEARCH INSTITUTE." Miami University / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=miami1209153230.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Suwarno, Neihl Omar 1963. "A computer based data acquisition and analysis system for a cardiovascular research laboratory." Thesis, The University of Arizona, 1989. http://hdl.handle.net/10150/558111.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Pafilis, Evangelos. "Web-based named entity recognition and data integration to accelerate molecular biology research." [S.l. : s.n.], 2008. http://nbn-resolving.de/urn:nbn:de:bsz:16-opus-89706.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Leung, Shuen-yi, and 梁舜頤. "Predicting metabolic pathways from metabolic networks." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2009. http://hub.hku.hk/bib/B42664317.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Kogelnik, Andreas Matthias. "Biological information management with application to human genome data." Diss., Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/15923.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Schobel, Seth Adam Micah. "The viral genomics revolution| Big data approaches to basic viral research, surveillance, and vaccine development." Thesis, University of Maryland, College Park, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=10011480.

Full text
Abstract:
Since the decoding of the first RNA virus in 1976, the field of viral genomics has exploded, first through the use of Sanger sequencing technologies and later with the use of next-generation sequencing approaches. With the development of these sequencing technologies, viral genomics has entered an era of big data, and new challenges for analyzing these data are now apparent. Here, we describe novel methods to extend the current capabilities of viral comparative genomics. Through the use of antigenic distancing techniques, we have examined the relationship between the antigenic phenotype and the genetic content of influenza virus to establish a more systematic approach to viral surveillance and vaccine selection. Distancing of Antigenicity by Sequence-based Hierarchical Clustering (DASH) was developed and used to perform a retrospective analysis of 22 influenza seasons. Our methods produced vaccine candidates identical to, or with a high concordance of antigenic similarity with, those selected by the WHO. In a second effort, we have developed VirComp and OrionPlot, two independent yet related tools. These tools first generate gene-based genome constellations, or genotypes, of viral genomes, and second create visualizations of the resultant genome constellations. VirComp utilizes sequence-clustering techniques to infer genome constellations and prepares genome constellation data matrices for visualization with OrionPlot. OrionPlot is a Java application for tailoring genome constellation figures for publication: it allows color selection for gene cluster assignments, customized box sizes to enable the visualization of gene comparisons based on sequence length, and label coloring. We have provided five analyses designed as vignettes to illustrate the utility of our tools for performing viral comparative genomic analyses. A third study focused on the analysis of respiratory syncytial virus (RSV) genomes circulating during the 2012-2013 RSV season. We discovered a correlation between a recent tandem duplication within the G gene of RSV-A and a decrease in severity of infection; our data suggest that this duplication is associated with a higher infection rate in female infants than is generally observed. Through these studies, we have extended the state of the art of genotype analysis and phenotype/genotype studies, and established correlations between clinical metadata and RSV sequence data.
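DASH's core step, hierarchical clustering of strains by pairwise distance, can be pictured with SciPy's standard routines. The toy distance matrix below merely stands in for the sequence-based antigenic distances the dissertation computes:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Toy symmetric distance matrix among four strains; the values are
# invented stand-ins for sequence-based antigenic distances.
strains = ["A/2009", "A/2010", "A/2011", "A/2012"]
dist = np.array([[0.0, 0.2, 0.9, 1.0],
                 [0.2, 0.0, 0.8, 0.9],
                 [0.9, 0.8, 0.0, 0.3],
                 [1.0, 0.9, 0.3, 0.0]])

# Average-linkage hierarchical clustering on the condensed distance vector,
# then cut the tree at a distance threshold to get antigenic groups.
tree = linkage(squareform(dist), method="average")
clusters = fcluster(tree, t=0.5, criterion="distance")
print(dict(zip(strains, clusters)))   # two groups: {2009, 2010} and {2011, 2012}
```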
APA, Harvard, Vancouver, ISO, and other styles
20

Boyle, John K. "Performance Metrics for Depth-based Signal Separation Using Deep Vertical Line Arrays." PDXScholar, 2015. https://pdxscholar.library.pdx.edu/open_access_etds/2198.

Full text
Abstract:
Vertical line arrays (VLAs) deployed below the critical depth in the deep ocean can exploit reliable acoustic path (RAP) propagation, which provides low transmission loss (TL) for targets at moderate ranges and increased TL for distant interferers. However, sound from nearby surface interferers also undergoes RAP propagation, and without horizontal aperture, a VLA cannot separate these interferers from submerged targets. A recent publication by McCargar and Zurk (2013) addressed this issue, presenting a transform-based method for passive, depth-based separation of signals received on deep VLAs, based on the depth-dependent modulation caused by the interference between the direct and surface-reflected acoustic arrivals. This thesis expands on that work by quantifying the performance of the transform-based depth estimation method in terms of the resolution and ambiguity of the depth estimate. The depth discrimination performance is then quantified in terms of the number of VLA elements.
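The depth-dependent modulation exploited here is the classic Lloyd's-mirror interference between the direct arrival and its sign-inverted surface reflection: the received magnitude oscillates across frequency at a rate that grows with source depth. A rough sketch of that background physics under an isovelocity, point-source assumption with invented geometry (this is not the thesis's transform code):

```python
import numpy as np

# Lloyd's mirror sketch: direct path plus a sign-inverted surface
# reflection, modelled as an image source above the surface.
c = 1500.0                     # sound speed, m/s
r = 5000.0                     # horizontal range, m
z_src, z_rcv = 10.0, 4000.0    # source and receiver depths, m (invented)

freqs = np.linspace(50, 500, 1000)        # Hz
k = 2 * np.pi * freqs / c                 # acoustic wavenumber
r_direct = np.hypot(r, z_rcv - z_src)
r_reflect = np.hypot(r, z_rcv + z_src)    # image-source path length

p = (np.exp(1j * k * r_direct) / r_direct
     - np.exp(1j * k * r_reflect) / r_reflect)
# |p| oscillates across frequency; the oscillation rate depends on source
# depth, which is the quantity the transform-based method inverts for.
print(np.abs(p)[:5])
```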
APA, Harvard, Vancouver, ISO, and other styles
21

Liu, Yang. "Data mining methods for single nucleotide polymorphisms analysis in computational biology." HKBU Institutional Repository, 2011. http://repository.hkbu.edu.hk/etd_ra/1287.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Boying, Lu, Zhang Jun, Nie Shuhui, and Huang Xinjian. "AUTOMATIC DEPENDENT SURVEILLANCE (ADS) SYSTEM RESEARCH AND DEVELOPMENT." International Foundation for Telemetering, 2002. http://hdl.handle.net/10150/607495.

Full text
Abstract:
International Telemetering Conference Proceedings / October 21, 2002 / Town & Country Hotel and Conference Center, San Diego, California. This paper presents the basic concept, construction principle and implementation work for the Automatic Dependent Surveillance (ADS) system. As part of the ADS system, the ADS message processing system based on a PC computer is given particular attention. Furthermore, the paper reviews the status of ADS trials and points out that ADS implementation will bring tremendous economic and social benefits.
APA, Harvard, Vancouver, ISO, and other styles
23

Jungfer, Kim Michael. "Semi automatic generation of CORBA interfaces for databases in molecular biology." Thesis, University College London (University of London), 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.272561.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Lau, Anthony Kwok. "A digital oscilloscope and spectrum analyzer for analysis of primate vocalizations : master's research project report." Scholarly Commons, 1989. https://scholarlycommons.pacific.edu/uop_etds/2177.

Full text
Abstract:
The major objective of this report is to present information regarding the design, construction, and testing of the Digital Oscilloscope Peripheral, which allows the IBM Personal Computer (IBM PC) to be used as both a digital oscilloscope and a spectrum analyzer. The design and development of both hardware and software are described briefly; the test results, however, are analyzed and discussed in detail. All documents, including the circuit diagrams, program flowcharts and listings, and the user manual, are provided in the appendices for reference. Several different products are referred to in this report: IBM, XT, AT, and PS/2 are registered trademarks of International Business Machines Corporation; MS-DOS is a registered trademark of Microsoft Corporation; and Turbo Basic is a registered trademark of Borland International, Inc.
APA, Harvard, Vancouver, ISO, and other styles
25

Olivier, Brett Gareth. "Simulation and database software for computational systems biology : PySCes and JWS Online." Thesis, Stellenbosch : Stellenbosch University, 2005. http://hdl.handle.net/10019.1/50449.

Full text
Abstract:
Since their inception, biology and biochemistry have been spectacularly successful in characterising the living cell and its components. As the volume of information about cellular components continues to increase, we need to ask how we should use this information to understand the functioning of the living cell. Computational systems biology answers this question with an integrative approach that combines theoretical exploration, computer modelling and experimental research. Central to this approach is the development of computational models, new modelling strategies and computational tools. Against this background, this study aims to: (i) develop a new modelling package, PySCeS; (ii) use PySCeS to study discontinuous behaviour in a metabolic pathway in a way that was very difficult, if not impossible, with existing software; and (iii) develop an interactive, web-based repository of cellular system models, JWS Online. Three principles that, in our opinion, should form the basis of any new modelling software were laid down: accessibility (there should be as few barriers as possible to PySCeS use and distribution), flexibility (PySCeS should be extendable by the user, not only the developers) and usability (PySCeS should provide the tools we need for our research). After evaluating various alternatives we decided to base PySCeS on the freely available programming language Python, which, in combination with the large collection of science and engineering algorithms in the SciPy libraries, gives us a powerful, modern, interactive development environment. PySCeS was developed to run under both Windows and Linux and, more specifically, to be used from a command line interface; it therefore works in any interactive terminal that supports Python, and this also makes it possible to use PySCeS as a modelling component in a larger software package on any operating system that supports Python. PySCeS has a modular design, which allows the end user to develop the software's source code further. As an application, PySCeS was used to investigate the cause of hysteretic behaviour in a linear, end-product-inhibited metabolic pathway. We had discovered this interesting behaviour in a previous study but could not pursue it with the software available to us at the time. With PySCeS's built-in ability to perform parameter continuation, we could fully characterise the causes of this discontinuous behaviour, and we developed a new method of visualising this behaviour as an interaction between the subcomponents of the complete system. During PySCeS's development we noticed how difficult it is to rebuild and study metabolic models published in the literature, largely because no central database of metabolic models exists (as it does for genomic data or protein structures). The JWS Online database was developed specifically to fill this gap: it makes it possible for the user, via the internet and without installing any specialised modelling software, to study published models and to download them for use with other modelling packages such as PySCeS. JWS Online has already become an indispensable resource for systems biology research and education.
APA, Harvard, Vancouver, ISO, and other styles
26

Xiang, Lu, and 项路. "Finding phenotype related pathways via biological networks comparison." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2011. http://hub.hku.hk/bib/B4715262X.

Full text
Abstract:
Why some species (or strains of a species) exhibit certain phenotypes (e.g. aerobic, anaerobic, pathogenic) while others do not is an important question. Apart from conventional genomic study, comparing the metabolism of two groups of species may reveal pathways that are conserved in one group but not in the other. However, only a few tools provide functions to compare two groups of metabolic networks, and they are usually limited to the reaction level rather than the pathway level. In this dissertation, the DMP (Differentiating Metabolic Pathway) problem is formulated: given two groups of metabolic networks, find conserved pathways that exist in the first group but not in the second. The formulation also captures mutations in similar pathways and derives a measure (p-value and e-score) for evaluating the significance of the pathways found. An algorithm, DMPFinder, was developed to solve the DMP problem. Experimental results show that DMPFinder is able to identify pathways that are critical for the first group to exhibit a certain phenotype that is absent in the other group. Some of these pathways cannot be identified by other tools, which consider only the reaction level or do not take possible mutations among species into account.
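Before accounting for mutations and significance scoring, the set-level core of the DMP problem can be sketched as set operations over reaction identifiers. A toy illustration with invented reaction names (DMPFinder itself works at the pathway level and handles mutations, which this sketch omits):

```python
# Toy sketch: reactions conserved across every network in group A but
# absent from every network in group B. Reaction identifiers are invented.
group_a = [{"R1", "R2", "R3", "R7"}, {"R1", "R2", "R3"}]   # e.g. aerobic strains
group_b = [{"R1", "R5"}, {"R1", "R6"}]                     # e.g. anaerobic strains

conserved_in_a = set.intersection(*group_a)   # present in all of group A
anywhere_in_b = set.union(*group_b)           # present in any of group B
candidates = conserved_in_a - anywhere_in_b
print(candidates)   # {'R2', 'R3'}: candidate phenotype-related reactions
```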
APA, Harvard, Vancouver, ISO, and other styles
27

Zhu, Xinjie, and 朱信杰. "START : a parallel signal track analytical research tool for flexible and efficient analysis of genomic data." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2015. http://hdl.handle.net/10722/211136.

Full text
Abstract:
Signal Track Analytical Research Tool (START) is a parallel system for analyzing large-scale genomic data. Currently, genomic data analyses are usually performed using custom scripts developed by individual research groups and/or through the integrated use of multiple existing tools (such as BEDTools and Galaxy). The goals of START are 1) to provide a single tool that supports the wide spectrum of genomic data analyses commonly performed by analysts, and 2) to greatly simplify these analysis tasks by means of a simple declarative language (STQL) with which users need only specify what they want to do, rather than the detailed computational steps of how the analysis should be performed. START consists of four major components: 1) a declarative language called Signal Track Query Language (STQL), a SQL-like language specifically designed to suit the needs of analyzing genomic signal tracks; 2) a STQL processing system built on top of a large-scale distributed architecture, based on Hadoop distributed storage and the MapReduce Big Data processing framework, which processes each user query using multiple machines in parallel; 3) a simple and user-friendly web site that helps users construct and execute queries, upload and download compressed data files in various formats, manage stored data, queries and analysis results, and share queries with other users, and that also provides a complete help system, the detailed specification of STQL, and a large number of sample queries for learning STQL and trying START easily (private files and queries are not accessible by other users); and 4) a repository of public data popularly used for large-scale genomic data analysis, including data from ENCODE and Roadmap Epigenomics, that users can use in their analyses.
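STQL's actual syntax is not reproduced here, but the flavour of declarative signal-track analysis can be imitated with ordinary SQL over interval tables; the schema, data, and query below are illustrative only and are not STQL:

```python
import sqlite3

# Illustration of declarative interval analysis in plain SQL; STQL's real
# syntax and semantics differ. Schema and data are invented.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE peaks (chrom TEXT, start INT, stop INT, signal REAL)")
db.execute("CREATE TABLE genes (chrom TEXT, start INT, stop INT, name TEXT)")
db.executemany("INSERT INTO peaks VALUES (?,?,?,?)",
               [("chr1", 100, 200, 8.5), ("chr1", 900, 950, 3.1)])
db.executemany("INSERT INTO genes VALUES (?,?,?,?)",
               [("chr1", 150, 500, "GENE_A"), ("chr1", 2000, 3000, "GENE_B")])

# "Which genes overlap a strong peak?" -- the analyst states what is
# wanted, not the computational steps for finding it.
rows = db.execute("""
    SELECT g.name, p.signal
    FROM genes AS g JOIN peaks AS p
      ON g.chrom = p.chrom AND g.start < p.stop AND p.start < g.stop
    WHERE p.signal > 5.0
""").fetchall()
print(rows)   # [('GENE_A', 8.5)]
```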
APA, Harvard, Vancouver, ISO, and other styles
28

Boardman, Anelda Philine. "Assessment of genome visualization tools relevant to HIV genome research: development of a genome browser prototype." Thesis, University of the Western Cape, 2004. http://etd.uwc.ac.za/index.php?module=etd&action=viewtitle&id=gen8Srv25Nme4_3632_1185446929.

Full text
Abstract:
Over the past two decades of HIV research, effective vaccine candidates have been elusive. Traditionally, viral research has been characterized by a gene-by-gene approach, but in the light of the availability of complete genome sequences and the tractable size of the HIV genome, a genomic approach may improve insight into the biology and epidemiology of this virus. A genomic approach to finding HIV vaccine candidates can be facilitated by the use of genome sequence visualization. Genome browsers have been used extensively by various groups to shed light on the biology and evolution of several organisms, including human, mouse, rat, Drosophila and C. elegans. Applying a genome browser to HIV genomes and related annotations can yield insight into the forces that drive evolution, identify highly conserved regions as well as regions that yield a strong immune response in patients, and track mutations that appear over the course of infection. Access to graphical representations of such information is bound to support the search for effective HIV vaccine candidates. This study aimed to answer the question of whether a tool or application exists that can be modified to serve as a platform for the development of an HIV visualization application, and to assess the viability of such an implementation. Existing applications can only be assessed for their suitability as a basis for an HIV genome browser once a well-defined set of assessment criteria has been compiled.
APA, Harvard, Vancouver, ISO, and other styles
29

Zeng, Shuai, and 曾帥. "Predicting functional impact of nonsynonymous mutations by quantifying conservation information and detect indels using split-read approach." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2014. http://hdl.handle.net/10722/198818.

Full text
Abstract:
Rapidly developing sequencing technology has given scientists the opportunity to look into the detailed genotype information in the human genome, and computational programs have played important roles in identifying disease-related genomic variants from huge amounts of sequencing data. In the past years, a number of computational algorithms have been developed that solve crucial problems in sequencing data analysis, such as mapping sequencing reads to the genome and identifying SNPs. However, many difficult and important issues still await satisfactory solutions. A key challenge is identifying disease-related mutations against the background of non-pathogenic polymorphisms. Another crucial problem is detecting INDELs, especially long deletions, within the technical limitations of second-generation sequencing. To predict disease-related mutations, we developed a machine-learning-based (random forest) prediction tool, EFIN (Evaluation of Functional Impact of Nonsynonymous mutations). We build a multiple sequence alignment (MSA) for a query protein with its homologous sequences, and the MSA is then divided into blocks according to the taxonomic information of the sequences. After that, we quantify the conservation in each block using a number of selected features, for example entropy, a concept borrowed from information theory. EFIN was trained on the Swiss-Prot and HumDiv datasets. In a series of fair comparisons, EFIN showed better results than widely used algorithms in terms of AUC (area under the ROC curve), accuracy, specificity and sensitivity. A web-based service is provided to worldwide users at paed.hku.hk/efin. To solve the second problem, we developed Linux-based software, SPLindel, which detects deletions (especially long deletions) and insertions from second-generation sequencing data. For each sample, SPLindel uses a split-read method to detect candidate INDELs, building alternative references to go along with the reference sequence. We then remap all the relevant reads using both the original reference and the alternative-allele references. A Bayesian model integrating paired-end information is used to assign each read to its most likely location, on either the original reference allele or the alternative allele. Finally, we count the reads that support the alternative allele (with an insertion or deletion relative to the original reference allele) and the original allele, and fit a beta-binomial mixture model. Based on this model, the likelihood of each INDEL is calculated and the genotype is predicted. SPLindel runs at about the same speed as GATK but much faster than DINDEL. SPLindel obtained very similar results to GATK and DINDEL for INDELs of size 1-15 bp, but is much more effective in detecting INDELs of larger size. Using machine learning and statistical modelling, we thus propose tools that address two important problems in sequencing data analysis. This work will help identify novel damaging nsSNPs more accurately and efficiently, and it equips researchers with a more powerful tool for identifying INDELs, especially long deletions. As more and more sequencing data are generated, the methods and tools introduced in this thesis may help extract useful information that facilitates the identification of mutations causal to human diseases.
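Among the conservation features EFIN quantifies is entropy of alignment positions: a low-entropy column is highly conserved, so a nonsynonymous change there is more likely damaging. A minimal sketch of Shannon entropy for single MSA columns, with invented columns and none of EFIN's taxonomic block weighting:

```python
import math
from collections import Counter

def column_entropy(column):
    """Shannon entropy (bits) of one multiple-sequence-alignment column."""
    counts = Counter(column)
    total = len(column)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# Invented MSA columns: one perfectly conserved, one highly variable.
conserved = "LLLLLLLL"
variable = "LIVMFLAT"
print(column_entropy(conserved))   # 0.0  -> mutation here likely damaging
print(column_entropy(variable))    # high -> position tolerates change
```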
APA, Harvard, Vancouver, ISO, and other styles
30

Archer, Emory Scott. "Development of graphical software tools for molecular biology." Thesis, Hong Kong : University of Hong Kong, 1997. http://sunzi.lib.hku.hk/hkuto/record.jsp?B19974218.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Coursey, William C. "A research experiment to evaluate the acceptability of microcomputer software documentation." Thesis, Virginia Polytechnic Institute and State University, 1985. http://hdl.handle.net/10919/63969.

Full text
Abstract:
Microcomputer software users require varying degrees of instructional assistance to effectively operate the software they purchase. Chapter I recognizes that this demand for quality documentation places a burden upon software suppliers to expend additional time, energy, and money to satisfy users. This research recommends a set of procedural guidelines for microcomputer software suppliers to follow as a means of supplementing basic documentation techniques.

Literature regarding microcomputer software documentation is in increasing demand in today's technical marketplace. The literature review, Chapter II, reveals that the most significant improvement in the documentation process has been the development of two specific reference standards: physical layout and instructional components.

Chapter III describes the research experiment used to obtain information regarding the documentation associated with two current microcomputer word processing programs. Four university students provided background information regarding the personal characteristics and attributes associated with a given user population.

The research experiment evolved from a comprehensive documentation review to a structured data collection process. Chapter IV indicates that the discrepancy between actual and expected research gains justifies improving data collection techniques and recommending specific procedural guidelines for future documentation reviews.

The final chapter provides a detailed analysis of the research experiment and conclusions related to the documentation's effectiveness. Additionally, it proposes procedural guidelines designed to improve the experiment's data collection techniques. These guidelines can help future documentation writers more accurately gauge user capabilities and limitations.
APA, Harvard, Vancouver, ISO, and other styles
32

Danks, Jacob R. "Algorithm Optimizations in Genomic Analysis Using Entropic Dissection." Thesis, University of North Texas, 2015. https://digital.library.unt.edu/ark:/67531/metadc804921/.

Full text
Abstract:
In recent years, the collection of genomic data has skyrocketed, and databases of genomic data are growing at a faster rate than ever before. Although many computational methods have been developed to interpret these data, they tend to struggle with the ever-increasing file sizes being produced, and they fail to take advantage of the advances in multi-core processors through parallel processing. In some instances, loss of accuracy has been a necessary trade-off to allow faster computation. This thesis discusses one such algorithm and the changes that were made to allow larger input file sizes and to reduce the time required to achieve a result, without sacrificing accuracy. An information-entropy-based algorithm was used as the basis for demonstrating these techniques. The algorithm dissects the distinctive patterns underlying genomic data efficiently, requiring no a priori knowledge, and thus is applicable in a variety of biological research applications. This research describes how parallel processing and object-oriented programming techniques were used to process larger files in less time and achieve a more accurate result. Through object-oriented techniques, the maximum allowable input file size was increased significantly, from 200 MB to 2,000 MB. Parallel processing allowed the program to finish in less than half the time of the sequential version. The accuracy of the algorithm was improved by reducing data loss throughout, and user-friendly options were added so that requests could be handled more effectively and the logic used within the algorithm could be further customized.
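The parallel-processing change described, splitting a large input across cores instead of scanning it sequentially, looks roughly like this with Python's standard library. The chunking scheme and the per-chunk statistic are hypothetical stand-ins, not the thesis's algorithm:

```python
from multiprocessing import Pool

def gc_fraction(chunk):
    """Per-chunk statistic; a stand-in for the thesis's entropy computation."""
    chunk = chunk.upper()
    return sum(chunk.count(base) for base in "GC") / max(len(chunk), 1)

def parallel_stat(sequence, workers=4):
    """Split one long sequence into chunks and process them in parallel."""
    size = max(len(sequence) // workers, 1)
    chunks = [sequence[i:i + size] for i in range(0, len(sequence), size)]
    with Pool(workers) as pool:
        per_chunk = pool.map(gc_fraction, chunks)
    return sum(per_chunk) / len(per_chunk)

if __name__ == "__main__":   # guard required by multiprocessing on some OSes
    print(parallel_stat("ACGT" * 1_000_000))   # 0.5
```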
APA, Harvard, Vancouver, ISO, and other styles
33

Hsu, Ming-Hsuan. "MICROPROCESSOR-COMPATIBLE NEURAL SIGNAL PROCESSING FOR AN IMPLANTABLE NEURODYNAMIC SENSOR." Case Western Reserve University School of Graduate Studies / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=case1244237706.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Robinson, Jeffrey Brett, University of Western Sydney, of Science Technology and Environment College, and School of Environment and Agriculture. "Understanding and applying decision support systems in Australian farming systems research." THESIS_CSTE_EAG_Robinson_J.xml, 2005. http://handle.uws.edu.au:8081/1959.7/642.

Full text
Abstract:
Decision support systems (DSS) are usually based on computerised models of biophysical and economic systems. Despite early expectations that such models would inform and improve management, adoption rates have been low, and implementation of DSS is now "critical". The reasons for this are unclear, and the aim of this study is to learn to better design, develop and apply DSS in farming systems research (FSR). Previous studies have explored the merits of quantitative tools, including DSS, and suggested changes leading to greater impact. In Australia, the changes advocated have been: simple, flexible, low-cost economic tools; emphasis on farmer learning through soft systems approaches; understanding the socio-cultural contexts of using and developing DSS; farmer and researcher co-learning from simulation modelling; and increasing user participation in DSS design and implementation. Twenty-four simple criteria were distilled from these studies, and their usefulness in guiding the development and application of DSS was assessed in six FSR case studies. The case studies were also used to better understand farmer learning through models of decision making and learning. To make DSS useful complements to farmers' existing decision-making repertoires, they should be based on: (i) a decision-oriented development process, (ii) identifying a motivated and committed audience, (iii) a thorough understanding of the decision-maker's context, (iv) using learning as the yardstick of success, and (v) understanding the contrasts, contradictions and conflicts between researcher and farmer decision cultures.
APA, Harvard, Vancouver, ISO, and other styles
35

Gregory, Michael W. (Michael Walter). "Interrelational Laboratory Information System for Data Storage and Retrieval." Thesis, University of North Texas, 1989. https://digital.library.unt.edu/ark:/67531/metadc935708/.

Full text
Abstract:
The necessity for a functional, user-friendly laboratory data management program has become evident as the quantity of information required for modern scientific research has increased to titanic proportions. The required union of strong computer power, ease of operation, and adaptability has until recently been outside the realm of most research laboratories. Previous systems, in addition to their high cost, are necessarily complex and require software experts in order to effect any changes that the end user might deem necessary. This study examines the Apple Macintosh computer program HyperCard as an interactive laboratory information system that is user-friendly, cost-effective, and adaptable to the changing demands within a modern molecular or microbiology laboratory.
APA, Harvard, Vancouver, ISO, and other styles
36

Todes, M. A. "Evaluation parameters for computer aided design of irrigation systems." Doctoral thesis, University of Cape Town, 1987. http://hdl.handle.net/11427/21140.

Full text
Abstract:
The research has entailed the formulation and coding of computer models for the design of pressurized irrigation systems. Particular emphasis has been given to the provision of routines for the evaluation of the expected performance from a designed system. Two separate sets of models have been developed, one for the block or in-field system and one for the mainline network. The thesis is presented in three sections, as follows: * Basic theory, in which the general background to the research is covered. * The models, which includes detailed descriptions of both the design models and the computer programs. * Applications, in which several test cases of both sets of models are reported.
APA, Harvard, Vancouver, ISO, and other styles
37

Flöter, André. "Analyzing biological expression data based on decision tree induction." Phd thesis, Universität Potsdam, 2005. http://opus.kobv.de/ubp/volltexte/2006/641/.

Full text
Abstract:
<P>Modern biological analysis techniques supply scientists with various forms of data. One category of such data are the so called "expression data". These data indicate the quantities of biochemical compounds present in tissue samples.</P> <P>Recently, expression data can be generated at a high speed. This leads in turn to amounts of data no longer analysable by classical statistical techniques. Systems biology is the new field that focuses on the modelling of this information.</P> <P>At present, various methods are used for this purpose. One superordinate class of these meth­ods is machine learning. Methods of this kind had, until recently, predominantly been used for classification and prediction tasks. This neglected a powerful secondary benefit: the ability to induce interpretable models.</P> <P>Obtaining such models from data has become a key issue within Systems biology. Numerous approaches have been proposed and intensively discussed. This thesis focuses on the examination and exploitation of one basic technique: decision trees.</P> <P>The concept of comparing sets of decision trees is developed. This method offers the pos­sibility of identifying significant thresholds in continuous or discrete valued attributes through their corresponding set of decision trees. Finding significant thresholds in attributes is a means of identifying states in living organisms. Knowing about states is an invaluable clue to the un­derstanding of dynamic processes in organisms. Applied to metabolite concentration data, the proposed method was able to identify states which were not found with conventional techniques for threshold extraction.</P> <P>A second approach exploits the structure of sets of decision trees for the discovery of com­binatorial dependencies between attributes. Previous work on this issue has focused either on expensive computational methods or the interpretation of single decision trees ­ a very limited exploitation of the data. This has led to incomplete or unstable results. That is why a new method is developed that uses sets of decision trees to overcome these limitations.</P> <P>Both the introduced methods are available as software tools. They can be applied consecu­tively or separately. That way they make up a package of analytical tools that usefully supplement existing methods.</P> <P>By means of these tools, the newly introduced methods were able to confirm existing knowl­edge and to suggest interesting and new relationships between metabolites.</P><br><P>Neuere biologische Analysetechniken liefern Forschern verschiedenste Arten von Daten. Eine Art dieser Daten sind die so genannten "Expressionsdaten". Sie geben die Konzentrationen biochemischer Inhaltsstoffe in Gewebeproben an.<P> <P>Neuerdings können Expressionsdaten sehr schnell erzeugt werden. Das führt wiederum zu so großen Datenmengen, dass sie nicht mehr mit klassischen statistischen Verfahren analysiert werden können. "System biology" ist eine neue Disziplin, die sich mit der Modellierung solcher Information befasst.</P> <P>Zur Zeit werden dazu verschiedenste Methoden benutzt. Eine Superklasse dieser Methoden ist das maschinelle Lernen. Dieses wurde bis vor kurzem ausschließlich zum Klassifizieren und zum Vorhersagen genutzt. Dabei wurde eine wichtige zweite Eigenschaft vernachlässigt, nämlich die Möglichkeit zum Erlernen von interpretierbaren Modellen.</P> <P>Die Erstellung solcher Modelle hat mittlerweile eine Schlüsselrolle in der "Systems biology" erlangt. 
Es sind bereits zahlreiche Methoden dazu vorgeschlagen und diskutiert worden. Die vorliegende Arbeit befasst sich mit der Untersuchung und Nutzung einer ganz grundlegenden Technik: den Entscheidungsbäumen.</P> <P>Zunächst wird ein Konzept zum Vergleich von Baummengen entwickelt, welches das Erkennen bedeutsamer Schwellwerte in reellwertigen Daten anhand ihrer zugehörigen Entscheidungswälder ermöglicht. Das Erkennen solcher Schwellwerte dient dem Verständnis von dynamischen Abläufen in lebenden Organismen. Bei der Anwendung dieser Technik auf metabolische Konzentrationsdaten wurden bereits Zustände erkannt, die nicht mit herkömmlichen Techniken entdeckt werden konnten.</P> <P>Ein zweiter Ansatz befasst sich mit der Auswertung der Struktur von Entscheidungswäldern zur Entdeckung von kombinatorischen Abhängigkeiten zwischen Attributen. Bisherige Arbeiten hierzu befassten sich vornehmlich mit rechenintensiven Verfahren oder mit einzelnen Entscheidungsbäumen, eine sehr eingeschränkte Ausbeutung der Daten. Das führte dann entweder zu unvollständigen oder instabilen Ergebnissen. Darum wird hier eine Methode entwickelt, die Mengen von Entscheidungsbäumen nutzt, um diese Beschränkungen zu überwinden.</P> <P>Beide vorgestellten Verfahren gibt es als Werkzeuge für den Computer, die entweder hintereinander oder einzeln verwendet werden können. Auf diese Weise stellen sie eine sinnvolle Ergänzung zu vorhandenen Analyswerkzeugen dar.</P> <P>Mit Hilfe der bereitgestellten Software war es möglich, bekanntes Wissen zu bestätigen und interessante neue Zusammenhänge im Stoffwechsel von Pflanzen aufzuzeigen.</P>
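As a rough illustration of the core idea, reading split thresholds off a set of decision trees, the following sketch uses scikit-learn rather than the thesis's own software; collect_thresholds and the toy data are assumptions for illustration only.

```python
# Illustrative sketch (not the thesis software): train a set of decision
# trees on resampled data and collect the split thresholds they agree on.
# Recurring thresholds for an attribute hint at meaningful "states".
import numpy as np
from collections import defaultdict
from sklearn.tree import DecisionTreeClassifier
from sklearn.utils import resample

def collect_thresholds(X, y, n_trees=50, random_state=0):
    rng = np.random.RandomState(random_state)
    thresholds = defaultdict(list)  # feature index -> list of split values
    for _ in range(n_trees):
        Xb, yb = resample(X, y, random_state=rng)
        tree = DecisionTreeClassifier(max_depth=3).fit(Xb, yb)
        t = tree.tree_
        for node in range(t.node_count):
            if t.children_left[node] != -1:  # internal (splitting) node
                thresholds[t.feature[node]].append(t.threshold[node])
    return thresholds

# Toy usage: two metabolite concentrations whose levels separate two classes.
X = np.vstack([np.random.normal(0, 1, (50, 2)), np.random.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
for feat, vals in collect_thresholds(X, y).items():
    print(f"feature {feat}: median split at {np.median(vals):.2f}")
```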
APA, Harvard, Vancouver, ISO, and other styles
38

Hrydziuszko, Olga. "Development of data processing methods for high resolution mass spectrometry-based metabolomics with an application to human liver transplantation." Thesis, University of Birmingham, 2012. http://etheses.bham.ac.uk//id/eprint/3700/.

Full text
Abstract:
Direct Infusion (DI) Fourier transform ion cyclotron resonance (FT-ICR) mass spectrometry (MS) is becoming a popular measurement platform in metabolomics. This thesis aims to advance the data processing and analysis pipeline of DI FT-ICR-based metabolomics, and to broaden its applicability to clinical research. To meet the first objective, the issue of missing data in the final data matrix, which contains metabolite relative abundances measured for each sample analysed, is addressed. The nature of these data and their effect on the subsequent data analyses are investigated. Eight common and/or easily accessible missing data estimation algorithms are examined, and a three-stage approach is proposed to aid the identification of the optimal one. Finally, a novel survival analysis approach is introduced and assessed as an alternative way of treating missing data prior to univariate analysis. To address the second objective, DI FT-ICR MS based metabolomics is assessed in terms of its applicability to research investigating metabolomic changes occurring in liver grafts throughout human orthotopic liver transplantation (OLT). The feasibility of this approach in a clinical setting is validated and its potential to provide a wealth of novel metabolic information associated with OLT is demonstrated.
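For illustration only, a minimal comparison of two common missing-value estimators on a synthetic intensity matrix is sketched below; the thesis itself evaluates eight algorithms on real FT-ICR data, which is not reproduced here.

```python
# Minimal illustration (synthetic data, not the thesis pipeline): mask some
# entries of a metabolite intensity matrix, impute them, and score the error.
import numpy as np
from sklearn.impute import SimpleImputer, KNNImputer

rng = np.random.default_rng(0)
true = rng.lognormal(mean=5, sigma=1, size=(40, 20))   # samples x metabolites
mask = rng.random(true.shape) < 0.15                   # 15% of values missing
observed = true.copy()
observed[mask] = np.nan

for name, imputer in [("mean", SimpleImputer(strategy="mean")),
                      ("kNN", KNNImputer(n_neighbors=5))]:
    filled = imputer.fit_transform(observed)
    rmse = np.sqrt(np.mean((filled[mask] - true[mask]) ** 2))
    print(f"{name}: RMSE on held-out entries = {rmse:.2f}")
```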
APA, Harvard, Vancouver, ISO, and other styles
39

Asseburg, Christian. "A Bayesian approach to modelling field data on multi-species predator prey-interactions." Thesis, St Andrews, 2006. https://research-repository.st-andrews.ac.uk/handle/10023/174.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Anlind, Alice. "Improvments and evaluation of data processing in LC-MS metabolomics : for application in in vitro systems pharmacology." Thesis, Uppsala universitet, Institutionen för biologisk grundutbildning, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-329971.

Full text
Abstract:
The resistance to established medicines is rapidly increasing, while the rate of discovery of new drugs and treatments has not increased during the last decades (Spiro et al. 2008). Systems pharmacology can be used to find new combinations or concentrations of established drugs, providing new treatments faster (Borisy et al. 2003). A recent study aimed to use high-resolution liquid chromatography-mass spectrometry (LC-MS) for in vitro systems pharmacology, but encountered problems with unwanted variability and batch effects (Herman et al. 2017). This thesis builds on that work by improving the pipeline, comparing alternative methods and evaluating the methods used. The evaluation indicated that data quality was often not improved substantially by complex methods and pipelines. Instead, simpler methods such as binning for feature extraction performed best. In fact, many of the commonly used preprocessing methods proved to have negative or negligible effects on the resulting data quality. Finally, the recently introduced Optimal Orthonormal System for Discriminant Analysis (OOS-DA) for batch-effect removal was found to be a good alternative to the more complex ComBat method.
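To make the notion of batch-effect removal concrete, here is a deliberately simple sketch based on per-batch mean-centering; it is neither OOS-DA nor ComBat, and mean_center_batches is a hypothetical helper.

```python
# Hedged sketch: the simplest batch-effect correction, per-batch
# mean-centering of each feature. OOS-DA and ComBat are more elaborate;
# this only illustrates what "removing a batch effect" means in practice.
import numpy as np

def mean_center_batches(X, batches):
    """Subtract each batch's feature means, then add back the grand mean."""
    X = np.asarray(X, dtype=float)
    corrected = X.copy()
    grand_mean = X.mean(axis=0)
    for b in np.unique(batches):
        idx = np.asarray(batches) == b
        corrected[idx] -= X[idx].mean(axis=0)
    return corrected + grand_mean

# Toy usage: batch 1 carries a constant offset that the correction removes.
X = np.vstack([np.random.normal(0, 1, (10, 5)),
               np.random.normal(2, 1, (10, 5))])
batches = [0] * 10 + [1] * 10
print(mean_center_batches(X, batches).mean(axis=0))
```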
APA, Harvard, Vancouver, ISO, and other styles
41

Liang, Yiheng. "Computational Methods for Discovering and Analyzing Causal Relationships in Health Data." Thesis, University of North Texas, 2015. https://digital.library.unt.edu/ark:/67531/metadc804966/.

Full text
Abstract:
Publicly available datasets in health science are often large and observational, in contrast to experimental datasets where a small amount of data is collected in controlled experiments. The causal relationships among variables in observational datasets are yet to be determined. However, there is significant interest in health science in discovering and analyzing causal relationships from health data, since identified causal relationships will greatly help medical professionals to prevent diseases or to mitigate their negative effects. Recent advances in computer science, particularly in Bayesian networks, have initiated a renewed interest in causality research. Causal relationships can possibly be discovered through learning the network structures from data. However, the number of candidate graphs grows at a more-than-exponential rate as the number of variables increases. Exact learning for obtaining the optimal structure is thus computationally infeasible in practice. As a result, heuristic approaches are imperative to alleviate the difficulty of computation. This research provides effective and efficient learning tools for local causal discoveries and novel methods of learning causal structures with a combination of background knowledge. Specifically, in the direction of constraint-based structural learning, polynomial-time algorithms for constructing causal structures are designed with first-order conditional independence. Algorithms for efficiently discovering non-causal factors are developed and proved. In addition, when the background knowledge is partially known, methods of graph decomposition are provided so as to reduce the number of conditioned variables. Experiments on both synthetic data and real epidemiological data indicate that the provided methods are applicable to large-scale datasets and scalable for causal analysis in health data. Following the research methods and experiments, this dissertation gives thoughtful discussions on the reliability of causal discoveries in computational health science research, on complexity, and on implications in health science research.
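One primitive that constraint-based structural learning rests on is a conditional independence test; a minimal first-order version based on partial correlation is sketched below. partial_corr and the toy data are illustrative assumptions, not the dissertation's algorithms.

```python
# Illustrative building block: a first-order conditional-independence test
# via partial correlation, the kind of primitive that constraint-based
# causal structure learning relies on.
import numpy as np
from scipy import stats

def partial_corr(x, y, z):
    """Correlation of x and y after regressing out a single conditioning
    variable z (a first-order conditional independence test)."""
    rx = x - np.polyval(np.polyfit(z, x, 1), z)   # residuals of x given z
    ry = y - np.polyval(np.polyfit(z, y, 1), z)   # residuals of y given z
    return stats.pearsonr(rx, ry)

rng = np.random.default_rng(1)
z = rng.normal(size=500)
x = z + rng.normal(scale=0.5, size=500)   # x caused by z
y = z + rng.normal(scale=0.5, size=500)   # y caused by z
print(stats.pearsonr(x, y)[0])     # strongly correlated marginally
print(partial_corr(x, y, z)[0])    # near zero once z is conditioned on
```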
APA, Harvard, Vancouver, ISO, and other styles
42

Beyers, Ronald Noel. "Selecting educational computer software and evaluating its use, with special reference to biology education." Thesis, Rhodes University, 1992. http://hdl.handle.net/10962/d1003649.

Full text
Abstract:
In the field of Biology there is a reasonable amount of software available for educational use, but in the researcher's experience there are few teachers who take the computer into the classroom/laboratory. Teachers will make use of video machines and tape recorders quite happily, but a computer is a piece of apparatus which they are not prepared to use in the classroom/laboratory. This thesis is an attempt to devise an educational package, consisting of a Selection Form and an Evaluation Form, which can be used by teachers to select and evaluate educational software in the field of Biology. The forms were designed specifically for teachers to use in preparing a computer lesson. The evaluation package also provides the teacher with a means of identifying whether the lesson has achieved its objectives or not, and may provide the teacher with feedback about the lesson. The data are gathered by means of a questionnaire which the pupils complete. It would appear that teachers are uncertain as regards the purchase of software for their subject from the many catalogues that are available. The evaluation package implemented in this research can be regarded as the beginnings of a database for the accumulation of information to assist teachers with details on which software to select. Evidence is provided in this thesis for the practical application of the Selection and Evaluation Forms, using Biology software.
APA, Harvard, Vancouver, ISO, and other styles
43

Binder, Janos [Verfasser], and Ursula [Akademischer Betreuer] Kummer. "Integration and visualization of scientific big data to aid systems biology research / Janos Binder ; Akademischer Betreuer: Ursula Kummer." Heidelberg : Universitätsbibliothek Heidelberg, 2014. http://d-nb.info/1181224640/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Offei, Felix. "Denoising Tandem Mass Spectrometry Data." Digital Commons @ East Tennessee State University, 2017. https://dc.etsu.edu/etd/3218.

Full text
Abstract:
Protein identification using tandem mass spectrometry (MS/MS) has proven to be an effective way to identify proteins in a biological sample. An observed spectrum is constructed from the data produced by the tandem mass spectrometer. A protein can be identified if the observed spectrum aligns with the theoretical spectrum. However, data generated by the tandem mass spectrometer are affected by errors, making protein identification challenging in the field of proteomics. Some of these errors include wrong calibration of the instrument, instrument distortion, and noise. In this thesis, we present a pre-processing method which focuses on the removal of noisy data, with the hope of aiding better identification of proteins. We employ the method of binning to reduce the number of noise peaks in the data without sacrificing the alignment of the observed spectrum with the theoretical spectrum. In some cases, the alignment of the two spectra improved.
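A minimal sketch of the binning idea follows; the bin width and the bin_spectrum helper are illustrative assumptions rather than the thesis's actual settings.

```python
# Toy sketch of spectrum binning: peaks falling into the same m/z bin are
# merged, which suppresses isolated low-intensity noise peaks while keeping
# the dominant fragment peaks in place.
import numpy as np

def bin_spectrum(mz, intensity, bin_width=0.5):
    """Sum intensities within fixed-width m/z bins; return bin centres
    and summed intensities for the non-empty bins."""
    mz = np.asarray(mz)
    intensity = np.asarray(intensity, dtype=float)
    edges = np.arange(mz.min(), mz.max() + bin_width, bin_width)
    which = np.digitize(mz, edges)
    binned = {}
    for b, i in zip(which, intensity):
        binned[b] = binned.get(b, 0.0) + i
    centres = [edges[b - 1] + bin_width / 2 for b in sorted(binned)]
    return np.array(centres), np.array([binned[b] for b in sorted(binned)])

mz = [100.02, 100.31, 250.50, 250.52, 399.9]   # two peaks merge near 250.5
inten = [5, 3, 120, 80, 2]
print(bin_spectrum(mz, inten))
```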
APA, Harvard, Vancouver, ISO, and other styles
45

Kirsch, Matthew Robert. "Signal Processing Algorithms for Analysis of Categorical and Numerical Time Series: Application to Sleep Study Data." Case Western Reserve University School of Graduate Studies / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=case1278606480.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Decker, Jennie Jo. "Display spatial luminance nonuniformities: effects on operator performance and perception." Diss., Virginia Polytechnic Institute and State University, 1989. http://hdl.handle.net/10919/54510.

Full text
Abstract:
This dissertation examined the effects of display spatial luminance nonuniformities on operator performance and perception. The objectives of this research were to develop definitions of nonuniformity, develop accurate measurement techniques, determine acceptable levels of nonuniformities, and to develop a predictive model based on user performance data. Nonuniformities were described in terms of spatial frequency, amplitude, display luminance, gradient shape, and number of dimensions. Performance measures included a visual random search task and a subjective measure to determine users' perceptions of the nonuniformities. Results showed that users were able to perform the search task in the presence of appreciable nonuniformities. It was concluded that current published recommendations for acceptable levels of nonuniformities are adequately specified. Results from the subjective task showed that users were sensitive to the presence of nonuniformities in terms of their perceptions of uniformity. Specifically, results showed that as spatial frequency increased, perceived uniformity ratings increased. That is, users rated nonuniformities to be less noticeable. As amplitude and display luminance increased, the users' ratings of perceived uniformity decreased; that is, they rated the display as being farther from a uniform field. There were no differences in impressions between a sine and triangle gradient shape, while a square gradient shape resulted in lower ratings of perceived uniformity. Few differences were attributed to the dimension (1-D versus 2-D) of the nonuniformity and results were inconclusive because dimension was confounded with the display luminance. Nonuniformities were analyzed using Fourier techniques to determine the amplitudes of the coefficients for each nonuniformity pattern. These physical descriptors were used to develop models to predict users' perceptions of the nonuniformities. A few models yielded good fits of the subjective data. It was concluded that the method for describing and measuring nonuniformities was successful. Also, the results of this research were in strong concurrence with previous research in the area of spatial vision.
Ph. D.
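As a hedged illustration of the physical descriptors mentioned above, the following computes Fourier coefficient amplitudes from a synthetic one-dimensional luminance profile; the sinusoidal stimulus and the scaling are assumptions, not the dissertation's stimuli.

```python
# Hedged illustration: extract Fourier coefficient amplitudes from a 1-D
# display luminance profile, the kind of descriptor used to model
# perceived nonuniformity.
import numpy as np

n = 512                                              # samples across the display
x = np.arange(n)
luminance = 50 + 5 * np.sin(2 * np.pi * 3 * x / n)   # 3 cycles, amplitude 5 cd/m^2

spectrum = np.fft.rfft(luminance)
amplitude = 2 * np.abs(spectrum) / n                 # scale to physical amplitudes
amplitude[0] /= 2                                    # DC term is not doubled
print(f"mean luminance ~ {amplitude[0]:.1f}, peak at 3 cycles ~ {amplitude[3]:.1f}")
```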
APA, Harvard, Vancouver, ISO, and other styles
47

Tran, Thao Thanh Thi. "Genomic data mining for the computational prediction of small non-coding RNA genes." Diss., Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/33966.

Full text
Abstract:
The objective of this research is to develop a novel computational prediction algorithm for non-coding RNA (ncRNA) genes using features computable for any genomic sequence, without the need for comparative analysis. Existing comparative-based methods require knowledge of closely related organisms in order to search for sequence and structural similarities. This approach imposes constraints on the type of ncRNAs, the organism, and the regions where the ncRNAs can be found. We have developed a novel approach for ncRNA gene prediction without the limitations of current comparative-based methods. Our work has established an ncRNA database required for subsequent feature and genomic analysis. Furthermore, we have identified significant features from folding-, structural-, and ensemble-based statistics for use in ncRNA prediction. We have also examined higher-order gene structures, namely operons, to discover potential insights into how ncRNAs are transcribed. Being able to automatically identify ncRNAs on a genome-wide scale is immensely powerful, allowing the method to be incorporated into a pipeline for large-scale genome annotation. This work will contribute to a more comprehensive annotation of ncRNA genes in microbial genomes to meet the demands of functional and regulatory genomic studies.
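To show what "features computable for any genomic sequence" can look like in the simplest case, here is a toy sketch of sequence-only features (GC content and dinucleotide frequencies); the thesis's folding-, structural-, and ensemble-based statistics require an RNA secondary structure predictor and are not reproduced here.

```python
# Toy sketch of comparative-analysis-free features: GC content and
# dinucleotide frequencies computable from the sequence alone.
from collections import Counter
from itertools import product

def sequence_features(seq):
    seq = seq.upper()
    feats = {"gc": (seq.count("G") + seq.count("C")) / len(seq)}
    dinucs = Counter(seq[i:i + 2] for i in range(len(seq) - 1))
    total = max(sum(dinucs.values()), 1)
    for a, b in product("ACGT", repeat=2):
        feats[a + b] = dinucs[a + b] / total
    return feats

print(sequence_features("ACGUACGGCGCGAUAU".replace("U", "T")))
```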
APA, Harvard, Vancouver, ISO, and other styles
48

Ho, Ngai-lam, and 何毅林. "Algorithms on constrained sequence alignment." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hub.hku.hk/bib/B30201949.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Xia, Tian. "Research on virtualisation technlogy for real-time reconfigurable systems." Thesis, Rennes, INSA, 2016. http://www.theses.fr/2016ISAR0009/document.

Full text
Abstract:
This thesis describes an original hypervisor-style micro-kernel, called Ker-ONE, that manages virtualization for embedded systems on SoC platforms and provides an environment for real-time virtual machines. We have simplified the micro-kernel architecture by keeping only the critical features required for virtualization, massively reducing the kernel design complexity. Based on this micro-kernel, we have introduced a framework capable of managing dynamic partial reconfiguration (DPR) resources in a virtual machine system. DPR accelerators are mapped as ordinary devices in each VM. Through dedicated memory management, our framework automatically detects requests for DPR resources and allocates them dynamically on the FPGA. According to various experiments and evaluations on the Zynq-7000 platform, which combines an ARM processor with FPGA resources, we have shown that Ker-ONE causes very low virtualization overheads, which can generally be ignored in real applications. We have also studied real-time schedulability in virtual machines. The results show that RTOS tasks are guaranteed to be scheduled while meeting their intra-VM timing constraints. We have also demonstrated that the proposed framework is capable of allocating DPR accelerators to virtual machines with low overhead.
APA, Harvard, Vancouver, ISO, and other styles
50

El, Shobaki Mohammed. "On-chip monitoring for non-intrusive hardware/software observability." Licentiate thesis, Uppsala : Dept. of Information Technology, Univ, 2004. http://www.it.uu.se/research/reports/lic/2004-004/.

Full text
APA, Harvard, Vancouver, ISO, and other styles