
Dissertations / Theses on the topic 'Medicine – Research – Data processing'


Consult the top 50 dissertations / theses for your research on the topic 'Medicine – Research – Data processing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Suwarno, Neihl Omar 1963. "A computer based data acquisition and analysis system for a cardiovascular research laboratory." Thesis, The University of Arizona, 1989. http://hdl.handle.net/10150/558111.

2

Liang, Yiheng. "Computational Methods for Discovering and Analyzing Causal Relationships in Health Data." Thesis, University of North Texas, 2015. https://digital.library.unt.edu/ark:/67531/metadc804966/.

Abstract:
Publicly available datasets in health science are often large and observational, in contrast to experimental datasets, where small amounts of data are collected under controlled conditions. The causal relationships among variables in observational datasets are yet to be determined. There is nevertheless significant interest in health science in discovering and analyzing causal relationships from health data, since identified causal relationships can greatly help medical professionals prevent diseases or mitigate their negative effects. Recent advances in computer science, particularly in Bayesian networks, have renewed interest in causality research. Causal relationships can potentially be discovered by learning network structures from data. However, the number of candidate graphs grows more than exponentially with the number of variables, so exact learning of the optimal structure is computationally infeasible in practice. As a result, heuristic approaches are needed to make the computation tractable. This research provides effective and efficient learning tools for local causal discovery and novel methods for learning causal structures in combination with background knowledge. Specifically, in the direction of constraint-based structural learning, polynomial-time algorithms for constructing causal structures are designed using first-order conditional independence. Algorithms for efficiently discovering non-causal factors are developed and proved correct. In addition, when the background knowledge is partially known, methods of graph decomposition are provided to reduce the number of conditioned variables. Experiments on both synthetic data and real epidemiological data indicate that the provided methods are applicable to large-scale datasets and scale to causal analysis of health data. Following the methods and experiments, the dissertation discusses the reliability of causal discovery in computational health science research, its complexity, and its implications for health science research.
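For a sense of what polynomial-time, first-order conditional-independence screening looks like, here is a minimal sketch for continuous data. The Fisher-z partial-correlation test and the restriction to conditioning sets of size at most one are illustrative choices, not the dissertation's exact algorithms.

```python
import numpy as np
from itertools import combinations
from scipy.stats import norm

def fisher_z_pvalue(r, n, k):
    """Two-sided p-value for a (partial) correlation r with k conditioning vars."""
    r = np.clip(r, -0.999999, 0.999999)
    z = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(n - k - 3)
    return 2 * (1 - norm.cdf(abs(z)))

def first_order_skeleton(X, alpha=0.01):
    """Undirected skeleton using only zeroth- and first-order CI tests.

    Restricting conditioning sets to size <= 1 keeps the number of tests
    polynomial (O(p^3)) in the number of variables p.
    """
    n, p = X.shape
    corr = np.corrcoef(X, rowvar=False)
    adj = np.ones((p, p), dtype=bool) & ~np.eye(p, dtype=bool)
    for i, j in combinations(range(p), 2):
        # Zeroth-order: marginal independence removes the edge.
        if fisher_z_pvalue(corr[i, j], n, 0) > alpha:
            adj[i, j] = adj[j, i] = False
            continue
        # First-order: independence given any single other variable.
        for z in range(p):
            if z in (i, j):
                continue
            num = corr[i, j] - corr[i, z] * corr[j, z]
            den = np.sqrt((1 - corr[i, z] ** 2) * (1 - corr[j, z] ** 2))
            if den > 0 and fisher_z_pvalue(num / den, n, 1) > alpha:
                adj[i, j] = adj[j, i] = False
                break
    return adj
```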
3

Majeke, Lunga. "Preliminary investigation into estimating eye disease incidence rate from age specific prevalence data." Thesis, University of Fort Hare, 2011. http://hdl.handle.net/10353/464.

Abstract:
This study presents a methodology for estimating the incidence rate from age-specific prevalence data for three different eye diseases. We consider situations where mortality may differ between people with and without the disease, as well as situations where it does not. The method used was developed by Marvin J. Podgor for estimating incidence rates from prevalence data; it applies logistic regression to obtain smoothed prevalence rates, from which the incidence rate is derived. The study concluded that logistic regression can produce a meaningful model, and that the incidence rates of these diseases were not affected by the assumption of differential mortality.
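The logic of the logistic-smoothing step can be illustrated compactly. If prevalence is smoothed as P(a) = expit(b0 + b1*a) and one assumes an irreversible disease with non-differential mortality, then the incidence rate is I(a) = P'(a) / (1 - P(a)) = b1 * P(a). The sketch below fits the logistic model to invented grouped counts; it illustrates that derivation and is not the thesis's code.

```python
import numpy as np

def fit_logistic(age, cases, total, iters=25):
    """Fit logit P(a) = b0 + b1*a to grouped prevalence data by Newton-Raphson."""
    X = np.column_stack([np.ones_like(age, dtype=float), age.astype(float)])
    beta = np.zeros(2)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = total * p * (1 - p)                  # binomial weights
        grad = X.T @ (cases - total * p)         # score vector
        hess = X.T @ (X * W[:, None])            # Fisher information
        beta += np.linalg.solve(hess, grad)
    return beta

# Illustrative (made-up) age-specific prevalence counts for one eye disease.
age   = np.array([45, 50, 55, 60, 65, 70, 75, 80])
cases = np.array([ 3,  5,  9, 14, 22, 31, 40, 52])
total = np.array([200, 210, 205, 198, 190, 180, 160, 140])

b0, b1 = fit_logistic(age, cases, total)
p_hat = 1.0 / (1.0 + np.exp(-(b0 + b1 * age)))
# With P(a) = expit(b0 + b1*a): P'(a) = b1*P(1-P), so I(a) = P'/(1-P) = b1*P(a).
incidence = b1 * p_hat
for a, i in zip(age, incidence):
    print(f"age {a}: estimated incidence {i:.4f} per person-year")
```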
4

Shi, H. (Henglin). "A GQM-based open research data technology evalution method in open research context." Master's thesis, University of Oulu, 2016. http://urn.fi/URN:NBN:fi:oulu-201605221853.

Abstract:
Open Research Data is gaining popularity, and various research units and individuals are interested in joining this trend. However, because of the variety of Open Research Data technologies, they find it difficult to select the ones suited to their specific requirements. This study therefore develops a method for evaluating Open Research Data technologies to help researchers select appropriate ones. First, theoretical knowledge of the barriers to research data sharing and reuse is derived from a structured literature review: from 19 primary studies, 96 instances of barriers are identified and classified into seven categories, four concerning data sharing and three concerning data reuse. This knowledge is an important resource for understanding researchers' requirements on Open Research Data technologies and is used to develop the technology evaluation method. Next, the Open Research Data Technology Evaluation Method (ORDTEM) is developed based on the Goal/Question/Metric (GQM) approach and the identified sharing and reuse barriers; the GQM approach serves as the skeleton for transforming these barriers into measurable criteria. The resulting ORDTEM consists of six GQM evaluation questions and 14 metrics that researchers can use to evaluate Open Research Data technologies. To validate the GQM-based ORDTEM, a focus-group study was conducted in a workshop with nine researchers who need to participate in Open Research Data activities. Analysis of the discussion yielded 16 critical opinions, which led to eight improvements: one refinement of an existing metric and seven new metrics. Lastly, ORDTEM was applied to evaluate four selected Open Research Data technologies, to test whether it can be used for real-world evaluation tasks; beyond validation, this experiment also produced material on the usage of ORDTEM that will be useful to future adopters. Besides a solution to the difficulty of selecting technologies for participating in Open Research Data movements, this study makes two further contributions: the identified sharing and reuse barriers can direct future efforts to promote Open Research Data and Open Science, and the experience of using the GQM approach to transform requirements into evaluation criteria can inform the development of other requirement-specific evaluations.
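A minimal sketch of the GQM skeleton that ORDTEM builds on: a goal decomposed into evaluation questions, each answered by measurable metrics scored per technology. The goal, questions, and metrics below are illustrative paraphrases, not ORDTEM's actual six questions and fourteen metrics.

```python
# A minimal, illustrative GQM tree. The entries paraphrase the ORDTEM idea;
# they are assumptions for illustration, not the method's real metrics.
gqm = {
    "goal": "Select an Open Research Data technology that lowers sharing/reuse barriers",
    "questions": [
        {
            "text": "How well does the technology support data sharing?",
            "metrics": ["supported file formats", "licence options", "DOI minting"],
        },
        {
            "text": "How well does the technology support data reuse?",
            "metrics": ["metadata richness", "search/API access", "versioning"],
        },
    ],
}

def evaluate(technology_scores: dict) -> float:
    """Aggregate per-metric scores (0-5) into one number per technology."""
    metrics = [m for q in gqm["questions"] for m in q["metrics"]]
    return sum(technology_scores.get(m, 0) for m in metrics) / len(metrics)

print(evaluate({"supported file formats": 4, "metadata richness": 5, "versioning": 3}))
```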
5

Lee, Chi-hung, and 李志鴻. "An efficient content-based searching engine for medical image database." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1998. http://hub.hku.hk/bib/B31215506.

6

Lynch, Kevin John. "Data manipulation in collaborative research systems." Diss., The University of Arizona, 1989. http://hdl.handle.net/10150/184923.

Abstract:
This dissertation addresses data manipulation in collaborative research systems, including what data should be stored, the operations to be performed on that data, and a programming interface to effect this manipulation. Collaborative research systems are discussed, and requirements for next-generation systems are specified, incorporating a range of emerging technologies including multimedia storage and presentation, expert systems, and object-oriented database management systems. A detailed description of a generic query processor constructed specifically for one collaborative research system is given, and its applicability to next-generation systems and emerging technologies is examined. Chapter 1 discusses the Arizona Analyst Information System (AAIS), a successful collaborative research system being used at the University of Arizona and elsewhere. Chapter 2 describes the generic query processing approach used in the AAIS, as an efficient, nonprocedural, high-level programmer interface to databases. Chapter 3 specifies requirements for next-generation collaborative research systems that encompass the entire research cycle for groups of individuals working on related topics over time. These requirements are being used to build a next-generation collaborative research system at the University of Arizona called CARAT, for Computer Assisted Research and Analysis Tool. Chapter 4 addresses the underlying data management systems in terms of the requirements specified in Chapter 3. Chapter 5 revisits the generic query processing approach used in the AAIS, in light of the requirements of Chapter 3, and the range of data management solutions described in Chapter 4. Chapter 5 demonstrates the generic query processing approach as a viable one, for both the requirements of Chapter 3 and the DBMSs of Chapter 4. The significance of this research takes several forms. First, Chapters 1 and 3 provide detailed views of a current collaborative research system, and of a set of requirements for next-generation systems based on years of experience both using and building the AAIS. Second, the generic query processor described in Chapters 2 and 5 is shown to be an effective, portable programming language to database interface, ranging across the set of requirements for collaborative research systems as well as a number of underlying data management solutions.
7

Woldeselassie, Tilahun. "A simple microcomputer-based nuclear medicine data processing system design and performance testing." Thesis, University of Aberdeen, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.316066.

Abstract:
This thesis investigates the feasibility of designing a simple nuclear medicine data processing system based on an inexpensive microcomputer, affordable to small hospitals and to developing countries where resources are limited. Since the main need for a computer is to allow dynamic studies to be carried out, the relevant criteria for choosing the computer are its speed and memory capacity. The benchmark chosen for these criteria is renography, one of the commonest nuclear medicine procedures. The Acorn Archimedes model 310 microcomputer was found to meet these requirements, and a suitable camera-computer interface has been designed. Because the gain and offset controls of the interface must be set optimally before connecting to the camera, it was necessary to design a circuit which produces a test pattern on the screen for use during this operation. After the data acquisition and image display software had been developed and tested successfully, attention was concentrated on finding ways of characterising and measuring the performance of the computer interface and the display device, two important areas which have been largely neglected in the quality control of camera-computer systems. One of the characteristics of the interface is its deadtime. A procedure has been outlined for measuring this by means of a variable frequency pulse generator and for interpreting the data correctly. A theoretical analysis of the way in which the interface deadtime affects the overall count rate performance of the system has also been provided. The spatial linearity, resolution and uniformity characteristics of the interface are measured using a special dual staircase generator circuit designed to simulate the camera position and energy signals. The test pattern set up on the screen consists of an orthogonal grid of points which can be used for a visual assessment of linearity, while analysis of the data in memory enables performance indices for resolution, linearity and uniformity to be computed. The thesis investigates the performance characteristics of display devices by means of radiometric measurements of screen luminance. These reveal that the relationship between screen luminance and display grey level value can be taken as quadratic. Characterising the display device in this way enables software techniques to be employed to ensure that screen luminance is a linear function of display grey level value; screen luminance measurements, coupled with film density measurements, are also used to optimise the settings of the display controls so that the film is used in the linear range of its optical densities. This in turn ensures that film density is a linear function of grey level value. An alternative approach to correcting for display nonlinearity is by means of an electronic circuit described in this thesis. Intensity coding schemes for improving the quality of grey scale images can be effective only if distortion due to the display device is corrected. The thesis also draws attention to significant variations in film density which may have their origins in nonuniformities in the display screen, the recording film, or the performance of the film processor. The work on display devices has been published in two papers.
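The quadratic luminance model lends itself to a simple software linearisation. The sketch below, in Python for brevity, fits L(g) = a + b*g + c*g^2 to a handful of measured points and builds a correction lookup table; the luminance values are invented for illustration, and the lookup-table inversion is one possible scheme rather than the thesis's circuit or software.

```python
import numpy as np

# Measured screen luminance (cd/m^2) at a few grey-level settings -- invented
# values standing in for the radiometric measurements described above.
grey = np.array([0, 32, 64, 96, 128, 160, 192, 224, 255], dtype=float)
lum  = np.array([0.5, 2.1, 6.0, 12.4, 21.3, 32.8, 46.7, 63.2, 82.0])

# Fit the quadratic model L(g) = a + b*g + c*g^2 reported for the display.
c, b, a = np.polyfit(grey, lum, 2)

# Build a 256-entry lookup table: for each requested level, pick the raw grey
# level whose predicted luminance is closest to the desired *linear* target.
targets = np.linspace(lum.min(), lum.max(), 256)
raw = np.arange(256, dtype=float)
predicted = a + b * raw + c * raw**2
lut = np.array([int(np.argmin(np.abs(predicted - t))) for t in targets])

# Displaying lut[g] instead of g makes luminance approximately linear in g.
print(lut[::32])
```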
8

Herzberg, Nico, and Mathias Weske. "Enriching raw events to enable process intelligence : research challenges." Universität Potsdam, 2013. http://opus.kobv.de/ubp/volltexte/2013/6401/.

Abstract:
Business processes are performed as part of a company's daily business, producing valuable data about process execution. The quantity and quality of these data depend strongly on the process execution environment, which ranges from predominantly manual to fully automated. Process improvement is an essential cornerstone of business process management for ensuring companies' competitiveness, and it relies on information about process execution. Especially in manual process environments, data directly related to process execution are sparse and incomplete. In this paper, we present an approach that supports the use and enrichment of process execution data with context data – data that exist orthogonally to business process data – and with knowledge from the corresponding process models, to provide a high-quality event base for process intelligence, subsuming, among others, process monitoring, process analysis, and process mining. Further, we discuss open issues and challenges that are the subject of our future work.
9

Zhu, Hui, and 朱暉. "Deformable models and their applications in medical image processing." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1998. http://hub.hku.hk/bib/B31238075.

10

Halpin, Ross William. "A history of concern: the ethical dilemma of using Nazi medical research data in contemporary medical and scientific research." University of Sydney, 2008. http://hdl.handle.net/2123/4010.

11

Hagedorn, Benjamin, Michael Schöbel, Matthias Uflacker, Flavius Copaciu, and Nikola Milanovic. "Proceedings of the fall 2006 workshop of the HPI research school on service-oriented systems engineering." Universität Potsdam, 2007. http://opus.kobv.de/ubp/volltexte/2009/3305/.

Abstract:
1. Design and Composition of 3D Geoinformation Services (Benjamin Hagedorn)
2. Operating System Abstractions for Service-Based Systems (Michael Schöbel)
3. A Task-oriented Approach to User-centered Design of Service-Based Enterprise Applications (Matthias Uflacker)
4. A Framework for Adaptive Transport in Service-Oriented Systems based on Performance Prediction (Flavius Copaciu)
5. Asynchronicity and Loose Coupling in Service-Oriented Architectures (Nikola Milanovic)
12

Olivier, Hannelore. "Musical networks : the case for a neural network methodology in advertisement music research." Thesis, Stellenbosch : University of Stellenbosch, 2005. http://hdl.handle.net/10019.1/16618.

Abstract:
Thesis (M.Mus.)--University of Stellenbosch, 2005.
Scientists have struggled for centuries to find a significant connection between cognition, emotion and reasoning, with the result that even the most basic human cognition is still imperfectly understood today. It is unlikely that breakthroughs in the cognitive sciences, psychology, sociology or the medical sciences will elucidate everything about the human brain and behaviour in the near future. It is therefore realistic to turn our attention to the things that we do know and understand, and to reconsider the power that lies in the integration of results and an interdisciplinary perspective in research. With the tools at our disposal today, digital tools such as ANNs which did not exist a few decades ago, this is readily viable. This thesis demonstrates that it is possible to break the traditional boundaries that have prevented the humanities and the natural sciences from joining forces towards a greater understanding of human beings. Using ANNs, we are able to merge data from any subfield within the humanities and natural sciences in a single study. The results, interpretations and applications which could develop from such a study would be more inclusive than those derived from research conducted in one or two of these fields in isolation. Sufficient evidence is provided in this dissertation to support a methodology which employs an artificial neural network to assist with decision-making processes related to the choice of advertisement music. The main objective is to establish the feasibility of combining data from many diverse fields in the creation of an ANN that can be helpful in research on South African advertisement music. The thesis explores the notion that knowledge from many interdisciplinary study fields ought to play a leading role in the creation and assessment of effective, target-group-specific advertisement music. To this end, it examines the feasibility of producing a computer-based tool which can assist people working in the advertising industry to obtain an educated match between product, consumer, and advertisement music. Taking a multidisciplinary point of view, the author suggests a methodology for the design of a digital tool in the form of a musical network model. It is concluded that, by using this musical network, it is possible to produce a functional musically paired commercial which effectively addresses its target group and has an appropriate emotional effect in support of the marketing goals of the advertising agent. The thesis also demonstrates that it is possible to gain new insights into a fairly unstudied discipline without necessarily conducting new research studies in the specified field: by taking an interdisciplinary approach and using ANNs, new data that are scientifically valid can be obtained even in an unacknowledged field such as South African advertisement music. Although the scope of the thesis does not provide for the actual implementation of the musical network, the feasibility of the conceptual idea is thoroughly examined, and it is concluded that the theory in its entirety is feasible and can be implemented in a future study.
13

Ford, Ralph M. (Ralph Michael) 1965. "Computer-aided analysis of medical infrared images." Thesis, The University of Arizona, 1989. http://hdl.handle.net/10150/276986.

Abstract:
Thermography is a useful tool for analyzing spinal nerve root irritation, but interpretation of digital infrared images is often qualitative and subjective. A new quantitative, computer-aided method for analyzing thermograms, utilizing the human dermatome map, is presented. Image processing and pattern recognition principles needed to accomplish this goal are discussed. Algorithms for segmentation, boundary detection and interpretation of thermograms are presented. An interactive, user-friendly program to perform this analysis has been developed. Due to the relatively large number of images in an exam, speed and simplicity were emphasized in algorithm development. The results obtained correlate well with clinical data and show promise for aiding the diagnosis of spinal nerve root irritation.
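As a generic illustration of the segmentation and boundary-detection stages (not the dermatome-based method of this thesis), here is a minimal sketch: Otsu's threshold separates the warm body region from the background, and the region's boundary is the mask minus its erosion. The synthetic image stands in for a real thermogram.

```python
import numpy as np
from scipy.ndimage import binary_erosion

def otsu_threshold(img):
    """Return the grey level that maximizes between-class variance."""
    hist, edges = np.histogram(img, bins=256)
    mids = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist)
    w1 = w0[-1] - w0
    m0 = np.cumsum(hist * mids)
    mu0 = np.where(w0 > 0, m0 / np.maximum(w0, 1), 0)
    mu1 = np.where(w1 > 0, (m0[-1] - m0) / np.maximum(w1, 1), 0)
    between = w0 * w1 * (mu0 - mu1) ** 2
    return mids[np.argmax(between)]

def segment_and_outline(thermogram):
    """Segment the warm region and return its one-pixel boundary mask."""
    mask = thermogram > otsu_threshold(thermogram)
    boundary = mask & ~binary_erosion(mask)
    return mask, boundary

# Synthetic 64x64 "thermogram": a warm elliptical region on a cool background.
yy, xx = np.mgrid[0:64, 0:64]
img = 30 + 6.0 * (((xx - 32) / 20) ** 2 + ((yy - 32) / 28) ** 2 < 1) \
         + np.random.default_rng(0).normal(0, 0.3, (64, 64))
mask, boundary = segment_and_outline(img)
print(mask.sum(), boundary.sum())
```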
14

Kesterton, Anthony James. "The synthesis of sound with application in a MIDI environment." Thesis, Rhodes University, 1991. http://hdl.handle.net/10962/d1006701.

Abstract:
The wide range of options for experimentation with the synthesis of sound is usually expensive, difficult to obtain, or limiting for the experimenter. The work described in this thesis shows how the IBM PC and software can be combined to provide a suitable platform for experimentation with different synthesis techniques. This platform is based on the PC, the Musical Instrument Digital Interface (MIDI) and a musical instrument called a digital sampler. The fundamental concepts of sound are described, with reference to digital sound reproduction. A number of synthesis techniques are described and evaluated according to the criteria of generality, efficiency and control. The techniques discussed are additive synthesis, frequency modulation synthesis, subtractive synthesis, granular synthesis, resynthesis, wavetable synthesis, and sampling. Spiral synthesis, physical modelling, waveshaping and spectral interpolation are discussed briefly. The Musical Instrument Digital Interface is a standard method of connecting digital musical instruments together; it is the MIDI standard and equipment conforming to it that make this implementation of synthesis techniques possible. As a demonstration of the PC platform, additive synthesis, frequency modulation synthesis, granular synthesis and spiral synthesis have been implemented in software. A PC equipped with a MIDI interface card performs the synthesis, and the MIDI protocol is used to transmit the resultant sound to a digital sampler. The INMOS transputer is used as an accelerator, as the calculation of a waveform in software is computationally intensive. It is concluded that sound synthesis can be performed successfully using a PC and appropriate software, utilizing the facilities provided by a MIDI environment including a digital sampler.
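For a sense of how little code two of the techniques named above require on modern hardware, below is a minimal Python/NumPy sketch of additive synthesis and simple two-operator FM synthesis, written to WAV files. The frequencies and modulation index are arbitrary choices, and the MIDI transmission to a sampler described in the thesis is omitted.

```python
import numpy as np
import wave

RATE = 44100
t = np.linspace(0, 1.0, RATE, endpoint=False)

# Additive synthesis: sum of harmonics with 1/n amplitude roll-off.
additive = sum(np.sin(2 * np.pi * 220 * n * t) / n for n in range(1, 9))

# Simple FM synthesis: carrier fc phase-modulated by fm with index I.
fc, fm, index = 440.0, 110.0, 5.0
fm_tone = np.sin(2 * np.pi * fc * t + index * np.sin(2 * np.pi * fm * t))

def write_wav(path, signal):
    """Write a mono 16-bit WAV file using only the standard library."""
    samples = (signal / np.max(np.abs(signal)) * 32767).astype(np.int16)
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(RATE)
        w.writeframes(samples.tobytes())

write_wav("additive.wav", additive)
write_wav("fm.wav", fm_tone)
```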
15

Chao, Sam. "Novel data mining methodologies for medical data processing and application on i+diagnostic workbench." Thesis, University of Macau, 2008. http://umaclib3.umac.mo/record=b1872954.

16

Pawlowski, Colin. "Machine learning for problems with missing and uncertain data with applications to personalized medicine." Thesis, Massachusetts Institute of Technology, 2019.

Abstract:
When we try to apply statistical learning in real-world applications, we frequently encounter data which include missing and uncertain values. This thesis explores the problem of learning from missing and uncertain data with a focus on applications in personalized medicine. In the first chapter, we present a framework for classification when data is uncertain that is based upon robust optimization. We show that adding robustness in both the features and labels results in tractable optimization problems for three widely used classification methods: support vector machines, logistic regression, and decision trees. Through experiments on 75 benchmark data sets, we characterize the learning tasks for which adding robustness provides the most value. In the second chapter, we develop a family of methods for missing data imputation based upon predictive methods and formal optimization.
We present formulations for models based on K-nearest neighbors, support vector machines, and decision trees, and we develop an algorithm OptImpute to find high quality solutions which scales to large data sets. In experiments on 84 benchmark data sets, we show that OptImpute outperforms state-of-the-art methods in both imputation accuracy and performance on downstream tasks. In the third chapter, we develop MedImpute, an extension of OptImpute specialized for imputing missing values in multivariate panel data. This method is tailored for data sets that have multiple observations of the same individual at different points in time. In experiments on the Framingham Heart Study and Dana Farber Cancer Institute electronic health record data, we demonstrate that MedImpute improves the accuracy of models predicting 10-year risk of stroke and 60-day risk of mortality for late-stage cancer patients.
In the fourth chapter, we develop a method for tensor completion which leverages noisy side information available on the rows and/or columns of the tensor. We apply this method to the task of predicting anti-cancer drug response at particular dosages. We demonstrate significant gains in out-of-sample accuracy filling in missing values on two large-scale anticancer drug screening data sets with genomic side information.
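A rough sketch of the flavor of KNN-based imputation: the plain iterative heuristic below is in the spirit of the K-nearest-neighbors variant of OptImpute, not the thesis's optimization formulation, and the data are synthetic.

```python
import numpy as np

def knn_impute(X, k=5, iters=10):
    """Iterative KNN imputation: warm-start with column means, then refine
    each missing entry with the mean of its k nearest rows."""
    X = X.astype(float).copy()
    missing = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[missing] = np.take(col_means, np.where(missing)[1])  # warm start
    for _ in range(iters):
        for i in np.unique(np.where(missing)[0]):
            dists = np.sqrt(((X - X[i]) ** 2).sum(axis=1))
            dists[i] = np.inf
            neighbors = np.argsort(dists)[:k]
            for j in np.where(missing[i])[0]:
                X[i, j] = X[neighbors, j].mean()
    return X

rng = np.random.default_rng(0)
data = rng.normal(size=(100, 5))
data[rng.random(data.shape) < 0.1] = np.nan   # knock out ~10% of entries
completed = knn_impute(data)
print(np.isnan(completed).any())  # -> False
```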
17

Kishore, Annapoorni. "AN INTERNSHIP WITH ENVIRONMENTAL SYSTEMS RESEARCH INSTITUTE." Miami University / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=miami1209153230.

18

Zhao, Guang, and 趙光. "Automatic boundary extraction in medical images based on constrained edge merging." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B31223904.

19

Chou, Chuan-Ting. "Traditional Chinese medicine on-line diagnosis system." CSUSB ScholarWorks, 2006. https://scholarworks.lib.csusb.edu/etd-project/3182.

Abstract:
The project developed a web-based application that provides a user-friendly interface to assist practitioners of traditional Chinese medicine in determining the correct diagnosis. The Traditional Chinese Medicine On-line Diagnosis System (TCMODS) allows a diagnostician to enter a patient's symptoms through a series of questionnaires to determine health status, which is then stored in the database as part of the patient's medical record. The database also differentiates among the patterns of syndromes known in traditional Chinese medicine and matches the patient's data against the known uses of Chinese herbs. TCMODS then generates the patient's medical record, including the symptoms of the ailment, the syndrome, and a prescription. User identification and access privileges are differentiated in order to maintain the integrity of the patient medical data and the information needed to make diagnoses. The project was designed to function across platforms and was written using HTML, JSP, and MySQL.
20

Boyle, John K. "Performance Metrics for Depth-based Signal Separation Using Deep Vertical Line Arrays." PDXScholar, 2015. https://pdxscholar.library.pdx.edu/open_access_etds/2198.

Abstract:
Vertical line arrays (VLAs) deployed below the critical depth in the deep ocean can exploit reliable acoustic path (RAP) propagation, which provides low transmission loss (TL) for targets at moderate ranges and increased TL for distant interferers. However, sound from nearby surface interferers also undergoes RAP propagation, and without horizontal aperture, a VLA cannot separate these interferers from submerged targets. A recent publication by McCargar and Zurk (2013) addressed this issue, presenting a transform-based method for passive, depth-based separation of signals received on deep VLAs, based on the depth-dependent modulation caused by the interference between the direct and surface-reflected acoustic arrivals. This thesis expands on that work by quantifying the performance of the transform-based depth estimation method in terms of the resolution of, and ambiguity in, the depth estimate. The depth discrimination performance is then quantified in terms of the number of VLA elements.
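The mechanism can be illustrated with a toy simulation: the surface-reflected arrival (reflection coefficient -1) interferes with the direct arrival, so the broadband intensity spectrum oscillates in frequency with a period set by the source depth, and a transform over frequency recovers that depth. The geometry, the assumed-known elevation angle, and the plain FFT below are simplifying assumptions, not McCargar and Zurk's formulation.

```python
import numpy as np

c = 1500.0               # sound speed (m/s)
theta = np.deg2rad(15)   # source elevation angle seen from the deep VLA (assumed known)
z_true = 80.0            # source depth (m)
tau = 2 * z_true * np.sin(theta) / c   # direct/surface-reflected delay

# Broadband intensity spectrum: the surface reflection interferes with the
# direct path, |H(f)|^2 = 4 sin^2(pi f tau) -- the depth-dependent modulation.
f = np.linspace(50, 950, 1024)         # Hz
spectrum = 4 * np.sin(np.pi * f * tau) ** 2
spectrum += np.random.default_rng(1).normal(0, 0.2, f.size)   # measurement noise

# Transform over frequency: the oscillation period 1/tau maps to a peak at tau,
# which is then converted back to a depth estimate.
spec = spectrum - spectrum.mean()
lags = np.fft.rfftfreq(f.size, d=f[1] - f[0])   # "delay" axis (s)
power = np.abs(np.fft.rfft(spec))
tau_hat = lags[np.argmax(power[1:]) + 1]        # skip the DC bin
z_hat = c * tau_hat / (2 * np.sin(theta))
print(f"true depth {z_true:.1f} m, estimated {z_hat:.1f} m")
```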
21

Boying, Lu, Zhang Jun, Nie Shuhui, and Huang Xinjian. "AUTOMATIC DEPENDENT SURVEILLANCE (ADS) SYSTEM RESEARCH AND DEVELOPMENT." International Foundation for Telemetering, 2002. http://hdl.handle.net/10150/607495.

Abstract:
International Telemetering Conference Proceedings / October 21, 2002 / Town & Country Hotel and Conference Center, San Diego, California
This paper presents the basic concept, construction principle and implementation work for the Automatic Dependent Surveillance (ADS) system. As part of the ADS system, particular attention is given to the PC-based ADS message processing system. Furthermore, the paper describes the status of ADS trials and points out that ADS implementation will bring substantial economic and social benefits.
22

李友榮 and Yau-wing Lee. "Modelling multivariate survival data using semiparametric models." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B4257528X.

23

Long, Yongxian, and 龙泳先. "Semiparametric analysis of interval censored survival data." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2010. http://hub.hku.hk/bib/B45541152.

24

Lau, Anthony Kwok. "A digital oscilloscope and spectrum analyzer for analysis of primate vocalizations : master's research project report." Scholarly Commons, 1989. https://scholarlycommons.pacific.edu/uop_etds/2177.

Abstract:
The major objective of this report is to present information regarding the design, construction, and testing of the Digital Oscilloscope Peripheral, which allows the IBM Personal Computer (IBM PC) to be used as both a digital oscilloscope and a spectrum analyzer. The design and development of both hardware and software are described briefly; the test results, however, are analyzed and discussed in detail. All documents, including the circuit diagrams, program flowcharts and listings, and the user manual, are provided in the appendices for reference. Several products are referred to in this report: IBM, XT, AT, and PS/2 are registered trademarks of International Business Machines Corporation; MS-DOS is a registered trademark of Microsoft Corporation; and Turbo Basic is a registered trademark of Borland International, Inc.
25

Zhu, Xinjie, and 朱信杰. "START : a parallel signal track analytical research tool for flexible and efficient analysis of genomic data." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2015. http://hdl.handle.net/10722/211136.

Abstract:
The Signal Track Analytical Research Tool (START) is a parallel system for analyzing large-scale genomic data. Currently, genomic data analyses are usually performed using custom scripts developed by individual research groups and/or through the integrated use of multiple existing tools (such as BEDTools and Galaxy). The goals of START are 1) to provide a single tool that supports a wide spectrum of genomic data analyses that are commonly done by analysts, and 2) to greatly simplify these analysis tasks by means of a simple declarative language (STQL) with which users only need to specify what they want to do, rather than the detailed computational steps of how the analysis task should be performed. START consists of four major components: 1) A declarative language called Signal Track Query Language (STQL), a SQL-like language specifically designed to suit the needs of analyzing genomic signal tracks. 2) A STQL processing system built on top of a large-scale distributed architecture, based on Hadoop distributed storage and the MapReduce Big Data processing framework, which processes each user query using multiple machines in parallel. 3) A simple and user-friendly web site that helps users construct and execute queries, upload and download compressed data files in various formats, manage stored data, queries and analysis results, and share queries with other users. It also provides a complete help system, a detailed specification of STQL, and a large number of sample queries for users to learn STQL and try START easily. Private files and queries are not accessible by other users. 4) A repository of public data popularly used for large-scale genomic data analysis, including data from ENCODE and Roadmap Epigenomics, that users can use in their analyses.
26

Marks, Steven Adam. "Nurses' attitudes toward computer use for point-of-care charting." CSUSB ScholarWorks, 2001. https://scholarworks.lib.csusb.edu/etd-project/2006.

27

Boardman, Anelda Philine. "Assessment of genome visualization tools relevant to HIV genome research: development of a genome browser prototype." Thesis, University of the Western Cape, 2004. http://etd.uwc.ac.za/index.php?module=etd&action=viewtitle&id=gen8Srv25Nme4_3632_1185446929.

Abstract:
Over the past two decades of HIV research, effective vaccine candidates have been elusive. Traditionally, viral research has been characterized by a gene-by-gene approach, but in the light of the availability of complete genome sequences and the tractable size of the HIV genome, a genomic approach may improve insight into the biology and epidemiology of this virus. A genomic approach to finding HIV vaccine candidates can be facilitated by the use of genome sequence visualization. Genome browsers have been used extensively by various groups to shed light on the biology and evolution of several organisms including human, mouse, rat, Drosophila and C. elegans. Application of a genome browser to HIV genomes and related annotations can yield insight into the forces that drive evolution, identify highly conserved regions as well as regions that yield a strong immune response in patients, and track mutations that appear over the course of infection. Access to graphical representations of such information is bound to support the search for effective HIV vaccine candidates. This study aimed to answer the question of whether a tool or application exists that can be modified to serve as a platform for the development of an HIV visualization application, and to assess the viability of such an implementation. Existing applications can only be assessed for their suitability as a basis for development of an HIV genome browser once a well-defined set of assessment criteria has been compiled.

28

Mazzocco, Thomas. "Toward a novel predictive analysis framework for new-generation clinical decision support systems." Thesis, University of Stirling, 2014. http://hdl.handle.net/1893/21684.

Abstract:
The idea of developing automated tools able to deal with the complexity of clinical information processing dates back to the late 1960s: since then, there has been scope for improving medical care due to the rapid growth of medical knowledge, and a need to explore new ways of delivering it due to the shortage of physicians. Clinical decision support systems (CDSS) are able to aid in the acquisition of patient data and to suggest appropriate decisions on the basis of the data thus acquired. Many improvements are envisaged from the adoption of such systems, including: reduction of costs through faster diagnosis, reduction of unnecessary examinations, reduction of the risk of adverse events and medication errors, an increase in the time available for direct patient care, improved medication and examination prescriptions, improved patient satisfaction, and better compliance with gold-standard, up-to-date clinical pathways and guidelines. Logistic regression is a widely used algorithm which frequently appears in the medical literature for building clinical decision support systems; however, published studies frequently have not followed commonly recommended procedures for using logistic regression, and substantial shortcomings in the reporting of logistic regression results have been noted. The published literature has often accepted conclusions from studies which have not addressed the appropriateness and accuracy of the statistical analyses and other methodological issues, leading to design flaws in those models and to possible inconsistencies in the novel clinical knowledge based on such results. The main objective of this interdisciplinary work is to design a sound framework for the development of clinical decision support systems. We propose a framework that supports the proper development of such systems, and in particular the underlying predictive models, identifying best practices for each stage of the model's development. The framework is composed of a number of successive stages: 1) dataset preparation ensures that appropriate variables are presented to the model in a consistent format; 2) the model construction stage builds the actual regression (or logistic regression) model, determining its coefficients and selecting statistically significant variables; this phase is generally preceded by a pre-modelling stage during which model functional forms are hypothesized based on a priori knowledge; 3) the model validation stage investigates whether the model could suffer from overfitting, i.e., good accuracy on training data but significantly lower accuracy on unseen data; 4) the evaluation stage gives a measure of the predictive power of the model, making use of the ROC curve, which makes it possible to evaluate the predictive power of the model without any assumptions about error costs, and possibly R² for regressions; 5) misclassification analysis can suggest useful insights into where the model could be unreliable; and 6) the implementation stage. The proposed framework has been applied to three applications in different domains, with a view to improving on previous research studies. The first model predicts mortality within 28 days for patients suffering from acute alcoholic hepatitis. The aim of this application is to build a new predictive model that can be used in clinical practice to identify patients at greatest risk of mortality within 28 days, as they may benefit from aggressive intervention, and to monitor their progress while in hospital. A comparison with state-of-the-art tools shows improved predictive power, demonstrating how appropriate variable inclusion can result in overall better model accuracy, which increased by 25% following an appropriate variable selection process. The second predictive model is designed to aid the diagnosis of dementia, as clinicians often experience difficulties in diagnosing dementia due to the intrinsic complexity of the process and the lack of comprehensive diagnostic tools. The aim of this application is to improve on the performance of a recent application of Bayesian belief networks using an alternative approach based on logistic regression. The approach based on statistical variable selection outperformed the model which used variables selected by domain experts in previous studies, exceeding the considered benchmarks by 15%. The third model predicts the probability of experiencing a given symptom, among common side-effects, in patients receiving chemotherapy. The newly developed model includes a pre-modelling stage (based on previous research studies) and a subsequent regression. The accuracy of the results (computed on a daily basis for each cycle of therapy) shows that the newly proposed approach increased predictive power by 19% compared to the previously developed model; this was obtained by appropriate use of available a priori knowledge to pre-model the functional forms. As the proposed applications show, different aspects of CDSS development are open to substantial improvement: applying the proposed framework in different domains leads to more accurate models than the existing state-of-the-art proposals. The developed framework can help researchers identify and overcome possible pitfalls in their ongoing research by providing best practices for each step of the development process. An impact on the development of future clinical decision support systems is envisaged: the use of an appropriate procedure in model development will produce more reliable and accurate systems, and will have a positive impact on the newly produced medical knowledge, which may eventually be included in standard clinical practice.
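Stages 2 to 5 of such a framework map directly onto a few lines of standard tooling. A minimal sketch on synthetic data (illustrative only; the thesis's applications involve real clinical variables and a pre-modelling stage):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for a clinical dataset: 6 prognostic variables, binary
# outcome. A real application would start from the dataset-preparation stage.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
logit = 1.2 * X[:, 0] - 0.8 * X[:, 1] + 0.5 * X[:, 2]
y = rng.random(500) < 1 / (1 + np.exp(-logit))

# Stages 2-3: fit on training data, then check for overfitting on held-out data.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Stage 4: ROC AUC requires no assumption about misclassification costs.
auc_train = roc_auc_score(y_tr, model.predict_proba(X_tr)[:, 1])
auc_test = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"train AUC {auc_train:.3f} vs test AUC {auc_test:.3f}")

# Stage 5: misclassification analysis -- count high-confidence errors.
p_te = model.predict_proba(X_te)[:, 1]
fn = (y_te == 1) & (p_te < 0.2)
print("actual positives given under 20% predicted risk:", int(fn.sum()))
```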
29

Johnson, Charles Alan 1957. "A CONTROL SYSTEM FOR THE APPLICATION OF SCANNED, FOCUSSED ULTRASOUND IN HYPERTHERMIA CANCER THERAPY." Thesis, The University of Arizona, 1987. http://hdl.handle.net/10150/276438.

30

Coursey, William C. "A research experiment to evaluate the acceptability of microcomputer software documentation." Thesis, Virginia Polytechnic Institute and State University, 1985. http://hdl.handle.net/10919/63969.

Abstract:

Microcomputer software users require varying degrees of instructional assistance to effectively operate the software they purchase. Chapter I recognizes that this demand for quality documentation places a burden upon software suppliers to expend additional time, energy, and money to satisfy users. This research recommends a set of procedural guidelines for microcomputer software suppliers to follow as a means of supplementing basic documentation techniques.

Literature regarding microcomputer software documentation is an item of increasing demand in today's technical marketplace. The literature review, Chapter II, reveals that the most significant improvement in the documentation process has been the development of two specific reference standards, physical layout and instructional components.

Chapter III describes the research experiment used to obtain information regarding the documentation associated with two current microcomputer word-processing programs. Four university students provided background information regarding the personal characteristics and attributes associated with a given user population.

The research experiment evolved from a comprehensive documentation review to a structured data collection process. Chapter IV indicates that the discrepancy between actual and expected research gains justifies improving data collection techniques and recommending specific procedural guidelines for future documentation reviews.

The final chapter provides a detailed analysis of the research experiment and conclusions related to the documentation's effectiveness. Additionally, it proposes procedural guidelines designed to improve the experiment's data collection techniques. These guidelines can help future documentation writers more accurately gauge user capabilities and limitations.


31

Thomson, Amy. "Biased processing of sleep-related information in children 'at risk' of insomnia : a pilot study & clinical research portfolio." Thesis, University of Glasgow, 2009. http://theses.gla.ac.uk/1256/.

Abstract:
This study piloted methodology applied in the fields of Major Depressive Disorder, Bipolar Disorder, Panic Disorder and Alcoholism to investigate attentional bias towards sleep-related stimuli as a factor in the predisposition to insomnia. Following a 'tired-state induction', two groups of participants – 'at risk' children of adults with insomnia, and control children of good sleepers – completed a sleep-related Emotional Stroop task. Subsequently, they were asked to comment on the content of the Stroop words; whether or not the children reported sleep-related content was recorded. There was no evidence of attentional bias towards sleep-related stimuli in 'at risk' children relative to controls. However, there was a trend in the children's reports of the words' content: a greater percentage of the 'at risk' children reported sleep-related content than controls. These results do not provide conclusive support for the role of attentional bias in the predisposition to insomnia. The results are discussed in the context of the methodological limitations of the pilot study, and suggestions for future modifications are put forward.
32

Song, Lihong. "Medical concept embedding with ontological representations." HKBU Institutional Repository, 2019. https://repository.hkbu.edu.hk/etd_oa/703.

Abstract:
Learning representations of medical concepts from Electronic Health Records (EHRs) has been shown to be effective for predictive analytics in healthcare. The learned representations are expected to preserve the semantic meanings of different medical concepts, and can be treated as features that benefit a variety of applications. Medical ontologies have also been integrated with EHR data to further enhance the accuracy of various prediction tasks in healthcare. Most existing works assume that medical concepts under the same ontological category should share similar representations, which however does not always hold. In particular, the categorizations in categorical medical ontologies were established with various factors in mind; medical concepts even under the same ontological category may not follow similar occurrence patterns in the EHR data, leading to conflicting objectives for the representation learning. In addition, these existing works merely utilize the categorical ontologies, although ontologies containing multiple types of relations are also available; studies rarely make use of the diverse types of medical ontologies. In this thesis research, we propose three novel representation learning models for integrating EHR data and medical ontologies for predictive analytics. To improve interpretability and alleviate the conflicting-objective issue between the EHR data and medical ontologies, we propose techniques to learn medical concept embeddings with multiple ontological representations. To reduce the reliance on labeled data, we treat the co-occurrence statistics of clinical events as additional training signals, which help us learn good representations even with little labeled data. To leverage various forms of domain knowledge, we also consider multiple medical ontologies (CCS, ATC and SNOMED-CT) and propose corresponding attention mechanisms so as to take the best advantage of the medical ontologies with better interpretability. Our proposed models achieve final medical concept representations which align better with the EHR data. We conduct extensive experiments, and our empirical results demonstrate the effectiveness of the proposed methods. Keywords: Bio/Medicine, Healthcare-AI, Electronic Health Record, Representation Learning, Machine Learning Applications
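To make the idea of co-occurrence statistics as a training signal concrete, here is a minimal sketch: toy visits yield a code-by-code co-occurrence matrix, and a GloVe-style factorization learns embeddings whose dot products track log co-occurrence. The codes and the objective are illustrative assumptions; the thesis's models add ontological representations and attention on top of signals like these.

```python
import numpy as np

# Toy "visits": each is a set of medical codes recorded together.
visits = [
    {"ICD:E11", "ICD:I10", "ATC:A10"},   # diabetes, hypertension, antidiabetic
    {"ICD:E11", "ATC:A10"},
    {"ICD:I10", "ATC:C09"},              # hypertension, ACE inhibitor
    {"ICD:J45", "ATC:R03"},              # asthma, inhalant
]
codes = sorted(set().union(*visits))
idx = {c: i for i, c in enumerate(codes)}

# Co-occurrence counts within a visit serve as the training signal.
C = np.zeros((len(codes), len(codes)))
for v in visits:
    for a in v:
        for b in v:
            if a != b:
                C[idx[a], idx[b]] += 1

# GloVe-style factorization: embeddings whose dot products track log counts.
rng = np.random.default_rng(0)
d = 8
E = rng.normal(0, 0.1, (len(codes), d))
pairs = np.argwhere(C > 0)
for _ in range(500):                     # plain SGD on squared error
    for i, j in pairs:
        err = E[i] @ E[j] - np.log(C[i, j] + 1)
        grad_i = err * E[j]
        E[j] -= 0.05 * err * E[i]
        E[i] -= 0.05 * grad_i
print({c: np.round(E[idx[c]][:3], 2) for c in ["ICD:E11", "ATC:A10"]})
```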
33

Wiréhn, Ann-Britt. "A Data-Rich World : Population‐based registers in healthcare research." Doctoral thesis, Linköpings universitet, Hälsa och samhälle, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-10207.

Abstract:
Advances in, and the integration of, information and communication technologies in healthcare systems offer new opportunities to improve public health worldwide. In Sweden, there are already unique possibilities for register-based epidemiological research because of a long tradition of centralized data collection in population-based registers and the possibility of linking them. The growing efficiency of automated digital storage provides growing volumes of archived data that further increase the potential of analyses. The purpose of this thesis can be divided into two parallel themes: illustrations and discussions of the use and usefulness of population-based registers on the one hand, and specific research questions in epidemiology and healthcare research on the other. The research questions are addressed in separate papers. From the Swedish Cancer Registry, 25 years of incidence data on testicular cancer were extracted for a large cohort. Record linkage to survey data on serum cholesterol showed a highly significant positive association, suggesting that elevated serum cholesterol concentration is a risk factor for testicular cancer. Since the finding is the first of its kind, and because of wide confidence intervals, further studies are needed to confirm the association. Östergötland County Council's administrative database (the Care Data Warehouse in Östergötland, CDWÖ) provided data for prevalence estimations of four common chronic diseases. The prevalence rate agreed very well with previous estimates for diabetes and fairly well with those for asthma. For hypertension and chronic obstructive pulmonary disease, the observed rates were lower than previous prevalence estimates. Data on several consecutive years covering all healthcare levels are needed to achieve valid prevalence estimates. CDWÖ data were also used to analyse the impact of diabetes on the prevalence of ischemic heart disease. Women had higher diabetes/non-diabetes prevalence rate ratios across all ages; the relative gender difference remained up to the age of 65 years and thereafter decreased considerably. The age-specific direct healthcare cost of diabetes was explored using data from the CDWÖ, the county council's Cost Per Patient database and the Swedish Prescribed Drug Register. The cost per patient and the relative magnitude of different cost components varied considerably by age, which is important to consider in the future planning of diabetes management. The Cancer Registry was established mainly as a basis for epidemiological surveillance and research, exemplified in this thesis by the study on testicular cancer. In contrast, the newly established and planned healthcare databases in different Swedish counties are mainly for managerial purposes. As this thesis shows, these new databases may also be used to address problems in epidemiology and healthcare research.
34

Chava, Nalini. "Administrative reporting for a hospital document scanning system." Virtual Press, 1996. http://liblink.bsu.edu/uhtbin/catkey/1014839.

Full text
Abstract:
This thesis examines a manual hospital document retrieval system and an electronic document scanning system. From this examination, requirements are derived for an administrative reporting component that provides better service and reliability than the previous systems. To ensure that the requirements can be met, they are implemented in a working system named the Administrative Reporting for the Hospital Document Scanning System (ARHDSS).
Department of Computer Science
APA, Harvard, Vancouver, ISO, and other styles
35

Hrydziuszko, Olga. "Development of data processing methods for high resolution mass spectrometry-based metabolomics with an application to human liver transplantation." Thesis, University of Birmingham, 2012. http://etheses.bham.ac.uk//id/eprint/3700/.

Full text
Abstract:
Direct Infusion (DI) Fourier transform ion cyclotron resonance (FT-ICR) mass spectrometry (MS) is becoming a popular measurement platform in metabolomics. This thesis aims to advance the data processing and analysis pipeline of DI FT-ICR based metabolomics and to broaden its applicability to clinical research. To meet the first objective, the issue of missing data in the final data matrix, which contains the metabolite relative abundances measured for each sample analysed, is addressed. The nature of these missing data and their effect on subsequent analyses are investigated. Eight common and/or easily accessible missing data estimation algorithms are examined, and a three-stage approach is proposed to aid the identification of the optimal one. Finally, a novel survival analysis approach is introduced and assessed as an alternative way of treating missing data prior to univariate analysis. To address the second objective, DI FT-ICR MS based metabolomics is assessed for its applicability to research investigating metabolomic changes occurring in liver grafts throughout human orthotopic liver transplantation (OLT). The feasibility of this approach in a clinical setting is validated, and its potential to provide a wealth of novel metabolic information associated with OLT is demonstrated.
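The evaluation pattern behind comparing missing-data estimation algorithms can be sketched as follows: mask a known fraction of entries in a complete intensity matrix, impute, and score the error on the masked entries. This toy Python example uses two scikit-learn imputers as stand-ins; the thesis compared eight algorithms, and the simulated matrix and missingness rate here are invented for illustration.

    import numpy as np
    from sklearn.impute import SimpleImputer, KNNImputer

    rng = np.random.default_rng(1)
    # toy peak-intensity matrix: 40 samples x 200 metabolite variables
    X_true = rng.lognormal(mean=2.0, sigma=1.0, size=(40, 200))

    # hide a random 20% of entries to mimic missing peaks
    mask = rng.random(X_true.shape) < 0.2
    X_obs = X_true.copy()
    X_obs[mask] = np.nan

    for name, imputer in [("mean", SimpleImputer(strategy="mean")),
                          ("median", SimpleImputer(strategy="median")),
                          ("knn", KNNImputer(n_neighbors=5))]:
        X_hat = imputer.fit_transform(X_obs)
        rmse = np.sqrt(np.mean((X_hat[mask] - X_true[mask]) ** 2))
        print(f"{name}: RMSE on masked entries = {rmse:.2f}")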
APA, Harvard, Vancouver, ISO, and other styles
36

Robinson, Jeffrey Brett, University of Western Sydney, College of Science, Technology and Environment, School of Environment and Agriculture. "Understanding and applying decision support systems in Australian farming systems research." THESIS_CSTE_EAG_Robinson_J.xml, 2005. http://handle.uws.edu.au:8081/1959.7/642.

Full text
Abstract:
Decision support systems (DSS) are usually based on computerised models of biophysical and economic systems. Despite early expectations that such models would inform and improve management, adoption rates have been low, and implementation of DSS is now "critical". The reasons for this are unclear, and the aim of this study is to learn to better design, develop and apply DSS in farming systems research (FSR). Previous studies have explored the merits of quantitative tools including DSS and suggested changes leading to greater impact. In Australia, the changes advocated have been: simple, flexible, low-cost economic tools; emphasis on farmer learning through soft systems approaches; understanding the socio-cultural contexts of using and developing DSS; farmer and researcher co-learning from simulation modelling; and increasing user participation in DSS design and implementation. Twenty-four simple criteria were distilled from these studies, and their usefulness in guiding the development and application of DSS was assessed in six FSR case studies. The case studies were also used to better understand farmer learning through models of decision making and learning. To make DSS useful complements to farmers' existing decision-making repertoires, they should be based on: (i) a decision-oriented development process, (ii) identifying a motivated and committed audience, (iii) a thorough understanding of the decision-maker's context, (iv) using learning as the yardstick of success, and (v) understanding the contrasts, contradictions and conflicts between researcher and farmer decision cultures.
Doctor of Philosophy (PhD)
APA, Harvard, Vancouver, ISO, and other styles
37

O'Riordan, Mary Ann. "Separability of Effects in the Analysis of Complex Observational Data." Case Western Reserve University School of Graduate Studies / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=case1365080790.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Todes, M. A. "Evaluation parameters for computer aided design of irrigation systems." Doctoral thesis, University of Cape Town, 1987. http://hdl.handle.net/11427/21140.

Full text
Abstract:
The research has entailed the formulation and coding of computer models for the design of pressurized irrigation systems. Particular emphasis has been given to the provision of routines for evaluating the expected performance of a designed system. Two separate sets of models have been developed, one for the block or in-field system and one for the mainline network. The thesis is presented in three sections as follows: * Basic theory, in which the general background to the research is covered. * The models, which includes detailed descriptions of both the design models and the computer programs. * Applications, in which several test cases of both sets of models are reported.
APA, Harvard, Vancouver, ISO, and other styles
39

Chau, Ka-ki, and 周嘉琪. "Informative drop-out models for longitudinal binary data." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2003. http://hub.hku.hk/bib/B2962714X.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Wong, Pik-wah Angela, and 黃碧華. "General practitioners' use of computers: a Hong Kong study." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2002. http://hub.hku.hk/bib/B25101225.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

賴旭佑 and Yuk-yau Timothy Lai. "A follow-up study on the levels of and attitudes towards computerisation among doctors in Hong Kong." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2003. http://hub.hku.hk/bib/B31971088.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Offei, Felix. "Denoising Tandem Mass Spectrometry Data." Digital Commons @ East Tennessee State University, 2017. https://dc.etsu.edu/etd/3218.

Full text
Abstract:
Protein identification using tandem mass spectrometry (MS/MS) has proven to be an effective way to identify proteins in a biological sample. An observed spectrum is constructed from the data produced by the tandem mass spectrometer, and a protein can be identified if the observed spectrum aligns with the theoretical spectrum. However, data generated by the tandem mass spectrometer are affected by errors, making protein identification challenging in the field of proteomics. These errors include miscalibration of the instrument, instrument distortion, and noise. In this thesis, we present a pre-processing method which focuses on the removal of noisy data with the aim of aiding better identification of proteins. We employ binning to reduce the number of noise peaks in the data without sacrificing the alignment of the observed spectrum with the theoretical spectrum. In some cases, the alignment of the two spectra improved.
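The binning idea can be illustrated in a few lines: partition the m/z axis into fixed-width windows and keep only the most intense peak(s) in each window, on the assumption that true fragment peaks dominate local noise. This is a sketch of the general technique with invented parameters, not the thesis's exact procedure or settings.

    import numpy as np

    def denoise_by_binning(mz, intensity, bin_width=10.0, peaks_per_bin=1):
        # Keep only the top `peaks_per_bin` most intense peaks per m/z bin.
        bins = np.floor(mz / bin_width).astype(int)
        keep = []
        for b in np.unique(bins):
            idx = np.where(bins == b)[0]
            keep.extend(idx[np.argsort(intensity[idx])[-peaks_per_bin:]])
        keep = np.sort(keep)
        return mz[keep], intensity[keep]

    # toy spectrum: 300 random peaks between m/z 100 and 1500
    rng = np.random.default_rng(2)
    mz = np.sort(rng.uniform(100, 1500, size=300))
    intensity = rng.exponential(5.0, size=300)
    mz_clean, intensity_clean = denoise_by_binning(mz, intensity)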
APA, Harvard, Vancouver, ISO, and other styles
43

Kirsch, Matthew Robert. "Signal Processing Algorithms for Analysis of Categorical and Numerical Time Series: Application to Sleep Study Data." Case Western Reserve University School of Graduate Studies / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=case1278606480.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Decker, Jennie Jo. "Display spatial luminance nonuniformities: effects on operator performance and perception." Diss., Virginia Polytechnic Institute and State University, 1989. http://hdl.handle.net/10919/54510.

Full text
Abstract:
This dissertation examined the effects of display spatial luminance nonuniformities on operator performance and perception. The objectives of this research were to develop definitions of nonuniformity, develop accurate measurement techniques, determine acceptable levels of nonuniformities, and develop a predictive model based on user performance data. Nonuniformities were described in terms of spatial frequency, amplitude, display luminance, gradient shape, and number of dimensions. Performance measures included a visual random search task and a subjective measure to determine users' perceptions of the nonuniformities. Results showed that users were able to perform the search task in the presence of appreciable nonuniformities. It was concluded that current published recommendations for acceptable levels of nonuniformities are adequately specified. Results from the subjective task showed that users were sensitive to the presence of nonuniformities in terms of their perceptions of uniformity. Specifically, results showed that as spatial frequency increased, perceived uniformity ratings increased; that is, users rated nonuniformities to be less noticeable. As amplitude and display luminance increased, the users' ratings of perceived uniformity decreased; that is, they rated the display as being farther from a uniform field. There were no differences in impressions between a sine and triangle gradient shape, while a square gradient shape resulted in lower ratings of perceived uniformity. Few differences were attributed to the dimension (1-D versus 2-D) of the nonuniformity, and results were inconclusive because dimension was confounded with display luminance. Nonuniformities were analyzed using Fourier techniques to determine the amplitudes of the coefficients for each nonuniformity pattern. These physical descriptors were used to develop models to predict users' perceptions of the nonuniformities. A few models yielded good fits to the subjective data. It was concluded that the method for describing and measuring nonuniformities was successful. Also, the results of this research were in strong concurrence with previous research in the area of spatial vision.
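The Fourier-based descriptors mentioned above can be sketched for a one-dimensional luminance profile: remove the mean, take the FFT, and use the coefficient amplitudes as predictors. The sinusoidal profile and its parameters below are invented for illustration; the dissertation's actual stimuli and model fits are not reproduced here.

    import numpy as np

    # toy 1-D luminance profile: mean luminance plus a sinusoidal nonuniformity
    n = 512                                               # samples across the display
    x = np.arange(n)
    profile = 80.0 + 8.0 * np.sin(2 * np.pi * 3 * x / n)  # 3 cycles per display width

    # amplitudes of the Fourier coefficients serve as physical descriptors
    coeffs = np.fft.rfft(profile - profile.mean())
    amplitudes = 2 * np.abs(coeffs) / n
    dominant = int(np.argmax(amplitudes))
    print(dominant, round(amplitudes[dominant], 2))       # -> 3, 8.0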
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
45

Xia, Tian. "Research on virtualisation technlogy for real-time reconfigurable systems." Thesis, Rennes, INSA, 2016. http://www.theses.fr/2016ISAR0009/document.

Full text
Abstract:
This thesis presents an original hypervisor-style micro-kernel, called Ker-ONE, that manages virtualization for embedded systems on SoC platforms and provides an environment for real-time virtual machines. We simplified the micro-kernel architecture by keeping only the essential features required for virtualization, greatly reducing the complexity of the kernel design. On this basis, we implemented a mechanism able to manage reconfigurable resources in a system supporting virtual machines. Reconfigurable hardware accelerators are mapped as ordinary devices in each machine. Through dedicated memory management, we automatically detect resource requests and allow dynamic allocation of FPGA resources. Through various experiments and evaluations on the Zynq-7000 platform, which combines ARM and FPGA resources, we showed that Ker-ONE degrades execution-time performance very little; the overheads incurred can generally be ignored in real applications. We also studied real-time schedulability in virtual machines, and the results show that the deadlines of real-time tasks are guaranteed to be met. We also demonstrated that the proposed kernel can allocate hardware accelerators very quickly.
This thesis describes an original micro-kernel that manages virtualization and provides an environment for real-time virtual machines. We have simplified the micro-kernel architecture by keeping only the critical features required for virtualization, massively reducing the kernel design complexity. Based on this micro-kernel, we have introduced a framework capable of managing dynamic partial reconfiguration (DPR) resources in a virtual machine system. DPR accelerators are mapped as ordinary devices in each VM. Through dedicated memory management, our framework automatically detects requests for DPR resources and allocates them dynamically. In various experiments and evaluations on the Zynq-7000 platform, we have shown that Ker-ONE incurs very low virtualization overheads, which can generally be ignored in real applications. We have also studied real-time schedulability in virtual machines; the results show that RTOS tasks are guaranteed to be scheduled while meeting their intra-VM timing constraints. Finally, we have demonstrated that the proposed framework is capable of virtual machine DPR allocation with low overhead.
APA, Harvard, Vancouver, ISO, and other styles
46

Ling, Meng-Chun. "Senior health care system." CSUSB ScholarWorks, 2005. https://scholarworks.lib.csusb.edu/etd-project/2785.

Full text
Abstract:
Senior Health Care System (SHCS) is created for users to enter participants' conditions and store the information in a central database. When users are ready for quarterly assessments, the system generates a simple summary that can be reviewed, modified, and saved as part of the summary assessments, which are required by Federal and California law.
APA, Harvard, Vancouver, ISO, and other styles
47

Chiasson, Mike. "The interaction between context and technology during information systems development (ISD) : action research investigations in two health settings." Thesis, 1996. http://hdl.handle.net/2429/6273.

Full text
Abstract:
Software development and implementation failure is perceived by developers and users as a serious problem. Of every six new software development projects, two are abandoned, the average project lasts 50% longer than expected, and 75% of large systems are "operating failures" that are rejected or perform poorly. Design failure contributes to the productivity paradox, in which increased investment in information technology (IT) has not correlated with improvements in productivity. Many IS researchers have called for further research examining the interaction between technology and context during information system development (ISD); this study is motivated by those calls. Marrying information systems and health research provides a second motivation: the deployment and diffusion of IT can contribute to the effective utilization of health resources, and the thesis explores the effect of information systems on disease prevention while providing an opportunity to develop and diffuse IT tools that promote health. To address these two motivations, two case studies of ISD in health settings are described. The first involved the initiation and development of an electronic patient record in two outpatient clinics specializing in heart disease prevention and rehabilitation (SoftHeart). The second involved the development of a Windows-based multimedia application that assists the planning of breast cancer educational and policy programs in communities. The first case study covered four years (Summer 1992 to Spring 1996) and the second covered one year (Spring 1995 to Spring 1996). The purpose of the thesis is to generate hypotheses for future research in ISD. Both studies employed an "action research" approach in which the researcher was directly involved with software design and programming. Data from interviews, meeting minutes, field notes, design and programming notes, and other documentation were collected from both studies and triangulated to provide valid interpretations. Important and illustrative technology-context events are extracted from the cases to uncover processes between technology and context during stages of development. These processes are compared with four theories linking technology and context: two unidirectional (technological imperative and organizational imperative) and two bi-directional (emergent perspective and social technology). The processes are then combined to reach tentative conclusions about the ISD process. Key findings indicate an interplay between a small number of unidirectional processes and a large number of bi-directional ones. Overall, the emergent perspective described, or participated in describing, a majority of the processes, given the developer's perspective and the extraction and interpretation of these key events. In both cases, the ISD trajectory was best described as emergent. The result of within-case and cross-case analysis is a model integrating the four technology-context theories depending on stakeholder agreement and the adaptability of technology during development and use. Dynamics and change in task, technology, and stakeholder configurations are explained by the deliberate or accidental interaction of new and old stakeholders, technology, ideas, agreements and/or tasks over time. Implications for research and practice are discussed.
APA, Harvard, Vancouver, ISO, and other styles
48

Kriske, Jeffery Edward Jr. "A scalable approach to processing adaptive optics optical coherence tomography data from multiple sensors using multiple graphics processing units." Thesis, 2014. http://hdl.handle.net/1805/6458.

Full text
Abstract:
Indiana University-Purdue University Indianapolis (IUPUI)
Adaptive optics-optical coherence tomography (AO-OCT) is a non-invasive method of imaging the human retina in vivo. It can be used to visualize microscopic structures, making it incredibly useful for the early detection and diagnosis of retinal disease. The research group at Indiana University has a novel multi-camera AO-OCT system capable of 1 MHz acquisition rates. Until now, no method has existed to process data from such a system quickly and accurately enough on a CPU or a single GPU, nor one that can scale automatically and efficiently to multiple GPUs. This has been a barrier to using an MHz AO-OCT system in a clinical environment. A novel approach to processing AO-OCT data from this unique multi-camera optical system is tested on multiple graphics processing units (GPUs) in parallel with one-, two-, and four-camera combinations. The design and results demonstrate a scalable, reusable, extensible method of computing AO-OCT output. The approach can either achieve real-time results with an AO-OCT system capable of 1 MHz acquisition rates or be scaled to a higher-accuracy mode using a fast Fourier transform of 16,384 complex values.
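The scaling pattern described here, one worker per camera stream, each applying a 16,384-point FFT along the spectral axis, can be mimicked on a CPU with a process pool. This is only a structural sketch with invented array sizes; the actual system dispatches CUDA kernels to multiple GPUs rather than Python processes.

    import numpy as np
    from concurrent.futures import ProcessPoolExecutor

    FFT_LEN = 16_384  # the high-accuracy mode mentioned in the abstract

    def process_camera(batch):
        # Stand-in for the per-camera GPU pipeline: FFT each A-scan along
        # the spectral axis and keep magnitudes (no dispersion correction).
        return np.abs(np.fft.fft(batch, n=FFT_LEN, axis=-1))

    if __name__ == "__main__":
        rng = np.random.default_rng(3)
        cameras = [rng.normal(size=(64, 2048)) for _ in range(4)]  # 4 camera batches
        with ProcessPoolExecutor(max_workers=4) as pool:
            volumes = list(pool.map(process_camera, cameras))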
APA, Harvard, Vancouver, ISO, and other styles
49

"The applications of image processing in biology and relevant data analysis." 2007. http://library.cuhk.edu.hk/record=b5893361.

Full text
Abstract:
Wang, Zexi.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2007.
Includes bibliographical references (leaves 63-64).
Contents: Abstract; Acknowledgement; 0. Introduction; 1. The Design of the Experiments (1.1 Flies and the Devices; 1.2 Parameter Settings and Interested Information); 2. Video Processing (2.1 Videos, Computer Vision and Image Processing; 2.2 Details in Video Processing); 3. Data Analysis (3.1 Background; 3.2 Outline of Data Analysis in Our Project); 4. Effect of the Medicine (4.1 Hypothesis Testing; 4.2 Two-sample t Test); 5. Significance of the Two Factors (5.1 Background of ANOVA; 5.2 The Model of ANOVA; 5.3 Two-way ANOVA in Our Data Analysis); 6. Regression Model (6.1 Background of Regression Analysis; 6.2 Polynomial Regression Models, covering R² and adjusted R²; 6.3 Model Verification; 6.4 A Simpler Model as the Other Choice; 6.5 Conclusions); 7. Further Studies; Bibliography.
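The hypothesis-testing chapters outlined above (a two-sample t test for the medicine effect, two-way ANOVA for the two factors) follow a standard pattern that can be sketched with SciPy; the fly-activity numbers below are invented for illustration.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    # toy activity measurements for control vs. treated flies
    control = rng.normal(loc=10.0, scale=2.0, size=30)
    treated = rng.normal(loc=8.5, scale=2.0, size=30)

    # two-sample t test for an effect of the medicine (cf. Chapter 4)
    t_stat, p_value = stats.ttest_ind(control, treated)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")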
APA, Harvard, Vancouver, ISO, and other styles
50

Wang, Chenkun. "Flexible models of time-varying exposures." Thesis, 2015. http://hdl.handle.net/1805/7938.

Full text
Abstract:
Indiana University-Purdue University Indianapolis (IUPUI)
With the availability of electronic medical records, medication dispensing data offer an unprecedented opportunity for researchers to explore complex relationships among long-term medication use, disease progression and potential side-effects in large patient populations. However, these data also pose challenges to existing statistical models because both medication exposure status and its intensity vary over time. This dissertation focused on flexible models to investigate the association between time-varying exposures and different types of outcomes. First, a penalized functional regression model was developed to estimate the effect of time-varying exposures on multivariate longitudinal outcomes. Second, for survival outcomes, a regression-spline-based model was proposed in the Cox proportional hazards (PH) framework to compare disease risk among different types of time-varying exposures. Finally, a penalized-spline-based Cox PH model with functional interaction terms was developed to estimate the interaction effect between multiple medication classes. Data from a primary care patient cohort are used to illustrate the proposed approaches in determining the association between antidepressant use and various outcomes.
NIH grants, R01 AG019181 and P30 AG10133.
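The time-varying Cox setup described above can be sketched with the lifelines library's CoxTimeVaryingFitter, which fits a Cox model on long-format (start, stop] intervals. The data frame, column names and penalizer below are invented for illustration, and the thesis's penalized spline and functional interaction terms are not reproduced here.

    import pandas as pd
    from lifelines import CoxTimeVaryingFitter

    # long format: one row per (subject, interval); exposure varies over time
    df = pd.DataFrame({
        "id":       [1, 1, 2, 2, 3],
        "start":    [0, 6, 0, 4, 0],
        "stop":     [6, 12, 4, 10, 8],
        "exposure": [0.0, 1.0, 0.0, 0.5, 1.0],  # time-varying medication intensity
        "age":      [65, 65, 70, 70, 58],
        "event":    [0, 1, 0, 0, 1],            # event only in a terminal interval
    })

    ctv = CoxTimeVaryingFitter(penalizer=0.1)   # ridge penalty for a tiny toy sample
    ctv.fit(df, id_col="id", event_col="event", start_col="start", stop_col="stop")
    ctv.print_summary()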
APA, Harvard, Vancouver, ISO, and other styles