
Dissertations / Theses on the topic 'Prediction Accuracy'


Consult the top 50 dissertations / theses for your research on the topic 'Prediction Accuracy.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Gao, Hongliang. "Improving Branch Prediction Accuracy via Effective Source Information and Prediction Algorithms." Doctoral diss., University of Central Florida, 2008. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3286.

Full text
Abstract:
Modern superscalar processors rely on branch predictors to sustain a high instruction fetch throughput. Given the trend of deep pipelines and large instruction windows, a branch misprediction incurs a large performance penalty and results in a significant amount of energy wasted by the instructions along wrong paths. Because of their critical role in high-performance processors, branch predictors have been the subject of extensive research aimed at improving prediction accuracy. Conceptually, a dynamic branch prediction scheme includes three major components: a source, an information processor, and a predictor. Traditional work has mainly focused on the algorithm for the predictor. In this dissertation, besides novel prediction algorithms, we investigate the other components and develop untraditional ways to improve prediction accuracy. First, we propose an adaptive information processing method to dynamically extract the most effective inputs and maximize the correlation to be exploited by the predictor. Second, we propose a new prediction algorithm, which improves the Prediction by Partial Matching (PPM) algorithm by selectively combining multiple partial matches. The PPM algorithm was previously considered optimal and has been used to derive the upper limit of branch prediction accuracy. Our proposed algorithm achieves higher prediction accuracy than PPM and can be implemented within a realistic hardware budget. Third, we identify a new locality between the addresses of producer loads and the outcomes of their consumer branches. We study this address-branch correlation in detail and propose a branch predictor that exploits it for long-latency and hard-to-predict branches, which existing branch predictors fail to predict accurately.
Ph.D.
School of Electrical Engineering and Computer Science
Engineering and Computer Science
Computer Science PhD
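As an aside for readers unfamiliar with PPM, the core idea mentioned in the abstract above -- predict from the longest previously seen history suffix and back off to shorter ones -- can be sketched in a few lines. The history length, table layout and tie-breaking below are illustrative assumptions, not the dissertation's actual predictor.

    from collections import defaultdict

    class PPMBranchPredictor:
        """Toy PPM-style predictor over a global history of branch outcomes (0 = not taken, 1 = taken)."""

        def __init__(self, max_order=8):
            self.max_order = max_order
            self.counts = defaultdict(lambda: [0, 0])  # history suffix -> [not-taken count, taken count]
            self.history = []

        def predict(self):
            # Try the longest matching history suffix first, then back off to shorter ones.
            for order in range(min(self.max_order, len(self.history)), -1, -1):
                key = tuple(self.history[-order:]) if order else ()
                c = self.counts.get(key)
                if c is not None:
                    return int(c[1] >= c[0])
            return 1  # no information yet: default to taken

        def update(self, outcome):
            # Credit the observed outcome to every suffix length, then extend the history.
            for order in range(min(self.max_order, len(self.history)) + 1):
                key = tuple(self.history[-order:]) if order else ()
                self.counts[key][outcome] += 1
            self.history = (self.history + [outcome])[-self.max_order:]

    # Example: a loop branch that is taken three times, then not taken, repeatedly.
    trace = [1, 1, 1, 0] * 50
    predictor, correct = PPMBranchPredictor(), 0
    for outcome in trace:
        correct += predictor.predict() == outcome
        predictor.update(outcome)
    print(f"prediction accuracy: {correct / len(trace):.2f}")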
APA, Harvard, Vancouver, ISO, and other styles
2

Vasudev, R. Sashin, and Ashok Reddy Vanga. "Accuracy of Software Reliability Prediction from Different Approaches." Thesis, Blekinge Tekniska Högskola, Avdelningen för interaktion och systemdesign, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-1298.

Full text
Abstract:
Many models have been proposed for software reliability prediction, but none of them captures a sufficient range of software characteristics. We propose a mixed approach, using both analytical and data-driven models, to assess the accuracy of reliability prediction through a case study. This report follows a qualitative research strategy. Data were collected from a case study conducted at three different companies. Based on the case study, the approaches used by the companies are analysed, together with additional data related to each organization's Software Quality Assurance (SQA) team. Of the three organizations, the first two are working on reliability prediction, while the third is a growing company developing a product with less focus on quality. Data were collected by interviewing an employee of each organization who leads a team and has held a managerial position for at least the last two years.
APA, Harvard, Vancouver, ISO, and other styles
3

Govender, Evandarin. "An intelligent deflection prediction system for machining of flexible components." Thesis, Nottingham Trent University, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.367158.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Ilska, Joanna Jadwiga. "Understanding genomic prediction in chickens." Thesis, University of Edinburgh, 2015. http://hdl.handle.net/1842/15876.

Full text
Abstract:
Genomic prediction (GP) is a novel tool for predicting estimated breeding values (EBVs) using molecular markers. Within the last decade, GP has been widely introduced into routine evaluations of cattle, pig and sheep populations; however, its application in poultry has been somewhat delayed, and studies published to date have been limited in terms of population size and marker densities. This study provides a thorough evaluation of the benefits that GP could bring to routine evaluations of broiler chickens, with particular attention given to the accuracy and bias of Genomic BLUP (GBLUP) predictions. The data used for these evaluations exceed, in numbers of both individuals and marker genotypes, previously published reports, with the studied population consisting of up to 23,500 individuals genotyped for up to 600K SNPs. The evaluation of GBLUP is preceded by an evaluation of the variance components using the traditional restricted maximum likelihood (REML) approach, sourcing information from phenotypic records and pedigree, which provides an up-to-date reference for the estimates of variance components. Chapter 2 tested several models exploring potential sources of genetic variation and revealed the presence of significant maternal genetic and environmental effects affecting several commercial traits. In Chapter 3, a vast dataset containing 1.3M birds spread over 24 generations was used to evaluate changes in the genetic variance of juvenile body weight and hen-housed production over time. The results showed a slow but steady decline of the variance. Chapter 4 provided initial estimates of the accuracy and bias of genomic predictions for several sex-limited and fitness traits, obtained for a moderately sized population of over 5K birds genotyped with the 600K Affymetrix Axiom panel, from which several chips of varying marker densities were extracted. The accuracy of those predictions showed great potential for most traits, with GBLUP performance exceeding that of traditional BLUP. Chapter 5 investigated the effect of marker choice, with two chips used: one created from GWAS hits and a second from evenly spaced markers, both with a constant density of 27K SNPs. The two chips were used to calculate genomic relationship matrices using Linkage Analysis and Linkage Disequilibrium approaches. Markers selected through GWAS performed better in the Linkage Analysis than in the Linkage Disequilibrium approach. The optimum results, however, were found for relationship matrices which regressed the genomic relationships back to the expected pedigree-based relationships, with the best regression coefficient dependent on the chip used. Chapter 6 formed a comprehensive evaluation of the utility of GBLUP in a large broiler population, exceeding 23,500 birds genotyped using the 600K Affymetrix Axiom panel. By splitting the data into variable scenarios of training and testing populations, with several lower-density chips extracted from the full range of genotypes available, the effects of population size and marker density were evaluated. While the latter proved to have little effect once the 20K SNP threshold was exceeded, population size was found to be the major limiting factor for the accuracy of EBV predictions. The discrepancy between the empirical results and theoretical expectations of accuracy based on similar genomic and population parameters showed that previously proposed requirements were underestimated.
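For readers new to GBLUP, the core computation behind such evaluations can be sketched as follows: build a genomic relationship matrix from centred marker genotypes and shrink towards it when predicting unphenotyped individuals. The marker coding, heritability value and data shapes below are assumptions for illustration, not the thesis's pipeline.

    import numpy as np

    def gblup(geno_train, y_train, geno_test, h2=0.3):
        """Toy GBLUP: predict genomic breeding values for test individuals.

        geno_*: (individuals x markers) genotype matrices coded 0/1/2.
        y_train: phenotypes of the training individuals.
        h2: assumed heritability, used to set the shrinkage parameter.
        """
        X = np.vstack([geno_train, geno_test]).astype(float)
        p = X.mean(axis=0) / 2.0                          # allele frequencies
        Z = X - 2.0 * p                                   # centred genotypes
        G = Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))       # VanRaden genomic relationship matrix

        n_train = geno_train.shape[0]
        lam = (1.0 - h2) / h2                             # residual-to-genetic variance ratio
        y_c = y_train - y_train.mean()
        # GEBVs for the test set: g_hat = G_test,train (G_train,train + lam I)^-1 (y - mean)
        alpha = np.linalg.solve(G[:n_train, :n_train] + lam * np.eye(n_train), y_c)
        return y_train.mean() + G[n_train:, :n_train] @ alpha

    # Synthetic example: 200 birds, 500 SNPs, additive marker effects plus noise.
    rng = np.random.default_rng(0)
    geno = rng.integers(0, 3, size=(200, 500))
    y = geno @ rng.normal(0, 0.05, size=500) + rng.normal(0, 1, size=200)
    pred = gblup(geno[:150], y[:150], geno[150:])
    print(np.corrcoef(pred, y[150:])[0, 1])               # accuracy of the genomic prediction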
APA, Harvard, Vancouver, ISO, and other styles
5

Groppe, Matthias. "Influences on aircraft target off-block time prediction accuracy." Thesis, Cranfield University, 2011. http://dspace.lib.cranfield.ac.uk/handle/1826/7277.

Full text
Abstract:
With Airport Collaborative Decision Making (A-CDM) as a generic concept for all airport partners working together, the main aim of this research project was to increase the understanding of the influences on Target Off-Block Time (TOBT) prediction accuracy during A-CDM. Predicting the TOBT accurately is important, because all airport partners use it as a reference time for the departure of flights after the aircraft turn-round. Understanding such influencing factors is therefore not only required for finding measures to counteract inaccurate TOBT predictions, but also for establishing a more efficient A-CDM turn-round process. The research method chosen comprises a number of steps. Firstly, within the framework of a Cognitive Work Analysis, the sub-processes as well as the information requirements during turn-round were analysed. Secondly, a survey approach aimed at finding and describing situations during turn-round that are critical for TOBT adherence was pursued. The problems identified here were then investigated in field observations at different airlines' operation control rooms. Based on the findings from these previous steps, small-scale human-in-the-loop experiments were designed to test hypotheses about the data/information availability that influences TOBT predictability. A turn-round monitoring tool was developed for the experiments. As a result of this project, the critical chain of turn-round events and the decisions necessary during all stages of the turn-round were identified. It was concluded that information required but not shared among participants can result in TOBT inaccuracy swings. In addition, TOBT predictability was shown to depend on the location of the turn-round controller who assigns the TOBT: more reliable TOBT predictions were observed when the turn-round controller was physically present at the aircraft. During the experiments, TOBT prediction could be improved by eight minutes if available information was cooperatively shared between air crews and the turn-round controller ten minutes prior to turn-round start; TOBT prediction could be improved by 15 minutes if additional information was provided by ramp agents five minutes after turn-round start.
APA, Harvard, Vancouver, ISO, and other styles
6

DeBlasio, Dan, and John Kececioglu. "Core column prediction for protein multiple sequence alignments." BioMed Central Ltd, 2017. http://hdl.handle.net/10150/623957.

Full text
Abstract:
Background: In a computed protein multiple sequence alignment, the coreness of a column is the fraction of its substitutions that are in so-called core columns of the gold-standard reference alignment of its proteins. In benchmark suites of protein reference alignments, the core columns of the reference alignment are those that can be confidently labeled as correct, usually due to all residues in the column being sufficiently close in the spatial superposition of the known three-dimensional structures of the proteins. Typically the accuracy of a protein multiple sequence alignment that has been computed for a benchmark is only measured with respect to the core columns of the reference alignment. When computing an alignment in practice, however, a reference alignment is not known, so the coreness of its columns can only be predicted. Results: We develop for the first time a predictor of column coreness for protein multiple sequence alignments. This allows us to predict which columns of a computed alignment are core, and hence better estimate the alignment's accuracy. Our approach to predicting coreness is similar to nearest-neighbor classification from machine learning, except we transform nearest-neighbor distances into a coreness prediction via a regression function, and we learn an appropriate distance function through a new optimization formulation that solves a large-scale linear programming problem. We apply our coreness predictor to parameter advising, the task of choosing parameter values for an aligner's scoring function to obtain a more accurate alignment of a specific set of sequences. We show that for this task, our predictor strongly outperforms other column-confidence estimators from the literature, and affords a substantial boost in alignment accuracy.
APA, Harvard, Vancouver, ISO, and other styles
7

Salam, Patrous Ziad, and Safir Najafi. "Evaluating Prediction Accuracy for Collaborative Filtering Algorithms in Recommender Systems." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-186456.

Full text
Abstract:
Recommender systems are a relatively new technology, commonly used by e-commerce websites and streaming services among others, to predict user opinions about products. This report studies two specific recommender algorithms, namely FunkSVD, a matrix factorization algorithm, and item-based collaborative filtering, which utilizes item similarity. This study aims to compare the prediction accuracy of the algorithms when run on a small and a large dataset. By performing cross-validation on the algorithms, this paper seeks to obtain data that may clarify ambiguities regarding the accuracy of the algorithms. The tests yielded results indicating that the FunkSVD algorithm may be more accurate than the item-based collaborative filtering algorithm, but further research is required to reach a concrete conclusion.
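For context, the FunkSVD algorithm compared in this thesis is, at its core, stochastic gradient descent on user and item factor vectors over the observed ratings only; the learning rate, regularization strength and factor count below are illustrative choices rather than the values used in the study.

    import numpy as np

    def funk_svd(ratings, n_users, n_items, k=20, lr=0.005, reg=0.02, epochs=20):
        """Toy FunkSVD: ratings is a list of (user, item, rating) triples."""
        rng = np.random.default_rng(0)
        P = rng.normal(0, 0.1, (n_users, k))   # user factor vectors
        Q = rng.normal(0, 0.1, (n_items, k))   # item factor vectors
        for _ in range(epochs):
            for u, i, r in ratings:
                err = r - P[u] @ Q[i]          # error of the current prediction
                P[u] += lr * (err * Q[i] - reg * P[u])
                Q[i] += lr * (err * P[u] - reg * Q[i])
        return P, Q

    ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (1, 2, 1.0), (2, 1, 2.0)]
    P, Q = funk_svd(ratings, n_users=3, n_items=3)
    print(P[2] @ Q[0])   # predicted rating of user 2 for item 0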
APA, Harvard, Vancouver, ISO, and other styles
8

Schellekens, Fons Jozef. "Fundamentals, accuracy and input parameters of frost heave prediction models." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/NQ26887.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Norberg, Sven. "Prediction of the fatigue limit : accuracy of post-processing methods." Licentiate thesis, Stockholm, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4061.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Schellekens, Fons Jozef. "Fundamentals, accuracy and input parameters of frost heave prediction models." Carleton University Dissertation, Earth Sciences, Ottawa, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
11

Chaganti, Vasanta Gayatri. "Wireless body area networks : accuracy of channel modelling and prediction." PhD thesis, Canberra, ACT : The Australian National University, 2014. http://hdl.handle.net/1885/150112.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Jones, Simon. "Ground vibration from underground railways : how simplifying assumptions limit prediction accuracy." Thesis, University of Cambridge, 2010. https://www.repository.cam.ac.uk/handle/1810/226848.

Full text
Abstract:
Noise and vibration from underground railways is a documented disturbance to individuals living or working near subways. Much work has been done to understand and simulate the dynamic interactions between the train, track, tunnel and soil, resulting in numerical models which can predict ground-borne vibration around the tunnels and at the soil surface. However, all such numerical models rely on simplifying assumptions to make the problems tractable: the soil is assumed homogeneous, tunnels are assumed long and straight, the soil is assumed to be in perfect contact with the tunnel, etc. This dissertation is concerned with quantifying the uncertainty associated with some of these simplifying assumptions to provide a better estimation of the prediction accuracy when numerical models are used for 'real world' applications. The first section investigates the effect of voids at the tunnel-soil interface. The Pipe-in-Pipe model is extended to allow finite-sized voids at the interface by deriving the discrete transfer functions for the tunnel and soil from the continuous solution. The results suggest that relatively small voids can significantly affect the rms velocity predictions at higher frequencies (100-200Hz) and moderately affect predictions at lower frequencies (15-100Hz). The results are also found to be sensitive to void length and void sector angle. The second section investigates issues associated with assuming the soil is homogeneous: the effect of inclined soil layers; the effect of a subsiding soil layer; the effect of soil inhomogeneity. The thin-layer method is utilized, as its semi-analytical formulation allows for accurate predictions with relatively short run times. The results from the three investigations suggest that slight inclination of soil layers and typical levels of soil inhomogeneity can result in significant variation in surface results compared to the homogeneous assumption. The geometric effect of a subsiding soil layer has a less significant effect on surface vibration. The findings from this study suggest that employing simplifying assumptions for the cases investigated can reasonably result in uncertainty bands of +/-5dB. Considering all the simplifying assumptions used in numerical models of ground vibration from underground railways, it would not be unreasonable to conclude that the prediction accuracy for such a model may be limited to +/-10dB.
APA, Harvard, Vancouver, ISO, and other styles
13

Low, Chun Yu Danny. "Prediction of the dimensional accuracy of small extra-coronal titanium castings." University of Sydney, 1998. http://hdl.handle.net/2123/4655.

Full text
Abstract:
Master of Science in Dentistry
APA, Harvard, Vancouver, ISO, and other styles
14

Li, Xue. "Incorporating chromatin interaction data to improve prediction accuracy of gene expression." Digital WPI, 2015. https://digitalcommons.wpi.edu/etd-theses/589.

Full text
Abstract:
Genome structure can be classified into three categories: primary structure, secondary structure and tertiary structure, and all three are important for gene transcription regulation. In this research, we utilize the structural information to characterize the correlations and interactions among genes, and incorporate such information into a Linear Mixed-Effects (LME) model to improve the accuracy of gene expression prediction. In particular, we use chromatin features as predictors and each gene is an observation. Before model training and testing, genes are grouped according to the genome structural information. We use four gene grouping methods: 1) grouping genes according to sliding windows on the primary structure; 2) grouping anchor genes in the chromatin loop structure; 3) grouping genes in the CTCF-anchored domain; and 4) grouping genes in the chromatin domains obtained from Hi-C experiments. We compare the prediction accuracy between the LME model and a linear regression model. If all chromatin feature predictors are included in the models, then based on the primary structure only (Method 1), the LME models improve prediction accuracy by up to 1%. Based on the tertiary structure only (Methods 2-4), for the genes that can be grouped according to the tertiary interaction data, LME models improve prediction accuracy by up to 2.1%. For individual chromatin feature predictors, the LME models improve prediction accuracy by 2% to 26%, with the improvement being more significant for chromatin features that have lower original predictive ability. For future research we propose a model that combines the primary and tertiary structure to infer the correlations among genes and further improve the prediction.
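A linear mixed-effects model of the kind described in this abstract can be fitted, for example, with statsmodels; the data frame, feature names and grouping variable here are hypothetical stand-ins for the chromatin-feature predictors and gene groups used in the thesis.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical data: one row per gene, two chromatin features, an expression
    # value, and a group label from one of the structural grouping methods.
    rng = np.random.default_rng(1)
    n = 300
    df = pd.DataFrame({
        "group": rng.integers(0, 30, n),       # e.g. Hi-C domain id (hypothetical)
        "h3k4me3": rng.normal(size=n),         # chromatin feature 1 (hypothetical)
        "dnase": rng.normal(size=n),           # chromatin feature 2 (hypothetical)
    })
    group_effect = rng.normal(size=30)[df["group"]]
    df["expression"] = 1.5 * df["h3k4me3"] + 0.8 * df["dnase"] + group_effect + rng.normal(size=n)

    # Random intercept per gene group, fixed effects for the chromatin features.
    result = smf.mixedlm("expression ~ h3k4me3 + dnase", df, groups=df["group"]).fit()
    print(result.summary())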
APA, Harvard, Vancouver, ISO, and other styles
15

Thompson, Elizabeth M. "Spelling accuracy with non-fluent aphasia: word processing vs. word prediction computer software." Cincinnati, Ohio : University of Cincinnati, 2005. http://www.ohiolink.edu/etd/view.cgi?acc%5Fnum=ucin1116211390.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Pei, Jiantao. "The Accuracy of Time-to-Contact Estimation in the Prediction Motion Paradigm." University of Canberra, Applied Science, 2002. http://erl.canberra.edu.au./public/adt-AUC20050627.143329.

Full text
Abstract:
This thesis is concerned with the accuracy of our estimation of time to make contact with an approaching object as measured by the “Prediction Motion” (PM) technique. The PM task has commonly been used to measure the ability to judge time to contact (TTC). In a PM task, the observer's view of the target is occluded for some period leading up to the moment of impact. The length of the occlusion period is varied and the observer signals the moment of impact by pressing a response key. The interval separating the moment of occlusion and the response is interpreted as the observer's estimate of TTC made at the moment of occlusion. This technique commonly produces large variability and systematic underestimation. The possibility that this reflects genuine perceptual errors has been discounted by most writers, since this seems inconsistent with the accuracy of interceptive actions in real life. Instead, the poor performance in the PM task has been attributed to problems with the PM technique. Several hypotheses have been proposed to explain the poor PM performance. The motion extrapolation hypothesis asserts that some form of mental representation of the occluded part of the trajectory is used to time the PM response; the errors in PM performance are attributed to errors in reconstructing the target motion. The clocking hypothesis assumes that the TTC is accurately perceived at the moment of occlusion and that errors arise in delaying the response for the required period. The fear-of-collision hypothesis proposes that the underestimation seen in the PM tasks reflects a precautionary tendency to anticipate the estimated moment of contact. This thesis explores the causes of the errors in PM measurements. Experiments 1 and 2 assessed the PM performance using a range of motion scenarios involving various patterns of movement of the target, the observer, or both. The possible contribution of clocking errors to the PM performance was assessed by a novel procedure designed to measure errors in the wait-and-respond component of the PM procedure. In both experiments, this procedure yielded a pattern of systematic underestimation and high variability similar to that in the TTC estimation task. Experiment 1 found a small effect of motion scenario on TTC estimation. However, this was not evident in Experiment 2. The collision event simulated in Experiment 2 did not involve a solid collision. The target was simply a rectangular frame marked on a tunnel wall. At the moment of “contact”, the observers passed “through” the target without collision. However, there was still systematic underestimation of TTC and there was little difference between the estimates obtained in Experiments 1 and 2. Overall, the results of Experiments 1 and 2 were seen as inconsistent with either the motion extrapolation hypothesis or the fear-of-collision hypothesis. It was concluded that observers extracted an estimate of the TTC based on optic TTC information at a point prior to the moment of collision, and used a timing process to count down to the moment of response. The PM errors were attributed to failure in this timing process. The results of these experiments were seen as implying an accurate perception of TTC. It was considered possible that in Experiments 1 and 2 observers based their TTC judgements on either the retinal size or the expansion rate of the target rather than TTC. Experiments 3 and 4 therefore investigated estimation of TTC using a range of simulated target velocities and sizes. 
TTC estimates were unaffected by the resulting variation in expansion rate and size, indicating that TTC, rather than retinal size or image expansion rate per se, was used to time the observers' response. The accurate TTC estimation found in Experiments 1-4 indicates that TTC processing is very robust across a range of stimulus conditions. Experiment 5 further explored this robustness by requiring estimation of TTC with an approaching target which rotated in the frontoparallel plane. It was shown that moderate but not fast rates of target rotation induced an overestimation of TTC. However, observers were able to discriminate between TTCs for all rates of rotation. This shows that the extraction of TTC information is sensitive to perturbation of the local motion of the target border, but it implies that, in spite of these perturbations, the mechanism is flexible enough to pick up the optic TTC information provided by the looming of the retinal motion envelope of the rotating stimulus.
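For background, the optic time-to-contact information referred to in this abstract is usually formalised through the variable tau: for an object approaching at constant speed, TTC is approximately the target's optical angle divided by its rate of expansion. A small numerical illustration (with invented object size, distance and speed) is:

    import numpy as np

    def tau(theta, theta_dot):
        """First-order time-to-contact estimate from optical angle and its expansion rate."""
        return theta / theta_dot

    # Hypothetical approach: a 0.5 m wide target, 20 m away, closing at 5 m/s.
    size, distance, speed = 0.5, 20.0, 5.0
    theta = 2 * np.arctan(size / (2 * distance))                   # current optical angle (rad)
    theta_dot = size * speed / (distance**2 + (size / 2) ** 2)     # its rate of expansion (rad/s)
    print(tau(theta, theta_dot))                                   # ~4 s, matching distance / speed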
APA, Harvard, Vancouver, ISO, and other styles
17

Örn, Henrik. "Accuracy and precision of bedrock surface prediction using geophysics and geostatistics." Thesis, KTH, Mark- och vattenteknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-171859.

Full text
Abstract:
In underground construction and foundation engineering, uncertainties associated with subsurface properties are inevitable. Site investigations are expensive to perform, but a limited understanding of the subsurface may result in major problems, which often lead to an unexpected increase in the overall cost of the construction project. This study aims to optimize the pre-investigation program to get as much correct information as possible out of a limited input of resources, thus making it as cost-effective as possible. To optimize site investigation using soil-rock sounding, three different sampling techniques, a varying number of sample points and two different interpolation methods (inverse distance weighting and point kriging) were tested on four modeled reference surfaces. The accuracy of rock surface predictions was evaluated using 3D gridding and modeling software (Surfer 8.02®). Samples with continuously distributed data, resembling profile lines from geophysical surveys, were used to evaluate how this could improve the accuracy of the prediction compared to adding additional sampling points. The study explains the correlation between the number of sampling points and the accuracy of the prediction obtained using different interpolators. Most importantly, it shows how continuous data significantly improve the accuracy of rock surface predictions, and therefore concludes that geophysical measurements should be used in combination with traditional soil-rock sounding to optimize the pre-investigation program.
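Of the two interpolators compared in this thesis, inverse distance weighting is simple enough to sketch directly; the power parameter and the synthetic sounding points below are illustrative assumptions, and kriging would in practice be handled by a dedicated geostatistics package.

    import numpy as np

    def idw(sample_xy, sample_z, query_xy, power=2.0):
        """Inverse-distance-weighted estimate of bedrock level at query points."""
        query_xy = np.atleast_2d(query_xy)
        d = np.linalg.norm(sample_xy[None, :, :] - query_xy[:, None, :], axis=2)
        d = np.maximum(d, 1e-12)                  # avoid division by zero at sample points
        w = 1.0 / d**power
        return (w @ sample_z) / w.sum(axis=1)

    # Hypothetical soil-rock sounding points (x, y) with bedrock depth z.
    pts = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
    depth = np.array([2.0, 3.5, 4.0, 5.0])
    print(idw(pts, depth, [[5.0, 5.0], [1.0, 1.0]]))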
APA, Harvard, Vancouver, ISO, and other styles
18

Kalogerakos, Stamatis. "Slug initiation and prediction using high accuracy methods - applications with field data." Thesis, Cranfield University, 2011. http://dspace.lib.cranfield.ac.uk/handle/1826/7553.

Full text
Abstract:
The sponsoring company of the project is BP. The framework within which the research is placed is that of the Transient Multiphase Flow Programme (TMF-4), a consortium of companies that are interested in phenomena related to flow of liquids and gases, in particular with relevance to oil, water and air. The deliverables agreed for the project were:
• validating EMAPS through simulations of known problems and experimental and field data concerning slug flow
• introducing numerical enhancements to EMAPS
• decreasing computation times in EMAPS
• using multi-dimensional methods to investigate slug flow
The outcome of the current project has been a combination of new product development (1D multiphase code EMAPS) and a methodological innovation (use of 2D CFD for channel simulations of slugs). These are:
• New computing framework composed of:
  – Upgraded version of 1D code EMAPS
  – Numerical enhancements with velocity profile coefficients
  – Validation with wave growth problem
  – Parallelisation of all models and sources in EMAPS
  – Testing suite for all sequential and parallel cases
  – Versioning control (SVN) and automatic testing upon code submission
• Use of 2D CFD VOF for channel simulation with:
  – Special initialisation techniques to allow transient simulations
  – Validation with wave growth problem
  – Mathematical perturbation analysis
  – Simulations of 92 experimental slug flow cases
The cost of uptake of the above tools is relatively small compared to the benefits that are expected to follow, regarding predictions of hydrodynamic slugging. Depending on the timescales involved, it is also possible to use external consultancies in order to implement the solutions proposed, as these are software based and their uptake could be carried out in a small time-frame. Moreover it may not be necessary to build a parallel hardware infrastructure as it is now possible to have easy access to large parallel clusters and pay rates depending on use.
APA, Harvard, Vancouver, ISO, and other styles
19

Naghipour, Morteza. "The accuracy of hydrodynamic force prediction for offshore structures and Morison's equation." Thesis, Heriot-Watt University, 1996. http://hdl.handle.net/10399/738.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Van Zyl, Johet Engela. "Accuracy of risk prediction tools for acute coronary syndrome : a systematic review." Thesis, Stellenbosch : Stellenbosch University, 2015. http://hdl.handle.net/10019.1/97069.

Full text
Abstract:
Thesis (MCur)--Stellenbosch University, 2015.
ENGLISH ABSTRACT: Background: Coronary artery disease is a form of cardiovascular disease (CVD) which manifests itself in three ways: angina pectoris, acute coronary syndrome and cardiac death. Thirty-three people die daily of a myocardial infarction (cardiac death) and 7.5 million deaths annually are caused by CVD (51% from strokes and 45% from coronary artery disease) worldwide. Globally, the CVD death rate is a mere 4% compared to South Africa, which has a 42% death rate. It is predicted that by the year 2030 there will be 25 million deaths annually from CVD, mainly in the form of strokes and heart disease. The WHO compared the death rates of high-income countries to those of low- and middle-income countries, like South Africa, and the results show that CVD deaths are declining in high-income countries but rapidly increasing in low- and middle-income countries. Although there are several risk prediction tools in use worldwide to predict ischemic risk, South Africa does not use any of these tools. Current practice in South Africa to diagnose acute coronary syndrome is the use of a physical examination, ECG changes and positive serum cardiac marker levels. Internationally the same practice is used to diagnose acute coronary syndrome, but risk assessment tools are used in addition to this practice because of limitations of the ECG and serum cardiac markers when it comes to NSTE-ACS. Objective: The aim of this study was to systematically appraise evidence on the accuracy of acute coronary syndrome risk prediction tools in adults. Methods: An extensive literature search of studies published in English was undertaken. Electronic databases searched were Cochrane Library, MEDLINE, Embase and CINAHL. Other sources were also searched, and cross-sectional studies, cohort studies and randomised controlled trials were reviewed. All articles were screened for methodological quality by two reviewers independently with the QUADAS-2 tool, which is a standardised instrument. Data was extracted using an adapted Cochrane data extraction tool. Data was entered in Review Manager 5.2 software for analysis. Sensitivity and specificity were calculated for each risk score and an SROC curve was created. This curve was used to evaluate and compare the prediction accuracy of each test. Results: A total of five studies met the inclusion criteria of this review. Two HEART studies and three GRACE studies were included. In all, 9 092 patients participated in the selected studies. Estimates of sensitivity for the HEART risk score (two studies, 3268 participants) were 0,51 (95% CI 0,46 to 0,56) and 0,68 (95% CI 0,60 to 0,75); specificity for the HEART risk score was 0,90 (95% CI 0,88 to 0,91) and 0,92 (95% CI 0,90 to 0,94). Estimates of sensitivity for the GRACE risk score (three studies, 5824 participants) were 0,03 (95% CI 0,01 to 0,05); 0,20 (95% CI 0,14 to 0,29) and 0,79 (95% CI 0,58 to 0,93). The specificity was 1,00 (95% CI 0,99 to 1,00); 0,97 (95% CI 0,95 to 0,98) and 0,78 (95% CI 0,73 to 0,82). On the SROC curve analysis, there was a trend for the GRACE risk score to perform better than the HEART risk score in predicting acute coronary syndrome in adults. Conclusion: Both risk scores showed that they had value in accurately predicting the presence of acute coronary syndrome in adults. The GRACE showed a positive trend towards better prediction ability than the HEART risk score.
AFRIKAANSE OPSOMMING: Agtergrond: Koronêre bloedvatsiekte is ‘n vorm van kardiovaskulêre siekte. Koronêre hartsiekte manifesteer in drie maniere: angina pectoris, akute koronêre sindroom en hartdood. Drie-en-dertig mense sterf daagliks aan ‘n miokardiale infarksie (hartdood). Daar is 7,5 miljoen sterftes jaarliks as gevolg van kardiovaskulêre siektes (51% deur beroertes en 45% as gevolg van koronêre hartsiektes) wêreldwyd. Globaal is die sterfte syfer as gevolg van koronêre vaskulêre siekte net 4% in vergelyking met Suid Afrika, wat ‘n 42% sterfte syfer het. Dit word voorspel dat teen die jaar 2030 daar 25 miljoen sterfgevalle jaarliks sal wees, meestal toegeskryf aan kardiovaskulêre siektes. Die hoof oorsaak van sterfgevalle sal toegeskryf word aan beroertes en hart siektes. Die WHO het die sterf gevalle van hoeinkoms lande vergelyk met die van lae- en middel-inkoms lande, soos Suid Afrika, en die resultate het bewys dat sterf gevalle as gevolg van kardiovaskulêre siekte is besig om te daal in hoe-inkoms lande maar dit is besig om skerp te styg in lae- en middel-inkoms lande. Daar is verskeie risiko-voorspelling instrumente wat wêreldwyd gebruik word om isgemiese risiko te voorspel, maar Suid Afrika gebruik geen van die risiko-voorspelling instrumente nie. Huidiglik word akute koronêre sindroom gediagnoseer met die gebruik van n fisiese ondersoek, EKG verandering en positiewe serum kardiale merkers. Internationaal word die selfde gebruik maar risiko-voorspelling instrumente word aditioneel by gebruik omdat daar limitasies is met EKG en serum kardiale merkers as dit by NSTE-ACS kom. Doelwit: Die doel van hierdie sisematiese literatuuroorsig was om stelselmatig die bewyse te evalueer oor die akkuraatheid van akute koronêre sindroom risiko-voorspelling instrumente vir volwassenes. Metodes: 'n Uitgebreide literatuursoektog van studies wat in Engels gepubliseer is was onderneem. Cochrane biblioteek, MEDLINE, Embase en CINAHL databases was deursoek. Ander bronne is ook deursoek. Die tiepe studies ingesluit was deurnsee-studies, kohortstudies en verewekansigde gekontroleerde studies. Alle artikels is onafhanklik vir die metodologiese kwaliteit gekeur deur twee beoordeelaars met die gebruik van die QUADAS-2 instrument, ‘n gestandaardiseerde instrument. ‘n Aangepaste Cochrane data instrument is gebruik om data te onttrek. Data is opgeneem in Review Manager 5.2 sagteware vir ontleding. Sensitiwiteit en spesifisiteit is bereken vir elke risiko instrument en ‘n SROC kurwe is geskep. Die SROC kurwe is gebruik om die akkuraatheid van voorspelling van elke instrument te evalueer en te toets. Resultate: Twee HEART studies en drie GRACE studies is ingesluit. In total was daar 9 092 patiente wat deelgeneeem het in die gekose studies. Skattings van sensitiwiteit vir die HEART risiko instrument (twee studies, 3268 deelnemers) was 0,51 (95% CI 0,47 to 0,56) en 0,68 (95% CI 0,60 to 0,75) spesifisiteit vir die HEART risiko instrument was 0,89 (95% CI 0,88 to 0,91) en 0,92 (95% CI 0,90 to 0,94). Skattings van sensitiwiteit vir die GRACE risiko instrument (drie studies, 5824 deelnemers) was 0,28 (95% CI 0,13 to 0,53); 0,20 (95% CI 0,14 to 0,29) en 0,79 (95% CI 0,58 to 0,93). Die spesifisiteit vir die GRACE risiko instrument was 0,97 (95% CI 0,95 to 0,99); 0,97 (95% CI 0,95 to 0,98) en 0,78 (95% CI 0,73 to 0,82). Met die SROC kurwe ontleding was daar ‘n tendens vir die GRACE risiko instrument om beter te vaar as die HEART risiko instrument in die voorspelling van akute koronêre sindroom in volwassenes. 
Gevolgtrekking: Altwee risiko instrumente toon aan dat albei instrumente van waarde is. Albei het die vermoë om die teenwoordigheid van akute koronêre sindroom in volwassenes te voorspel. Die GRACE toon ‘n positiewe tendens teenoor beter voorspelling vermoë as die HEART risiko instrument.
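For reference, the sensitivity and specificity estimates quoted in this abstract are derived from 2x2 classification counts, and each pair gives one point in the SROC plane; the counts in the small helper below are invented for illustration only.

    def diagnostic_accuracy(tp, fp, fn, tn):
        """Sensitivity, specificity and the corresponding point in the SROC plane."""
        sensitivity = tp / (tp + fn)   # proportion of true ACS cases flagged as high risk
        specificity = tn / (tn + fp)   # proportion of non-ACS cases flagged as low risk
        return sensitivity, specificity, (1 - specificity, sensitivity)

    # Invented counts for a single hypothetical validation study.
    sens, spec, sroc_point = diagnostic_accuracy(tp=68, fp=80, fn=32, tn=920)
    print(f"sensitivity {sens:.2f}, specificity {spec:.2f}, SROC point {sroc_point}")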
APA, Harvard, Vancouver, ISO, and other styles
21

Dalla Fontana, Silvia <1991>. "Credit risk modelling and valuation: testing credit rating accuracy in default prediction." Master's Degree Thesis, Università Ca' Foscari Venezia, 2017. http://hdl.handle.net/10579/9894.

Full text
Abstract:
Credit risk is a forward-looking concept, focusing on the probability of facing credit difficulties in the future. Credit difficulties are represented by the risk of not being paid for goods or services sold to customers. This kind of risk involves all companies from financial services industry to consumer goods. Credit risk has acquired growing importance in recent years which have been characterized by a negative economic situation, started with the US subprime mortgage crisis and the collapse of Lehman Brothers in 2008. The financial crisis intervened before Basel II could become fully effective, and unveiled the fragilities of the financial system in general, but also emphasised the inadequacy of both credit risk management and the connected credit rating system carried out by ECAIs. In Chapter I, starting from an historical excursus, the study deals with credit risk methods and rating capability to predict firms’ probability of default, taking into account both quantitative and qualitative methods and the consequent credit rating assessment. In Chapter II we focus on the trade credit insurance case. Credit insurance allows companies of any size to protect against the risk of not being paid, and this consequently increases firm’s profitability thanks to higher client portfolio quality. This means that the analysis of creditworthiness includes a wide population, from SMEs to large corporates. In Chapter III we provide an empirical analysis on the accuracy of rating system: we start from dealing with the distribution of the Probability of Default and firms’ allocation in PD classes, we analyse the Gini coefficient’s adequacy in measuring rating accuracy and we deal with a multiple regression model based on financial indicators. Finally we conclude with reflections and final comments.
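As background to the accuracy analysis described in Chapter III of this thesis, the Gini coefficient of a rating model is related to the area under the ROC curve by Gini = 2*AUC - 1. A minimal check on invented probabilities of default and default flags might look as follows; the data are synthetic and not from the thesis.

    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    pd_scores = rng.uniform(0.001, 0.2, size=1000)   # model-estimated probabilities of default
    defaults = rng.binomial(1, pd_scores)            # observed default flags (synthetic)

    auc = roc_auc_score(defaults, pd_scores)
    gini = 2 * auc - 1                               # accuracy ratio of the rating model
    print(f"AUC = {auc:.3f}, Gini = {gini:.3f}")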
APA, Harvard, Vancouver, ISO, and other styles
22

Kumar, Akhil. "Budget-Related Prediction Models in the Business Environment with Special Reference to Spot Price Predictions." Thesis, North Texas State University, 1986. https://digital.library.unt.edu/ark:/67531/metadc331533/.

Full text
Abstract:
The purpose of this research is to study and improve decision accuracy in the real world. Spot price prediction of petroleum products, in a budgeting context, is the task chosen to study prediction accuracy. Prediction accuracy of executives in a multinational oil company is examined. The Brunswik Lens Model framework is used to evaluate prediction accuracy. Predictions of the individuals, the composite group (mathematical average of the individuals), the interacting group, and the environmental model were compared. Predictions of the individuals were obtained through a laboratory experiment in which experts were used as subjects. The subjects were required to make spot price predictions for two petroleum products. Eight predictor variables that were actually used by the subjects in real-world predictions were elicited through an interview process. Data for a 15 month period were used to construct 31 cases for each of the two products. Prediction accuracy was evaluated by comparing predictions with the actual spot prices. Predictions of the composite group were obtained by averaging the predictions of the individuals. Interacting group predictions were obtained ex post from the company's records. The study found the interacting group to be the least accurate. The implication of this finding is that even though an interacting group may be desirable for information synthesis, evaluation, or working toward group consensus, it is undesirable if prediction accuracy is critical. The accuracy of the environmental model was found to be the highest. This suggests that apart from random error, misweighting of cues by individuals and groups affects prediction accuracy. Another implication of this study is that the environmental model can also be used as an additional input in the prediction process to improve accuracy.
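The comparison described in this abstract -- individual experts, their mathematical composite, and an environmental model fitted to the cues -- can be emulated on synthetic data in the spirit of the Brunswik Lens Model; the cue structure, the experts' error model and the accuracy metric below are illustrative assumptions only.

    import numpy as np

    rng = np.random.default_rng(2)
    n_cases, n_cues, n_experts = 31, 8, 5

    cues = rng.normal(size=(n_cases, n_cues))                        # the eight predictor variables
    true_weights = rng.normal(size=n_cues)
    spot_price = cues @ true_weights + rng.normal(0, 0.5, n_cases)   # actual spot prices

    # Individual experts misweight the cues and add their own noise.
    experts = np.stack([
        cues @ (true_weights + rng.normal(0, 0.5, n_cues)) + rng.normal(0, 0.5, n_cases)
        for _ in range(n_experts)
    ])
    composite = experts.mean(axis=0)                                 # mathematical average of the individuals

    # Environmental model: linear regression of the criterion on the cues.
    beta, *_ = np.linalg.lstsq(cues, spot_price, rcond=None)
    environmental = cues @ beta

    mae = lambda pred: np.mean(np.abs(pred - spot_price))
    print("individuals:  ", [round(mae(e), 2) for e in experts])
    print("composite:    ", round(mae(composite), 2))
    print("environmental:", round(mae(environmental), 2))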
APA, Harvard, Vancouver, ISO, and other styles
23

THOMPSON, ELIZABETH M. "SPELLING ACCURACY WITH NON-FLUENT APHASIA: WORD PROCESSING V.S. WORD PREDICTION COMPUTER SOFTWARE." University of Cincinnati / OhioLINK, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1116211390.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Parson, Lindsay. "Improving the Accuracy of the VO2 max Prediction Obtained from Submaximal YMCA Testing." TopSCHOLAR®, 2004. http://digitalcommons.wku.edu/theses/510.

Full text
Abstract:
Maximal oxygen uptake (VO2 max) is the best criterion measure for aerobic fitness and the prescription of exercise intensity for programs designed to enhance cardiorespiratory fitness. There are two ways of obtaining VO2 max: maximal tests, which require subjects to exercise to the point of volitional exhaustion and provide the most accurate measure; and submaximal tests, which are less physically strenuous but have lower accuracy. A popular submaximal protocol is the YMCA bike test. Steady state heart rate (HR) is measured at multiple submaximal workloads and extrapolated to the subject's estimated maximal HR (220-age). The VO2 corresponding to the estimated maximal HR is accepted as the estimated VO2 max. The accuracy of this submaximal testing protocol affects the ability to estimate a subject's actual aerobic capacity. To help better investigate the YMCA protocol, submaximal measures (HR, VO2) were utilized at specific workloads in an attempt to improve the accuracy of the prediction. The standard YMCA protocol was completed and then extended to actual maximal exertion. Submaximal measures (HR, VO2, etc.) were used to develop a regression equation predicting VO2 max. T-tests were used to compare VO2 data between protocols. Multiple regression analyses were performed to generate regression equations to enhance the accuracy of VO2 max estimations from the YMCA submaximal protocol. The new regression equation showed no significant difference from the actual measured VO2 max. Because submaximal measures (HR, VO2) could not be utilized to improve the accuracy of the prediction of the YMCA protocol, the original purpose was deemphasized and redirected. Considering the apparent utility of anthropometric measures in estimating VO2 max, this study sought to improve the accuracy of the YMCA protocol by adding anthropometric measures (BMI, skinfolds) to develop two separate regression models. Results were significantly different (p < 0.05) between measured VO2 (MVO2) and VO2 estimated from the YMCA protocol (YVO2). Additionally, results were significantly different (p = 0.003) between the Houston nonexercise test and MVO2. In conclusion, although a significant correlation resulted between MVO2 and YVO2, it was not stronger than other submaximal estimations. Also, it was not a strong predictor because of a significant difference between the Houston nonexercise test and MVO2. Therefore, by adding BMI and skinfolds to the popular YMCA formula, the r-values were increased (r = 0.817 and r = 0.822), and the resulting equations can better estimate a subject's VO2 max than graphic plots of steady state HR responses at protocol-determined workloads alone.
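The extrapolation step of the YMCA protocol described in this abstract can be written down directly: fit a line through the submaximal steady-state heart rates against the corresponding VO2 values and evaluate it at the age-predicted maximal heart rate (220 - age). The heart rates and VO2 values below are invented for illustration.

    import numpy as np

    def ymca_vo2max_estimate(hr_submax, vo2_submax, age):
        """Extrapolate submaximal HR/VO2 pairs to the age-predicted maximal HR."""
        hr_max = 220 - age                                # age-predicted maximal heart rate
        slope, intercept = np.polyfit(hr_submax, vo2_submax, 1)
        return slope * hr_max + intercept                 # estimated VO2 max, same units as vo2_submax

    # Invented steady-state values from three submaximal workloads.
    hr = np.array([110.0, 130.0, 150.0])                  # beats per minute
    vo2 = np.array([1.4, 1.9, 2.4])                       # litres per minute
    print(ymca_vo2max_estimate(hr, vo2, age=25))          # ~3.5 L/min at HRmax = 195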
APA, Harvard, Vancouver, ISO, and other styles
25

Badenhorst, Dirk Jakobus Pretorius. "Improving the accuracy of prediction using singular spectrum analysis by incorporating internet activity." Thesis, Stellenbosch : Stellenbosch University, 2013. http://hdl.handle.net/10019.1/80056.

Full text
Abstract:
Thesis (MComm)--Stellenbosch University, 2013.
ENGLISH ABSTRACT: Researchers and investors have been attempting to predict stock market activity for years. The possible financial gain that accurate predictions would offer lit a flame of greed and drive that would inspire all kinds of researchers. However, after many of these researchers had failed, they started to hypothesize that such a goal is not only improbable, but impossible. Previous predictions were based on historical data of the stock market activity itself and would often incorporate different types of auxiliary data. This auxiliary data ranged as far as imagination allowed in an attempt to find some correlation and some insight into the future that could in turn lead to the figurative pot of gold. More often than not, the auxiliary data would not prove helpful. However, with the birth of the internet, endless amounts of new sources of auxiliary data presented themselves. In this thesis I propose that the near-infinite amount of data available on the internet could provide us with information that would improve stock market predictions. With this goal in mind, the different sources of information available on the internet are considered. Previous studies on similar topics presented possible ways in which we can measure internet activity, which might relate to stock market activity. These studies also gave some insight into the advantages and disadvantages of using some of these sources. These considerations are investigated in this thesis. Since a large part of this work is based on the prediction of a time series, it was necessary to choose a prediction algorithm. Previously used linear methods seemed too simple for prediction of stock market activity, and a newer non-linear method, called Singular Spectrum Analysis, is therefore considered. A detailed study of this algorithm is done to ensure that it is an appropriate prediction methodology to use. Furthermore, since we will be including auxiliary information, multivariate extensions of this algorithm are considered as well. Some of the inaccuracies and inadequacies of these current multivariate extensions are studied and an alternative multivariate technique is proposed and tested. This alternative approach addresses the inadequacies of existing methods. With the appropriate methodology and sources of auxiliary information chosen, a concluding chapter examines whether predictions that include auxiliary information (obtained from the internet) improve on baseline predictions based simply on historical stock market data.
AFRIKAANSE OPSOMMING: Navorsers en beleggers is vir jare al opsoek na maniere om aandeelpryse meer akkuraat te voorspel. Die moontlike finansiële implikasies wat akkurate vooruitskattings kan inhou het 'n vlam van geldgierigheid en dryf wakker gemaak binne navorsers regoor die wêreld. Nadat baie van hierdie navorsers onsuksesvol was, het hulle begin vermoed dat so 'n doel nie net onwaarskynlik is nie, maar onmoontlik. Vorige vooruitskattings was bloot gebaseer op historiese aandeelprys data en sou soms verskillende tipes bykomende data inkorporeer. Die tipes data wat gebruik was het gestrek so ver soos wat die verbeelding toegelaat het, in 'n poging om korrelasie en inligting oor die toekoms te kry wat na die guurlike pot goud sou lei. Navorsers het gereeld gevind dat hierdie verskillende tipes bykomende inligting nie van veel hulp was nie, maar met die geboorte van die internet het 'n oneindige hoeveelheid nuwe bronne van bykomende inligting bekombaar geraak. In hierdie tesis stel ek dus voor dat die data beskikbaar op die internet dalk vir ons kan inligting gee wat verwant is aan toekomstige aandeelpryse. Met hierdie doel in die oog, is die verskillende bronne van inligting op die internet gebestudeer. Vorige studies op verwante werk het sekere spesifieke maniere voorgestel waarop ons internet aktiwiteit kan meet. Hierdie studies het ook insig gegee oor die voordele en die nadele wat sommige bronne inhou. Hierdie oorwegings word ook in hierdie tesis bespreek. Aangesien 'n groot gedeelte van hierdie tesis dus gebasseer word op die vooruitskatting van 'n tydreeks, is dit nodig om 'n toepaslike vooruitskattings algoritme te kies. Baie navorsers het verkies om eenvoudige lineêre metodes te gebruik. Hierdie metodes het egter te eenvoudig voorgekom en 'n relatiewe nuwe nie-lineêre metode (met die naam "Singular Spectrum Analysis") is oorweeg. 'n Deeglike studie van hierdie algoritme is gedoen om te verseker dat die metode van toepassing is op aandeelprys data. Verder, aangesien ons gebruik wou maak van bykomende inligting, is daar ook 'n studie gedoen op huidige multivariaat uitbreidings van hierdie algoritme en die probleme wat dit inhou. 'n Alternatiewe multivariaat metode is toe voorgestel en getoets wat hierdie probleme aanspreek. Met 'n gekose vooruitskattingsmetode en gekose bronne van bykomende data is 'n gevolgtrekkende hoofstuk geskryf oor of vooruitskattings, wat die bykomende internet data inkorporeer, werklik in staat is om te verbeter op die eenvoudige vooruitskattings, wat slegs gebaseer is op die historiese aandeelprys data.
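For readers unfamiliar with the method evaluated in this thesis, basic univariate Singular Spectrum Analysis decomposes a series by embedding it in a trajectory matrix, taking an SVD, and reconstructing selected components by diagonal averaging; the window length, number of components and toy series below are illustrative choices only.

    import numpy as np

    def ssa_reconstruct(series, window, n_components):
        """Reconstruct a smoothed series from the leading SSA components."""
        n = len(series)
        k = n - window + 1
        # Trajectory (Hankel) matrix: lagged copies of the series as columns.
        X = np.column_stack([series[i:i + window] for i in range(k)])
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        # Rank-r approximation built from the leading singular triples.
        Xr = (U[:, :n_components] * s[:n_components]) @ Vt[:n_components]
        # Diagonal averaging (Hankelisation) back to a one-dimensional series.
        recon, counts = np.zeros(n), np.zeros(n)
        for col in range(k):
            recon[col:col + window] += Xr[:, col]
            counts[col:col + window] += 1
        return recon / counts

    t = np.arange(200)
    series = np.sin(2 * np.pi * t / 20) + 0.05 * t + np.random.default_rng(0).normal(0, 0.3, 200)
    trend_plus_cycle = ssa_reconstruct(series, window=40, n_components=3)
    print(np.round(trend_plus_cycle[:5], 2))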
APA, Harvard, Vancouver, ISO, and other styles
26

Sowan, Bilal I. "Enhancing Fuzzy Associative Rule Mining Approaches for Improving Prediction Accuracy. Integration of Fuzzy Clustering, Apriori and Multiple Support Approaches to Develop an Associative Classification Rule Base." Thesis, University of Bradford, 2011. http://hdl.handle.net/10454/5387.

Full text
Abstract:
Building an accurate and reliable model for prediction in different application domains is one of the most significant challenges in knowledge discovery and data mining. This thesis focuses on building and enhancing a generic predictive model for estimating a future value by extracting association rules (knowledge) from a quantitative database. This model is applied to several data sets obtained from different benchmark problems, and the results are evaluated through extensive experimental tests. The thesis presents an incremental development process for the prediction model with three stages. Firstly, a Knowledge Discovery (KD) model is proposed by integrating Fuzzy C-Means (FCM) with the Apriori approach to extract Fuzzy Association Rules (FARs) from a database for building a Knowledge Base (KB) to predict a future value. The KD model has been tested with two road-traffic data sets. Secondly, the initial model has been further developed by including a diversification method in order to improve the reliability of the FARs and find the best and most representative rules. The resulting Diverse Fuzzy Rule Base (DFRB) maintains high-quality and diverse FARs offering a more reliable and generic model. The model uses FCM to transform quantitative data into fuzzy data, while a Multiple Support Apriori (MSapriori) algorithm is adapted to extract the FARs from the fuzzy data. The correlation values for these FARs are calculated, and an efficient orientation for filtering FARs is performed as a post-processing method. The FARs diversity is maintained through the clustering of FARs, based on the concept of the sharing function technique used in multi-objective optimization. The best and most diverse FARs are obtained as the DFRB to utilise within the Fuzzy Inference System (FIS) for prediction. The third stage of development proposes a hybrid prediction model called the Fuzzy Associative Classification Rule Mining (FACRM) model. This model integrates the improved Gustafson-Kessel (G-K) algorithm, the proposed Fuzzy Associative Classification Rules (FACR) algorithm and the proposed diversification method. The improved G-K algorithm transforms quantitative data into fuzzy data, while the FACR algorithm generates significant rules (Fuzzy Classification Association Rules (FCARs)) by employing the improved multiple support threshold, associative classification and vertical scanning format approaches. These FCARs are then filtered by calculating the correlation value and the distance between them. The advantage of the proposed FACRM model is that it builds a generalized prediction model able to deal with different application domains. The validation of the FACRM model is conducted using different benchmark data sets from the University of California, Irvine (UCI) machine learning and KEEL (Knowledge Extraction based on Evolutionary Learning) repositories, and the results of the proposed FACRM are also compared with those of other existing prediction models. The experimental results show that the error rate and generalization performance of the proposed model are better for the majority of data sets than those of the commonly used models. A new method for feature selection entitled Weighting Feature Selection (WFS) is also proposed. The WFS method aims to improve the performance of the FACRM model. The prediction performance is improved by minimizing the prediction error and reducing the number of generated rules. The prediction results of FACRM employing WFS have been compared with those of the FACRM and Stepwise Regression (SR) models for different data sets. The performance analysis and comparative study show that the proposed prediction model provides an effective approach that can be used within a decision support system.
Applied Science University (ASU) of Jordan
APA, Harvard, Vancouver, ISO, and other styles
27

Sowan, Bilal Ibrahim. "Enhancing fuzzy associative rule mining approaches for improving prediction accuracy : integration of fuzzy clustering, apriori and multiple support approaches to develop an associative classification rule base." Thesis, University of Bradford, 2011. http://hdl.handle.net/10454/5387.

Full text
Abstract:
Building an accurate and reliable prediction model for different application domains is one of the most significant challenges in knowledge discovery and data mining. This thesis focuses on building and enhancing a generic predictive model for estimating a future value by extracting association rules (knowledge) from a quantitative database. This model is applied to several data sets obtained from different benchmark problems, and the results are evaluated through extensive experimental tests. The thesis presents an incremental development process for the prediction model with three stages. Firstly, a Knowledge Discovery (KD) model is proposed by integrating Fuzzy C-Means (FCM) with the Apriori approach to extract Fuzzy Association Rules (FARs) from a database for building a Knowledge Base (KB) used to predict a future value. The KD model has been tested with two road-traffic data sets. Secondly, the initial model has been further developed by including a diversification method in order to improve the reliability of the FARs and identify the best and most representative rules. The resulting Diverse Fuzzy Rule Base (DFRB) maintains high-quality and diverse FARs, offering a more reliable and generic model. The model uses FCM to transform quantitative data into fuzzy data, while a Multiple Support Apriori (MSapriori) algorithm is adapted to extract the FARs from the fuzzy data. The correlation values for these FARs are calculated, and an efficient filtering of the FARs is performed as a post-processing step. The diversity of the FARs is maintained through clustering of the FARs, based on the sharing function technique used in multi-objective optimization. The best and most diverse FARs are retained as the DFRB and utilised within a Fuzzy Inference System (FIS) for prediction. The third stage of development proposes a hybrid prediction model called the Fuzzy Associative Classification Rule Mining (FACRM) model. This model integrates the improved Gustafson-Kessel (G-K) algorithm, the proposed Fuzzy Associative Classification Rules (FACR) algorithm and the proposed diversification method. The improved G-K algorithm transforms quantitative data into fuzzy data, while the FACR algorithm generates significant rules (Fuzzy Classification Association Rules (FCARs)) by employing the improved multiple support threshold, associative classification and vertical scanning format approaches. These FCARs are then filtered by calculating the correlation value and the distance between them. The advantage of the proposed FACRM model is that it builds a generalized prediction model able to deal with different application domains. The validation of the FACRM model is conducted using different benchmark data sets from the University of California, Irvine (UCI) machine learning repository and the KEEL (Knowledge Extraction based on Evolutionary Learning) repository, and the results of the proposed FACRM are also compared with those of other existing prediction models. The experimental results show that the error rate and generalization performance of the proposed model are better than those of the commonly used models in the majority of data sets. A new method for feature selection entitled Weighting Feature Selection (WFS) is also proposed. The WFS method aims to improve the performance of the FACRM model by minimizing the prediction error and reducing the number of generated rules. The prediction results of FACRM employing WFS have been compared with those of the FACRM and Stepwise Regression (SR) models for different data sets. The performance analysis and comparative study show that the proposed prediction model provides an effective approach that can be used within a decision support system.
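As a very rough sketch of the kind of fuzzy association rule mining described above (fuzzifying quantitative attributes and then applying an Apriori-style minimum-support check on fuzzy itemsets), the snippet below illustrates the idea; the membership functions, attributes and threshold are illustrative assumptions, not the thesis's actual FCM/MSapriori implementation.

```python
# Rough sketch (not the thesis code): fuzzify numeric attributes into
# linguistic terms, then compute the fuzzy support of a candidate itemset
# the way an Apriori-style miner would.
import numpy as np

def triangular(x, a, b, c):
    """Triangular membership function with peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                 (c - x) / (c - b + 1e-12)), 0.0)

# Hypothetical quantitative records: (traffic_flow, travel_time)
data = np.array([[120, 35], [300, 55], [450, 80], [200, 40], [500, 95]])

# Assumed fuzzy sets for each attribute (these would come from FCM in the thesis)
memberships = {
    ("flow", "high"): triangular(data[:, 0], 250, 500, 750),
    ("time", "long"): triangular(data[:, 1], 50, 100, 150),
}

def fuzzy_support(itemset):
    """Fuzzy support = mean over records of the min membership of the items."""
    degrees = np.minimum.reduce([memberships[item] for item in itemset])
    return degrees.mean()

min_support = 0.2
candidate = [("flow", "high"), ("time", "long")]
if fuzzy_support(candidate) >= min_support:
    print("frequent fuzzy itemset:", candidate, round(fuzzy_support(candidate), 3))
```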
APA, Harvard, Vancouver, ISO, and other styles
28

Trundle, Jennifer. "Manipulating the placement of error: the effect of prediction of accuracy in function learning /." [St. Lucia, Qld.], 2005. http://www.library.uq.edu.au/pdfserve.php?image=thesisabs/absthe19217.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Gatsiou, Christina-Anna. "Improving the accuracy of lattice energy calculations in crystal structure prediction using experimental data." Thesis, Imperial College London, 2016. http://hdl.handle.net/10044/1/34685.

Full text
Abstract:
Crystal structure prediction (CSP) has long been a problem of great industrial interest as well as a fundamental challenge in condensed matter science. The problem involves the identification of the stable and metastable crystals of a given compound at given temperature and pressure conditions. Computational CSP methods based on lattice energy minimization have been successful in identifying experimentally observed crystals of an organic compound as local minima of the lattice energy landscape, but not always with the correct relative stability. This is primarily controlled by the lattice energy model. The lattice energy model adopted in this work is based on the assumption that molecules are rigid; electrostatic interactions are modelled via distributed multipoles derived from the ab initio charge density of the gas phase conformation, and an empirical pairwise exp-6 potential is used for the repulsion-dispersion interactions. Since the reliability of any computational model rests on its agreement with experimental evidence, the use of available experimental data for improving the lattice energy model is the main focus of this work. First, the impact of different modelling choices (the level of theory for electrostatics and the parameters of the repulsion-dispersion term) on the modelling of experimental structures, energies and relative stabilities is investigated. The results suggest that a re-estimation of the repulsion-dispersion parameters should produce parameters consistent with changes in the other lattice energy terms and bring the model closer to experiment, consequently improving predictions. An algorithm, CrystalEstimator, is developed for fitting the exp-6 potential parameters by minimizing the sum of squared deviations between experimental structures and energies and the corresponding relaxed structures and energies. The lattice energy of the experimental structures is minimized by the program DMACRYS. The solution algorithm is based on a search of the parameter space using deterministic low-discrepancy sequences and on an efficient local minimization algorithm. The proposed method is applied to derive transferable exp-6 potential parameters for hydrocarbons, organosulphur compounds, azahydrocarbons, oxohydrocarbons and nitrogen-containing organosulphur compounds. Three different sets of parameters are developed, suitable for use in conjunction with three different models of electrostatics derived at the HF/6-31G(d,p), M06/6-31G(d,p) and MP2/6-31G(d,p) levels. A good fit is achieved for all the new sets of parameters, with a mean absolute error in sublimation enthalpies of less than 3.5 kJ/mol and an average rmsd15 of less than 0.35 Å. Prediction studies are performed for acetylene, tetracyanoethylene and blind test molecule XXII, and the generated lattice energy landscapes are refined with the new models. The observed experimental structures are predicted with better structural agreement, but with the same or higher ranking than those obtained with the previously used FIT parameter set.
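To make the fitting step concrete, a heavily simplified sketch of fitting the Buckingham (exp-6) parameters A, B and C of U(r) = A exp(-Br) - C/r^6 by least squares is shown below; the reference distances and energies are made up, and the actual CrystalEstimator procedure additionally relaxes full crystal structures with DMACRYS and searches the parameter space with low-discrepancy sequences.

```python
# Illustrative least-squares fit of exp-6 (Buckingham) parameters A, B, C
# to hypothetical reference pair energies.
import numpy as np
from scipy.optimize import least_squares

r_ref = np.array([3.0, 3.5, 4.0, 4.5, 5.0])      # distances in angstrom (made up)
e_ref = np.array([0.8, -0.9, -0.7, -0.4, -0.2])  # reference energies (made up)

def exp6(params, r):
    A, B, C = params
    return A * np.exp(-B * r) - C / r**6

def residuals(params):
    return exp6(params, r_ref) - e_ref

fit = least_squares(residuals, x0=[1000.0, 3.0, 100.0])
print("fitted A, B, C:", fit.x)
```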
APA, Harvard, Vancouver, ISO, and other styles
30

Yasheen, Sharifa. "Evaluation of Markov Models in Location Based Social Networks in Terms of Prediction Accuracy." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-13039.

Full text
Abstract:
Location Based Social Networks have attracted millions of mobile internet users. People can share their locations from their smartphones using social network services. The main purpose of check-ins is to give other users information about the places one visits. A Location Based Social Network with thousands of check-ins allows social behaviour to be learned through spatial-temporal effects, which enables different services such as place recommendation and traffic prediction. Through this information, we can get an idea of the important locations in a city and of human mobility. The main purpose of this thesis is to evaluate Markov models in Location Based Social Networks in terms of prediction accuracy. The features and basic characteristics of Location Based Social Networks are analysed before human mobility is modelled. Afterwards, human mobility is modelled using three methods. In all the models the check-ins are analysed based on the prior category: after estimating the user's most likely next check-in category, and according to the user's previous check-ins in that category, the model predicts the next possible check-in location. Finally, a comparison is made of the models' prediction accuracy.
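The category-level step that these models share (estimating the most likely next check-in category from the previous one) amounts to a first-order Markov chain over categories; a minimal sketch, with made-up category names, is given below.

```python
# Minimal sketch of a first-order Markov model over check-in categories:
# estimate transition probabilities from a user's history and predict the
# most likely next category. Category names are illustrative.
from collections import Counter, defaultdict

history = ["Food", "Work", "Food", "Gym", "Food", "Work", "Food", "Gym"]

transitions = defaultdict(Counter)
for prev, nxt in zip(history, history[1:]):
    transitions[prev][nxt] += 1

def predict_next(current):
    counts = transitions[current]
    total = sum(counts.values())
    if total == 0:
        return None, 0.0
    category, count = counts.most_common(1)[0]
    return category, count / total

print(predict_next("Food"))   # e.g. ('Work', 0.5); ties resolved by insertion order
```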
APA, Harvard, Vancouver, ISO, and other styles
31

Li, Yaoman, and 李耀满. "Efficient methods for improving the sensitivity and accuracy of RNA alignments and structure prediction." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2013. http://hdl.handle.net/10722/195977.

Full text
Abstract:
RNA plays an important role in molecular biology, and RNA sequence comparison is an important method for analysing gene expression. Since aligning RNA reads needs to handle gaps, mutations, poly-A tails, etc., it is much more difficult than aligning other sequences. In this thesis, we study RNA-Seq alignment tools, the existing gene information databases, and how to improve the accuracy of alignment and predict RNA secondary structure. The known gene information databases contain a large amount of reliable, already discovered gene information, and we note that most DNA alignment tools are well developed: they run much faster than existing RNA-Seq alignment tools and have higher sensitivity and accuracy. Combining them with a known gene information database, we present a method to align RNA-Seq data using DNA alignment tools, i.e. we use the DNA alignment tools to perform the alignment and use the gene information to convert the alignment to genome-based coordinates. Although the gene information databases are updated daily, there are still many genes and alternative splicings that have not been discovered. Thus, if our RNA alignment tool relied only on the known gene database, many reads coming from unknown genes or alternative splicings could not be aligned. We therefore present a combinational method that can cover potential alternative splicing junction sites. Combined with the original gene database, the new alignment tool can cover most of the alignments reported by other RNA-Seq alignment tools. Recently, many RNA-Seq alignment tools have been developed; they are more powerful and faster than the older generation of tools. However, RNA read alignment is much more complicated than other sequence alignment, and the alignments reported by some RNA-Seq alignment tools have low accuracy. We present a simple and efficient filter method based on the quality scores of the reads, which can remove most low-accuracy alignments. Finally, we present an RNA secondary structure prediction method that can predict pseudoknots (a type of RNA secondary structure) with high sensitivity and specificity.
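In spirit, a read-quality filter of the kind described above might look like the sketch below: it drops alignments whose reads have a low mean Phred quality. The tuple layout and threshold are assumptions for illustration, not the thesis's actual tool.

```python
# Sketch of a read-quality filter for alignments: compute the mean Phred
# score from an ASCII-encoded quality string (Phred+33) and drop alignments
# whose reads fall below a chosen threshold. The threshold is illustrative.
def mean_phred(quality_string, offset=33):
    return sum(ord(ch) - offset for ch in quality_string) / len(quality_string)

def filter_alignments(alignments, min_mean_quality=25.0):
    """alignments: iterable of (read_name, position, quality_string) tuples."""
    return [aln for aln in alignments if mean_phred(aln[2]) >= min_mean_quality]

sample = [
    ("read1", 10_450, "IIIIIHHHGG"),   # high quality
    ("read2", 22_871, "###%%%&&&'"),   # low quality
]
print(filter_alignments(sample))       # keeps read1 only
```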
published_or_final_version
Computer Science
Master
Master of Philosophy
APA, Harvard, Vancouver, ISO, and other styles
32

Christensen, Nikolaj Kruse, Ty Paul A. Ferre, Gianluca Fiandaca, and Steen Christensen. "Voxel inversion of airborne electromagnetic data for improved groundwater model construction and prediction accuracy." COPERNICUS GESELLSCHAFT MBH, 2017. http://hdl.handle.net/10150/623198.

Full text
Abstract:
We present a workflow for efficient construction and calibration of large-scale groundwater models that includes the integration of airborne electromagnetic (AEM) data and hydrological data. In the first step, the AEM data are inverted to form a 3-D geophysical model. In the second step, the 3-D geophysical model is translated, using a spatially dependent petrophysical relationship, to form a 3-D hydraulic conductivity distribution. The geophysical models and the hydrological data are used to estimate spatially distributed petrophysical shape factors. The shape factors primarily work as translators between resistivity and hydraulic conductivity, but they can also compensate for structural defects in the geophysical model.

The method is demonstrated for a synthetic case study with sharp transitions among various types of deposits. Besides demonstrating the methodology, we demonstrate the importance of using geophysical regularization constraints that conform well to the depositional environment. This is done by inverting the AEM data using either smoothness (smooth) constraints or minimum gradient support (sharp) constraints, where the use of sharp constraints conforms best to the environment. The dependency on AEM data quality is also tested by inverting the geophysical model using data corrupted with four different levels of background noise. Subsequently, the geophysical models are used to construct competing groundwater models for which the shape factors are calibrated. The performance of each groundwater model is tested with respect to four types of prediction that are beyond the calibration base: a pumping well's recharge area and groundwater age, respectively, are predicted by applying the same stress as for the hydrologic model calibration; and head and stream discharge are predicted for a different stress situation.

As expected, in this case the predictive capability of a groundwater model is better when it is based on a sharp geophysical model instead of a smoothness constraint. This is true for predictions of recharge area, head change, and stream discharge, while we find no improvement for prediction of groundwater age. Furthermore, we show that the model prediction accuracy improves with AEM data quality for predictions of recharge area, head change, and stream discharge, while there appears to be no accuracy improvement for the prediction of groundwater age.
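The translation step from the 3-D resistivity model to hydraulic conductivity via spatially distributed shape factors could, under a simple assumed log-linear petrophysical relationship, look roughly like the sketch below; the functional form and values are purely illustrative and are not the relationship calibrated in the paper.

```python
# Illustrative translation of a 3-D resistivity model into hydraulic
# conductivity using spatially variable shape factors. The log-linear form
# K = 10 ** (a * log10(rho) + b) is an assumed petrophysical relationship,
# not necessarily the one used in the study.
import numpy as np

rho = np.random.lognormal(mean=3.0, sigma=0.5, size=(10, 10, 5))  # resistivity [ohm m]
shape_a = np.full(rho.shape, -1.5)   # spatially distributed shape factors
shape_b = np.full(rho.shape, 1.0)    # (would be estimated against hydrological data)

K = 10.0 ** (shape_a * np.log10(rho) + shape_b)   # hydraulic conductivity [m/day]
print("K range:", K.min(), K.max())
```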
APA, Harvard, Vancouver, ISO, and other styles
33

Evans, Stephanie Ann. "Gender disparity in the prediction of recidivism the accuracy of the LSI-R modified /." Thesis, [Tuscaloosa, Ala. : University of Alabama Libraries], 2009. http://purl.lib.ua.edu/23.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Davuluri, Pavani. "Prediction of Breathing Patterns Using Neural Networks." VCU Scholars Compass, 2008. http://scholarscompass.vcu.edu/etd/718.

Full text
Abstract:
During radiotherapy treatment, it is difficult to synchronize the radiation beam with the tumor position. Many compensation techniques have been used before, but all of them have some system latency, up to a few hundred milliseconds. Hence it is necessary to predict the tumor position to compensate for the control system latency. In recent years, many attempts have been made to predict the position of a moving tumor during respiration, and analyzing external breathing signals provides one methodology for predicting the tumor position. Breathing patterns vary from very regular to irregular, and irregular breathing patterns make prediction difficult. This work presents a solution that utilizes neural networks as the predictive filter to determine the tumor position up to 500 milliseconds in the future. Two different neural network architectures, a feedforward backpropagation network and a recurrent network, are used for prediction; these networks are initialized in the same manner so that their prediction accuracies can be compared. Both networks predict well for all five breathing cases used in the research, and their results are acceptable and comparable. Furthermore, the network parameters are optimized using a genetic algorithm, and the optimization improves the accuracy of the networks. The results show that both networks are suitable for predicting different breathing behaviors.
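The core predictive-filter idea (learning to map a window of recent breathing samples to the sample a fixed latency ahead) can be sketched with a small feedforward network as below; the synthetic signal, sampling rate, window length and network size are assumptions, not the thesis's configuration.

```python
# Sketch: train a small feedforward network to predict a breathing signal
# 500 ms ahead from a sliding window of past samples.
import numpy as np
from sklearn.neural_network import MLPRegressor

fs = 10                          # samples per second (assumed)
t = np.arange(0, 120, 1 / fs)
signal = np.sin(2 * np.pi * 0.25 * t) + 0.05 * np.random.randn(t.size)  # synthetic breathing

lookahead = int(0.5 * fs)        # 500 ms ahead
window = 20                      # past samples used as input features

X = np.array([signal[i:i + window] for i in range(len(signal) - window - lookahead)])
y = signal[window + lookahead:]

split = int(0.8 * len(X))        # chronological train/test split
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])
print("test R^2:", model.score(X[split:], y[split:]))
```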
APA, Harvard, Vancouver, ISO, and other styles
35

Ahmed, War, and Mehrdad Bahador. "The accuracy of the LSTM model for predicting the S&P 500 index and the difference between prediction and backtesting." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-229415.

Full text
Abstract:
This work investigates the accuracy of the LSTM algorithm for predicting stock prices. LSTM is a form of deep learning algorithm that takes a set of data as input and learns a pattern from which it produces an output. Our results indicate that using backtesting as the sole method to verify the accuracy of a model can be fallible. In the future, researchers should take a fresh approach by also using real-time testing; we did this by letting the algorithm make predictions on future data. Regarding the accuracy of the model, we reached the conclusion that including more parameters improves accuracy.
In this work, we investigate how good forecasts can be made using the LSTM algorithm to predict stock prices. The LSTM algorithm is a deep learning method in which the algorithm is given several types of data as input and finds a pattern in the data that produces a result. In our results we concluded that one should not rely on backtesting alone to verify the results, but should also use the model to make predictions on future data. We can also add that reliability increases if several factors are used in the model.
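A bare-bones sketch of this kind of LSTM next-value predictor is given below, with a chronological train/test split so that the held-out segment plays the role of the "future data" the authors advocate testing on; the synthetic series and hyperparameters are illustrative assumptions.

```python
# Minimal LSTM next-value predictor on a synthetic price-like series.
# The chronological split keeps the last part of the series as unseen
# "future" data, in the spirit of testing beyond pure backtesting.
import numpy as np
import tensorflow as tf

series = np.cumsum(np.random.randn(1000)) + 100.0   # synthetic price path
window = 30

X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
X = X[..., np.newaxis]                               # (samples, timesteps, features)

split = int(0.8 * len(X))                            # chronological, not shuffled
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(window, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[:split], y[:split], epochs=5, batch_size=32, verbose=0)
print("MSE on held-out future segment:", model.evaluate(X[split:], y[split:], verbose=0))
```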
APA, Harvard, Vancouver, ISO, and other styles
36

Holub, Michal. "VLIV GEOMETRICKÉ PŘESNOSTI VYBRANÝCH OBRÁBĚCÍCH CENTER NA POŽADOVANÉ VLASTNOSTI VÝROBKŮ." Doctoral thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2012. http://www.nusl.cz/ntk/nusl-233989.

Full text
Abstract:
The main subject of this doctoral thesis is the influence of the geometrical accuracy of large CNC machine tools on the desired features of the produced workpieces. Due to the globalized market environment and competition, producers of machine tools have changed their strategy for delivering their products to customers. The main issue is not only to deliver the machine tool as such; supporting instructions related to the technology of the cutting process on the machine tool are of great importance. At delivery, the customer expects a new machine tool that will produce the specified workpiece with the desired accuracy. This thesis introduces the development of a novel methodology for measuring vertical lathes for the prediction of chosen geometrical parameters of workpieces. The main goal of this work has been to determine the influence of the geometrical accuracy of selected design groups of a vertical lathe on the future geometric accuracy of the workpiece. The proposed methodology has been developed and verified on a vertical lathe SKIQ30 produced by TOSHULIN, a.s. For identification of the chosen parameters of the vertical lathe, a measuring system using the latest measuring technologies has been applied. The basic tool for processing the measured data has been a set of statistical methods for predicting the behaviour of the measured design groups of the machine. The foundation for the statistical processing has been the calculation of geometrical deviations obtained from algorithms designed for the proposed measurement methodology. The proposed measurement methodology for vertical lathes has been divided into two parts. The first part addresses the methodology of measurement and evaluation of the linear axes, for which the Laser Track measuring system has been used; its employment turned out to be very suitable, and conclusions related to the accuracy of the measuring device have been drawn in the thesis. The second part of the proposed methodology comprises the observation and description of the rotating disc, for which non-contact position transducers have been used. In the course of the doctoral work it has been observed that the geometric behaviour of the machine is significantly affected by the cutting conditions. These include the loading of the rotating disc by the mass of the workpiece, the angular velocity of the rotating disc and the operating time of the machine. Based on these observations it can be stated that, for the prediction of workpiece features, it is essential to know the behaviour of the machine tool across the whole range of operating speeds and loadings of the rotating disc. Part of the proposed methodology for measuring vertical lathes appears very suitable for the design of a diagnostic system that could be applied to large rotating discs. Furthermore, it is recommended to extend this work in order to develop a unit for the compensation of geometrical errors on the rotating discs of vertical lathes.
APA, Harvard, Vancouver, ISO, and other styles
37

Michalíček, Michal. "PREDIKCE PRACOVNÍ PŘESNOSTI CNC OBRÁBĚCÍCH STROJŮ." Doctoral thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2013. http://www.nusl.cz/ntk/nusl-234159.

Full text
Abstract:
The main research topic of this dissertation deals with the influence of the working accuracy of large CNC machines on the required workpiece properties. The state of the art is constantly evolving, and customers' requirements on machine tool manufacturers are increasingly demanding: high accuracy, reliability, shorter delivery times and so on are required, and the manufacturer is expected to fulfil all of them. Machining accuracy depends, among other factors, on the positioning accuracy of the machine tool, which determines the position of the cutting tool relative to the workpiece. The accuracy of a particular machine tool is therefore a limiting factor in reaching the highest accuracy and quality of a workpiece. Acceptance test procedures are performed first on the manufacturer's shop floor and then at the customer's site. The main purpose of the testing procedures is to verify basic properties of the machine tool, such as its basic dimensions, the strokes in all respective coordinates, the machine travels and the spindle speeds. Machining of a workpiece to the precision specified by the customer is also performed. In this thesis, a new methodology to calculate and determine the errors arising in machining on vertical lathes is proposed, with the aim of predicting the machining accuracy of CNC machine tools. The objective of this work was to determine the influence of the machining accuracy of a CNC machine on the subsequent geometrical accuracy of the workpiece. The position of the tool relative to the workpiece is affected by kinematic inaccuracies, geometrical errors and errors related to machining forces. A vertical lathe SKIQ 30 produced by TOSHULIN, a.s. has been used as the testing machine on which the proposed methodology has been verified. In the course of the calculations performed in this thesis it was found that the machining inaccuracy of the machine, the main topic of interest, is influenced by the machining conditions, such as the cutting conditions, the material to be machined and even the setting of the working positions of the machine. For instance, the position of the cross slide is determined by the workpiece height, while the position of the lathe carriage is determined by the workpiece diameter, and the travel of the slide is also determined by the workpiece height. It can therefore be stated that, to predict the machining accuracy of workpieces, it is important to know the behavior of the machine across its whole working envelope. Based on the information acquired about the behavior of the machine, an appropriate workpiece setting position can be determined that allows the maximum reachable accuracy to be utilized with respect to the stiffness and geometrical accuracy of the machine tool. An appropriate setting of the initial working position may thus influence the subsequent geometrical accuracy of the workpiece.
APA, Harvard, Vancouver, ISO, and other styles
38

Freitas, Kimberly M. "Improving accuracy of acoustic prediction in the Philippine Sea through incorporation of mesoscale environmental effects." Thesis, Monterey, Calif. : Naval Postgraduate School, 2008. http://handle.dtic.mil/100.2/ADA483645.

Full text
Abstract:
Thesis (M.S. in Meteorology and Physical Oceanography)--Naval Postgraduate School, June 2008.
Thesis Advisor(s): Colosi, John A. "June 2008." Description based on title screen as viewed on August 22, 2008. Includes bibliographical references (p. 49-50). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
39

Wålinder, Andreas. "Evaluation of logistic regression and random forest classification based on prediction accuracy and metadata analysis." Thesis, Linnéuniversitetet, Institutionen för matematik (MA), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-35126.

Full text
Abstract:
Model selection is an important part of classification. In this thesis we study two classification models, logistic regression and random forest. They are compared and evaluated based on prediction accuracy and metadata analysis. The models were trained on 25 diverse datasets. We calculated the prediction accuracy of both models using RapidMiner. We also collected metadata for the datasets concerning the number of observations, the number of predictor variables and the number of classes in the response variable.
There is a correlation between the performance of logistic regression and random forest, with a significant correlation of 0.60 and confidence interval [0.29, 0.79]. The models appear to perform similarly across the datasets, with performance more influenced by the choice of dataset than by model selection.
Random forest, with an average prediction accuracy of 81.66%, performed better on these datasets than logistic regression, with an average prediction accuracy of 73.07%. The difference is however not statistically significant, with a p-value of 0.088 for Student's t-test.
Multiple linear regression analysis reveals that none of the analysed metadata have a significant linear relationship with logistic regression performance; the regression of logistic regression performance on metadata has a p-value of 0.66. We get similar results for random forest performance: the regression of random forest performance on metadata has a p-value of 0.89, and none of the analysed metadata have a significant linear relationship with random forest performance.
We conclude that the prediction accuracies of logistic regression and random forest are correlated. Random forest performed slightly better on the studied datasets, but the difference is not statistically significant. The studied metadata do not appear to have a significant effect on the prediction accuracy of either model.
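The comparison itself is straightforward to reproduce in outline; the sketch below computes cross-validated accuracy for the two models on a single public dataset (the thesis used RapidMiner and 25 datasets, so the dataset, folds and hyperparameters here are only illustrative).

```python
# Sketch: cross-validated prediction accuracy of logistic regression and
# random forest on one dataset, as a stand-in for the thesis's comparison.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

models = {
    "logistic regression": LogisticRegression(max_iter=5000),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```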
APA, Harvard, Vancouver, ISO, and other styles
40

Segerstedt, Gustav, and Theodor Uhmeier. "How accuracy of time-series prediction for cryptocurrency pricing is affected by the sampling period." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-232141.

Full text
Abstract:
Cryptocurrencies and their associated market currently exceed $300 billion in value and have had an all-time high surpassing $850 billion. Being able to predict market movements and future valuations for a cryptocurrency would be an invaluable and very profitable tool for designing successful investment strategies. This thesis compares how time-series predictions of the cryptocurrency Ether using a long short-term memory (LSTM) neural network are affected by altering the sampling period. Specifically, we look at how sampling periods of 30 minutes, 2 hours and 4 hours affect a prediction horizon of 4 hours. The results are also verified across a varying number of neurons (10, 20 and 40) for each of the two LSTM layers of the model. The results indicate that the accuracy of predictions can be improved by decreasing the sampling period of the data. However, there does not seem to be any clear trend in how changing the number of neurons per LSTM layer affects prediction accuracy.
Cryptocurrencies and their associated market today exceed $300 billion in market value and have previously reached as much as $850 billion. Being able to predict a currency's value would be a very important tool in a profitable investment strategy. This thesis compares how the prediction precision for a specific time horizon in a neural network is affected by the sampling period of the data. More precisely, this is done with a long short-term memory (LSTM) model on the cryptocurrency Ether. Specifically, it is investigated how an LSTM model's prediction four hours ahead is affected by changing the sampling period from 4 hours to 2 hours and finally 30 minutes. The results are also validated for a varying number of neurons per network layer (10, 20 and 40). The results of this study indicate that prediction precision can be improved by decreasing the sampling period of the dataset. However, no clear trend is seen when the number of neurons per network layer is changed.
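The preprocessing step that the thesis varies, resampling the raw price series to 30-minute, 2-hour and 4-hour periods before building supervised windows for the LSTM, can be sketched as follows; the synthetic minute-level series is an illustrative stand-in for Ether price data.

```python
# Sketch: resample a minute-level price series to the three sampling
# periods compared in the thesis before constructing LSTM training windows.
import numpy as np
import pandas as pd

index = pd.date_range("2018-01-01", periods=10_000, freq="1min")
prices = pd.Series(np.cumsum(np.random.randn(len(index))) + 500.0, index=index)

for period in ["30min", "2H", "4H"]:
    resampled = prices.resample(period).last()
    print(period, "->", len(resampled), "samples")
```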
APA, Harvard, Vancouver, ISO, and other styles
41

Yonge, Katherine Chandler. "Criminal profile accuracy following training in inductive and deductive approaches." Master's thesis, Mississippi State : Mississippi State University, 2008. http://library.msstate.edu/etd/show.asp?etd=etd-03312008-194642.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Agarwalla, Yashika. "Prediction of land cover in continental United States using machine learning techniques." Thesis, Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/53613.

Full text
Abstract:
Land cover is a reliable source for studying changes in land use patterns at a large scale. With the advent of satellite imagery and remote sensing technologies, land cover classification has become easier and more reliable. In contrast to conventional land cover classification methods that make use of land and aerial photography, this research uses small-scale Digital Elevation Maps and their corresponding land cover images obtained from Google Earth Engine. Two machine learning techniques, Boosted Regression Trees and Image Analogy, have been used for classification of land cover regions in the continental United States. The topographical features selected for this study include slope, aspect, elevation and topographical index (TI). We assess the efficiency of machine learning techniques in land cover classification using satellite data to establish the topographic-land cover relation. The thesis establishes the topographic-land cover relation, which is crucial for conservation planning and habitat or species management. The main contribution of the research is its demonstration of the dominance of various topographical attributes and of the ability of the techniques used to predict land cover over large regions and to reproduce land cover maps in high resolution. In comparison to traditional remote sensing methods for developing land cover maps, such as aerial photography, both methods presented are inexpensive and fast. The need for this research is in line with past studies, which show that large-scale data processing, integration and interpretation make automated and accurate methods of mapping land cover change highly desirable.
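A rough sketch of the boosted-trees side of such a study (classifying land cover from the four topographic features) might look like the following; the synthetic feature table and toy labelling rule are illustrative stand-ins for the DEM-derived features and Google Earth Engine labels.

```python
# Sketch: gradient-boosted trees predicting a land cover class from slope,
# aspect, elevation and topographic index (TI).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.uniform(0, 45, n),        # slope (degrees)
    rng.uniform(0, 360, n),       # aspect (degrees)
    rng.uniform(0, 3000, n),      # elevation (m)
    rng.uniform(-5, 15, n),       # topographic index
])
# Toy labelling rule just to make the sketch runnable (3 land cover classes)
y = np.digitize(X[:, 2] + 30 * X[:, 0], bins=[800, 2500])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier().fit(X_train, y_train)
print("held-out accuracy:", round(clf.score(X_test, y_test), 3))
```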
APA, Harvard, Vancouver, ISO, and other styles
43

Burger, Sarah Beth. "My Spider-Sense Needs Calibrating: Anticipated Reactions to Spider Stimuli Poorly Predict Initial Responding." Diss., The University of Arizona, 2012. http://hdl.handle.net/10150/222891.

Full text
Abstract:
The present study attempted to answer two general questions: (1) what is the relation between expected and actual reactions to a spider in individuals afraid of spiders? and (2) are inaccurate expectancies updated on the basis of experience? Behavioral and cognitive-behavioral learning models of fear, treatment protocols developed in relation to these, and recent findings from our laboratory necessitated answers to two additional questions: (3) does the expectation accuracy of individuals who meet DSM-IV criteria for diagnosis with a specific phobia differ from that of individuals who are fearful but do not meet criteria? and (4) does expectation accuracy vary as a function of context? Two final questions were obvious: (5) do the actual reactions of individuals who meet criteria for diagnosis differ predictably from those of fearful individuals? and (6) do reactions vary contextually? Student participants reported and tested a series of trial-specific expectancies about their reactions to a live, mechanical, or virtual tarantula over seven trials. Participants then completed three final trials in the presence of a live tarantula. Participants poorly anticipated the quality and intensity of their initial reactions, but expectation accuracy increased quickly. No clear tendencies for over- or under-prediction emerged. Participants updated expectancies in relation to prior trial expectation accuracy, either increasing or decreasing their predicted reactions relative to the original expectancy. Participants who met criteria for diagnosis with a specific phobia consistently anticipated and reported more intense reactions than did those who were fearful, but diagnostic status was not predictive of expectation accuracy. Participants in the live and virtual spider groups reported similar levels of fear that were greater than those in the mechanical spider group. Participants in the virtual spider group more readily reduced the distance maintained between themselves and the spider stimulus than did those in the live or mechanical spider groups. Expectation accuracy did not vary contextually. Results are discussed in light of the theoretical models presented, with findings lending greater support to behavioral models of fear learning relative to cognitive models that postulate a substantial role for conscious processing and appraisal in specific fear. Practical recommendations are made to researchers and clinicians based on present findings.
APA, Harvard, Vancouver, ISO, and other styles
44

Dillon, Joshua V. "Stochastic m-estimators: controlling accuracy-cost tradeoffs in machine learning." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/42913.

Full text
Abstract:
m-Estimation represents a broad class of estimators, including least-squares and maximum likelihood, and is a widely used tool for statistical inference. Its successful application, however, often requires negotiating physical resources for desired levels of accuracy. These limiting factors, which we abstractly refer to as costs, may be computational, such as time-limited cluster access for parameter learning, or they may be financial, such as purchasing human-labeled training data under a fixed budget. This thesis explores these accuracy-cost tradeoffs by proposing a family of estimators that maximizes a stochastic variation of the traditional m-estimator. Such "stochastic m-estimators" (SMEs) are constructed by stitching together different m-estimators at random. Each such instantiation resolves the accuracy-cost tradeoff differently, and taken together they span a continuous spectrum of accuracy-cost tradeoff resolutions. We prove the consistency of the estimators and provide formulas for their asymptotic variance and statistical robustness. We also assess their cost for two concerns typical of machine learning: computational complexity and labeling expense. For the sake of concreteness, we discuss experimental results in the context of a variety of discriminative and generative Markov random fields, including Boltzmann machines, conditional random fields, model mixtures, etc. The theoretical and experimental studies demonstrate the effectiveness of the estimators when computational resources are insufficient or when obtaining additional labeled samples is necessary. We also demonstrate that in some cases the stochastic m-estimator is associated with robustness, thereby increasing its statistical accuracy and representing a win-win.
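Written schematically, the construction described above can be rendered as follows; this is only a hedged reading of the abstract, and the thesis's exact randomization and weighting may differ.

```latex
% Classical m-estimation maximizes a single criterion over the sample:
\[
  \hat{\theta}_{m} \;=\; \arg\max_{\theta} \sum_{i=1}^{n} m(x_i;\theta).
\]
% A stochastic m-estimator "stitches together" several criteria
% m^{(1)},\dots,m^{(K)} by drawing a random index per observation
% (a schematic rendering; the thesis's construction may differ):
\[
  \hat{\theta}_{\mathrm{SME}} \;=\; \arg\max_{\theta} \sum_{i=1}^{n} m^{(Z_i)}(x_i;\theta),
  \qquad Z_i \overset{\text{iid}}{\sim} \mathrm{Categorical}(\pi_1,\dots,\pi_K).
\]
```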
APA, Harvard, Vancouver, ISO, and other styles
45

Graefe, Andreas [Verfasser], and C. [Akademischer Betreuer] Weinhardt. "Prediction Markets versus Alternative Methods : Empirical Tests of Accuracy and Acceptability / Andreas Graefe. Betreuer: C. Weinhardt." Karlsruhe : KIT-Bibliothek, 2009. http://d-nb.info/1014220912/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

König, Immanuel [Verfasser]. "An algorithmic approach to increase the context prediction accuracy by utilizing multiple context sources / Immanuel König." Kassel : Universitätsbibliothek Kassel, 2017. http://d-nb.info/1155326016/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Schopp, Pascal [Verfasser], and Albrecht E. [Akademischer Betreuer] Melchinger. "Factors influencing the accuracy of genomic prediction in plant breeding / Pascal Schopp ; Betreuer: Albrecht E. Melchinger." Hohenheim : Kommunikations-, Informations- und Medienzentrum der Universität Hohenheim, 2019. http://d-nb.info/1176624997/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Santos, William O. "An analysis of the prediction accuracy of the U.S. Navy repair turn-around time forecast model." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2003. http://library.nps.navy.mil/uhtbin/hyperion-image/03Jun%5FSantos.pdf.

Full text
Abstract:
Thesis (M.S. in Operations Research)--Naval Postgraduate School, June 2003.
Thesis advisor(s): Robert A. Koyak, Samuel E. Buttrey. Includes bibliographical references (p. 55). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
49

Schopp, Pascal [Verfasser], and Albrecht E. [Akademischer Betreuer] Melchinger. "Factors influencing the accuracy of genomic prediction in plant breeding / Pascal Schopp ; Betreuer: Albrecht E. Melchinger." Hohenheim : Kommunikations-, Informations- und Medienzentrum der Universität Hohenheim, 2019. http://d-nb.info/1176624997/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Zhai, Yuzheng. "Improving scalability and accuracy of text mining in grid environment." Connect to thesis, 2009. http://repository.unimelb.edu.au/10187/5927.

Full text
Abstract:
The advance in technologies such as massive storage devices and high-speed internet has led to an enormous increase in the volume of available documents in electronic form. These documents represent information in a complex and rich manner that cannot be analysed using conventional statistical data mining methods. Consequently, text mining has developed as a growing new technology for discovering knowledge from textual data and managing textual information. Processing and analysing textual information can potentially yield valuable and important information, yet these tasks also require an enormous amount of computational resources due to the sheer size of the data available. Therefore, it is important to enhance the existing methodologies to achieve better scalability, efficiency and accuracy.
The emerging Grid technology shows promising results in solving the problem of scalability by splitting the work of text clustering algorithms into a number of jobs, each to be executed separately and simultaneously on different computing resources. This allows for a substantial decrease in processing time while maintaining a similar level of quality.
To improve the quality of the text clustering results, a new document encoding method is introduced that takes into consideration the semantic similarities of words. In this way, documents that are similar in content are more likely to be grouped together.
One of the ultimate goals of text mining is to help us gain insights into the problem and to assist in the decision-making process together with other sources of information. Hence we tested the effectiveness of incorporating the text mining method in the context of stock market prediction. This is achieved by integrating the outcomes obtained from text mining with those from data mining, which results in a more accurate forecast than using any single method.
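One way a semantics-aware document encoding of the kind mentioned above could work is sketched below: raw term counts are spread onto semantically related words via a similarity matrix, so documents using different but related vocabulary end up closer. The vocabulary and similarity values are made up for illustration and are not the thesis's method.

```python
# Sketch of a semantics-aware document encoding: smooth each word's count
# with a word-word similarity matrix before comparing documents.
import numpy as np

vocab = ["stock", "share", "market", "weather"]
# Hypothetical word-word semantic similarity matrix (rows/cols follow vocab)
S = np.array([
    [1.0, 0.8, 0.5, 0.0],
    [0.8, 1.0, 0.5, 0.0],
    [0.5, 0.5, 1.0, 0.1],
    [0.0, 0.0, 0.1, 1.0],
])

def encode(tokens):
    counts = np.array([tokens.count(w) for w in vocab], dtype=float)
    smoothed = S @ counts                  # semantic smoothing of raw counts
    norm = np.linalg.norm(smoothed)
    return smoothed / norm if norm > 0 else smoothed

doc_a = encode("stock market report on stock prices".split())
doc_b = encode("share prices moved the market".split())
print("cosine similarity:", float(doc_a @ doc_b))
```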
APA, Harvard, Vancouver, ISO, and other styles