Dissertations / Theses on the topic 'Computer errors'


Consult the top 50 dissertations / theses for your research on the topic 'Computer errors.'


You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Bootkrajang, Jakramate. "Supervised learning with random labelling errors." Thesis, University of Birmingham, 2013. http://etheses.bham.ac.uk//id/eprint/4487/.

Abstract:
Classical supervised learning from a training set of labelled examples assumes that the labels are correct. But in reality labelling errors may originate, for example, from human mistakes, diverging human opinions, or errors of the measuring instruments. In such cases the training set is misleading and in consequence the learning may suffer. In this thesis we consider probabilistic modelling of random label noise. The goal of this research is two-fold. First, to develop new improved algorithms and architectures from a principled footing which are able to detect and bypass the unwanted effects of mislabelling. Second, to study the performance of such methods both empirically and theoretically. We build upon two classical probabilistic classifiers, the normal discriminant analysis and the logistic regression and introduce the label-noise robust versions of these classifiers. We also develop useful extensions such as a sparse extension and a kernel extension in order to broaden applicability of the robust classifiers. Finally, we devise an ensemble of the robust classifiers in order to understand how the robust models perform collectively. Theoretical and empirical analysis of the proposed models show that the new robust models are superior to the traditional approaches in terms of parameter estimation and classification performance.
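The label-noise robust classifiers described in this abstract can be conveyed with a small sketch. The snippet below is a minimal, hypothetical Python illustration (not the thesis code): it assumes a known label-flipping matrix omega, where omega[j, k] is the probability that a true label j is recorded as k, and evaluates the likelihood of the observed, possibly mislabelled, labels under a logistic regression model by marginalising over the unknown true label.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def noisy_label_log_likelihood(w, X, y_noisy, omega):
    """Log-likelihood of observed (possibly mislabelled) binary labels.

    omega[j, k] = P(observed label k | true label j); rows sum to 1.
    """
    p1 = sigmoid(X @ w)                        # P(true label = 1 | x)
    p_true = np.column_stack([1 - p1, p1])     # columns: true label 0, 1
    p_obs = p_true @ omega                     # marginalise over the unknown true label
    return np.sum(np.log(p_obs[np.arange(len(y_noisy)), y_noisy]))

# Toy data: 5% of true 0s and 15% of true 1s are flipped before observation.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
omega = np.array([[0.95, 0.05],
                  [0.15, 0.85]])
flip = rng.random(len(y)) < omega[y, 1 - y]
y_noisy = np.where(flip, 1 - y, y)
print(noisy_label_log_likelihood(np.ones(3), X, y_noisy, omega))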
2

Afifi, Faten Helmy. "Detecting errors in nonlinear functions for computer software." Case Western Reserve University School of Graduate Studies / OhioLINK, 1992. http://rave.ohiolink.edu/etdc/view?acc_num=case1060013182.

3

Guo, Shijing. "Systematic analysis and modelling of diagnostic errors in medicine." Thesis, City University London, 2016. http://openaccess.city.ac.uk/15125/.

Abstract:
Diagnostic accuracy is an important index of the quality of health care service. Missed, wrong or delayed diagnosis has a direct effect on patient safety. Diagnostic errors have been discussed at length; however it still lacks a systemic research approach. This thesis takes the diagnostic process as a system and develops a systemic model of diagnostic errors by implementing system dynamics modelling combined with regression analysis. It aims to propose a better way of studying diagnostic errors as well as a deeper understanding of how factors affect the number of possible errors at each step of the diagnostic process and how factors contribute to patient outcomes in the end. It is executed following two parts: In the first part, a qualitative model is developed to demonstrate how errors can happen during the diagnostic process; in other words, the model illustrates the connections among key factors and dependent variables. It starts from discovering key factors of diagnostic errors, producing a hierarchical list of factors, and then illustrates interrelation loops that show how relevant factors are linked with errors. The qualitative model is based on the findings of a systematic literature review and further refined by experts’ reviews. In the second part, a quantitative model is developed to provide system behaviour simulations, which demonstrates the quantitative relations among factors and errors during the diagnostic process. Regression modelling analysis is used to estimate the quantitative relationships among multi factors and their dependent variables during the diagnostic phase of history taking and physical examinations. The regression models are further applied into quantitative system dynamics modelling ‘stock and flow diagrams’. The quantitative model traces error flows during the diagnostic process, and simulates how the change of one or more variables affects the diagnostic errors and patient outcomes over time. The change of the variables may reflect a change in demand from policy or a proposed external intervention. The results suggest the systemic model has the potential to help understand diagnostic errors, observe model behaviours, and provide risk-free simulation experiments for possible strategies.
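As a rough illustration of the "stock and flow" simulation idea mentioned in this abstract, the sketch below integrates a single stock of undetected diagnostic errors with an inflow and an outflow. All rates and variable names are invented for illustration and are not taken from the thesis model.

import numpy as np

dt, horizon = 0.1, 50.0
steps = int(horizon / dt)

consultations_per_day = 100.0
error_rate = 0.05          # errors introduced per consultation (illustrative)
detection_rate = 0.10      # fraction of accumulated errors caught per day (illustrative)

undetected_errors = 0.0    # the "stock"
history = []
for _ in range(steps):
    inflow = consultations_per_day * error_rate      # new diagnostic errors per day
    outflow = detection_rate * undetected_errors     # errors found and corrected per day
    undetected_errors += (inflow - outflow) * dt     # Euler integration of the stock
    history.append(undetected_errors)

print(f"steady-state estimate: {history[-1]:.1f} undetected errors")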
4

Kamal, Muhammad. "Software design methods and errors." Thesis, University of Liverpool, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.317143.

5

Lieberman, Eric (Eric W.). "Facemail: preventing common errors when composing email." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/36804.

Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006.
Includes bibliographical references (p. 85-86).
Facemail is a system designed to investigate and prevent common errors that users make while composing emails. Users often accidentally send email to incorrect recipients by mistyping an email address, accidentally clicking "Reply-to-all" rather than just "Reply", or using the wrong email address altogether. Facemail is a user interface addition to an email client that provides the user with more information about the recipients of their email by showing their actual faces. This form of information is much more usable than the simple text in current displays, and it allows the user to determine whether his email is going to the correct people with only a glance. This thesis discusses the justification for this system, as well as the challenges that arose in making it work. In particular, it discusses how to acquire images of users based on their email address, and how to interact with lists, both in learning their members as well as displaying them to the user. This thesis discusses how Facemail fits into current research as well as how its ideas could be expanded into further research.
by Eric Lieberman.
M.Eng.
6

Jeffrey, Dennis Bernard. "Dynamic state alteration techniques for automatically locating software errors." Diss., [Riverside, Calif.] : University of California, Riverside, 2009. http://proquest.umi.com/pqdweb?index=0&did=1899476671&SrchMode=2&sid=2&Fmt=2&VInst=PROD&VType=PQD&RQT=309&VName=PQD&TS=1268336630&clientId=48051.

Abstract:
Thesis (Ph. D.)--University of California, Riverside, 2009.
Includes abstract. Title from first page of PDF file (viewed March 11, 2010). Available via ProQuest Digital Dissertations. Includes bibliographical references (p. 223-234). Also issued in print.
7

Liu, Jiaqi. "Handling Soft and Hard Errors for Scientific Applications." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1483632126075067.

8

Cal, Semih. "Evaluating Query Estimation Errors Using Bootstrap Sampling." Youngstown State University / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ysu1627358871966099.

9

Chang, Alice Ai-Yuan. "Models of common errors in a Tetris session." Thesis, Massachusetts Institute of Technology, 1990. https://hdl.handle.net/1721.1/128800.

Abstract:
Thesis (B.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1990.
Includes bibliographical references (leaf 71).
by Alice Ai-Yuan Chang.
10

Lyons, Laura Christine. "An investigation of systematic errors in machine vision hardware." Thesis, Georgia Institute of Technology, 1989. http://hdl.handle.net/1853/16759.

11

Snowden, D. S. "Knowledge-based diagnosis of semantic errors in ADA programs." Thesis, University of York, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.379017.

12

Fletcher, Simon. "Computer aided system for intelligent implementation of machine tool error reduction methodologies." Thesis, University of Huddersfield, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.368312.

13

Sexton, John A. "Detecting errors in software using a parameter checker: an analysis." Online version of thesis, 1989. http://hdl.handle.net/1850/10585.

14

Di Giacomo, Benedito. "Computer aided calibration and hybrid compensation of geometric errors in coordinate measuring machines." Thesis, University of Manchester, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.306885.

15

Hassanzadeh, Nezami Setareh. "A Study of Errors, Corrective Feedback and Noticing in Synchronous Computer Mediated Communication." Thesis, Linköpings universitet, Avdelningen för språk och kultur, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-88411.

Abstract:
This study investigated the different types of errors that EFL learners produce in chat logs and also analyzed the different types of corrective feedback given by the teacher. An eye tracker was employed to study the eye movements of the participants to see how they notice the corrective feedback. This investigation can assist teachers to act better in online classrooms and helps them understand which type of corrective feedback is most likely to result in uptake based on noticing. The results showed that the most common errors in chat logs were related to grammar. It was also found that both recasts and metalinguistic feedback were noticed most of the time during the chat sessions although only a few of them led to uptake in post task session.
16

VANKAMAMIDI, SRIHARSHA. "Fusing Joint Information from Multiple Kinect Sensors to Detect Errors in Exercises." University of Akron / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=akron1477921964956213.

17

Feng, Michelle. "Automatically Fixing Syntax Errors with PEST, a Python Tool for Beginners." Scholarship @ Claremont, 2018. http://scholarship.claremont.edu/scripps_theses/1148.

Abstract:
Psycholinguistic research shows that it is unreasonable to expect programmers to easily find minor typos in their otherwise correct code. The Python Error Support Tool PEST was designed and developed to address this. PEST offers an explanation for why the error happened and presents a list of possible fixes that will allow the user’s code to compile. This tool was evaluated by several students with a beginner’s level of expertise in Python, and feedback was generally positive with tangible steps for improvement.
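The repair idea described above can be sketched in a few lines of Python. The snippet below is illustrative only and does not reproduce PEST's actual rules: it catches the SyntaxError, applies a handful of hypothetical candidate edits to the offending line, and reports any edit that lets the code compile.

# Candidate edits are illustrative, not PEST's actual repair rules.
CANDIDATE_FIXES = [
    ("append ':'", lambda line: line + ":"),
    ("append ')'", lambda line: line + ")"),
    ("replace '=' with '=='", lambda line: line.replace("=", "==", 1)),
]

def suggest_fixes(source: str):
    try:
        compile(source, "<student code>", "exec")
        return []                                   # nothing to fix
    except SyntaxError as err:
        bad_lineno = err.lineno or 1
    lines = source.splitlines()
    bad_index = min(bad_lineno, len(lines)) - 1
    suggestions = []
    for label, fix in CANDIDATE_FIXES:
        patched = lines.copy()
        patched[bad_index] = fix(patched[bad_index])
        try:
            compile("\n".join(patched), "<patched>", "exec")
            suggestions.append((label, patched[bad_index]))
        except SyntaxError:
            pass
    return suggestions

print(suggest_fixes("if x == 1\n    print(x)"))     # suggests appending ':' to line 1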
18

Lee, John Sie Yuen 1977. "Automatic correction of grammatical errors in non-native English text." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/53292.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 99-107).
Learning a foreign language requires much practice outside of the classroom. Computer-assisted language learning systems can help fill this need, and one desirable capability of such systems is the automatic correction of grammatical errors in texts written by non-native speakers. This dissertation concerns the correction of non-native grammatical errors in English text, and the closely related task of generating test items for language learning, using a combination of statistical and linguistic methods. We show that syntactic analysis enables extraction of more salient features. We address issues concerning robustness in feature extraction from non-native texts; and also design a framework for simultaneous correction of multiple error types. Our proposed methods are applied on some of the most common usage errors, including prepositions, verb forms, and articles. The methods are evaluated on sentences with synthetic and real errors, and in both restricted and open domains. A secondary theme of this dissertation is that of user customization. We perform a detailed analysis on a non-native corpus, illustrating the utility of an error model based on the mother tongue. We study the benefits of adjusting the correction models based on the quality of the input text; and also present novel methods to generate high-quality multiple-choice items that are tailored to the interests of the user.
by John Sie Yuen Lee.
Ph.D.
19

Chu, Amy 1980. "Mutual disambiguation of recognition errors in a multimodal navigational agent." Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/87392.

20

Xie, Chuanlong. "Model checking for regressions when variables are measured with errors." HKBU Institutional Repository, 2017. https://repository.hkbu.edu.hk/etd_oa/445.

Abstract:
In this thesis, we investigate model checking problems for parametric single-index regression models when the variables are measured with different types of errors. The large sample behaviours of the test statistics can be used to develop properly centered and scaled model checking procedures. In addition, a dimension reduction model-adaptive strategy is employed, subject to the special requirements of models with measurement errors, to improve the proposed testing procedures. This makes the test statistics converge to their weak limit under the null hypothesis with convergence rates that do not depend on the dimension of the predictor vector. Furthermore, the proposed tests behave like a classical local smoothing test with only a one-dimensional predictor. Therefore, the proposed methods have potential for alleviating the difficulties associated with high dimensionality in hypothesis testing. Chapter 2 provides some tests for a parametric single-index regression model when predictors are measured with errors in an additive manner and a validation dataset is available. The two proposed tests have consistency rates that do not depend on the dimension of the predictor vector. One of these tests has a bias term that may become arbitrarily large with increasing sample size, but has smaller asymptotic variance. The other test is asymptotically unbiased with larger asymptotic variance. Both are still omnibus against general alternatives. In addition, a systematic study is conducted to give insight into the effect of the ratio between the size of the primary data and the size of the validation data on the asymptotic behaviour of these tests. Simulation studies are carried out to examine the finite-sample performances of the proposed tests. The tests are also applied to a real data set about breast cancer with validation data obtained from a nutrition study. Chapter 3 introduces a minimum projected-distance test for a parametric single-index regression model when predictors are measured with Berkson-type errors. The distribution of the measurement error is assumed to be known up to several parameters. This test is constructed by combining the minimum distance test with a dimension reduction model-adaptive strategy. After proper centering, the minimum projected-distance test statistic is asymptotically normal at a convergence rate of order nh^(1/2) and can detect a sequence of local alternatives distinct from the null model at a rate of order n^(-1/2) h^(-1/4), where n is the sample size and h is a sequence of bandwidths tending to 0 as n tends to infinity. These rates do not depend on the dimensionality of the predictor vector, which implies that the proposed test has potential for alleviating the curse of dimensionality in hypothesis testing in this field. Further, as the test is asymptotically biased, two bias-correction methods are suggested to construct asymptotically unbiased tests. In addition, we discuss some details of the implementation of the proposed tests and then provide a simplified procedure. Simulations indicate desirable finite-sample performances of the tests. We also apply the proposed model checking procedures to two real datasets to illustrate the effects of air pollution on emphysema. Chapter 4 provides a nonparametric test for checking a parametric single-index regression model when the predictor vector and response are measured with distortion errors.
We estimate the true values of response and predictor, and then plug the estimated values into a test statistic to develop a model checking procedure. The dimension reduction model-adaptive strategy is also employed to improve its theoretical properties and finite sample performance. Another interesting observation in this work is that, with properly selected bandwidths and kernel functions in a limited range, the proposed test statistic has the same limiting distribution as that under the classical regression setup without distortion measurement errors. Simulation studies are conducted.
21

Müller, Miguel. "Self-healing Javascript Errors Caused by the Browser Extension Privacy Badger." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-303012.

Abstract:
As today’s web is filled with privacy-invasive third-party trackers, users are turning to privacy extensions to prevent their browsing habits from leaking. However, research has shown that privacy extensions can decrease website quality and even break meaningful functionality. Our research addresses this problem by performing automated repairs on Javascript code that has been broken by Privacy Badger, a heuristics-based privacy extension. First, we study how the use of Privacy Badger affects the prevalence of Javascript errors on 11 665 urls. We find errors caused by Privacy Badger on 758 (6.5%) urls. We also observe a 74% increase in Javascript errors, and a 27% increase in urls affected by Javascript errors when browsing with Privacy Badger. Using this data, we investigate how BikiniProxy, an automated HTML and Javascript rewriting proxy consisting of five self-healing strategies, performs on errors caused by Privacy Badger. Out of 751 web pages with errors caused by Privacy Badger, 215 (29%) had at least one such error healed by BikiniProxy. Additionally, we recognize a shortcoming of BikiniProxy’s line skipper strategy, and propose an improvement to it. Repairing web pages using our modified version of BikiniProxy reduces the number of errors on 12.9% more urls. Finally, we show that repairing errors using BikiniProxy can restore functionality that has been broken by Privacy Badger. But we can only detect such cases in two urls out of the hundreds repaired, which shows that the repair approach suffers from overfitting in our context. Our most important insight is that privacy extensions can break functionality without blocking any resource that the functionality is dependent on, and that these are the cases where BikiniProxy can restore the functionality.
22

Westberg-Bracewell, Linda. "ErrorBuster : a computer program designed to remediate persistent errors of Francophone speakers of English." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ59238.pdf.

23

Boonsalee, Siwaphong 1974. "Effects of random surface errors on the performance of paraboloidal reflectors." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/8940.

Abstract:
Thesis (M.Eng. and S.B.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2001.
Includes bibliographical references (leaves 146-151).
A program based on ray tracing has been developed to study the radiation patterns of paraboloidal reflector antennas whose surfaces are subjected to random errors with the emphasis on using an accurate representation of the statistics of the random surface errors. An ensemble of Gaussian random surfaces is created to be used with the Monte Carlo simulation. The average patterns from different surface root-mean-square values are presented for both the co-polarized and cross-polarized fields on the E-plane, H-plane, and 45-degree plane. They are compared with results based on physical optics and the antenna tolerance theory.
by Siwaphong Boonsalee.
M.Eng. and S.B.
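The antenna tolerance theory mentioned in the abstract above relates surface roughness to gain loss through Ruze's formula, G/G0 = exp(-(4*pi*eps/lambda)^2). The sketch below, which is illustrative and unrelated to the thesis ray-tracing program, draws an ensemble of Gaussian phase errors and checks the Monte Carlo average gain ratio against that formula.

import numpy as np

rng = np.random.default_rng(1)
wavelength = 0.03            # 10 GHz, metres (assumed for illustration)
n_elements = 10_000          # aperture sample points
n_trials = 200               # surfaces in the ensemble

for eps_rms in (0.0005, 0.001, 0.002):                     # surface RMS error, metres
    sigma_phase = 4.0 * np.pi * eps_rms / wavelength       # two-way phase error std
    gains = []
    for _ in range(n_trials):
        delta = rng.normal(0.0, sigma_phase, n_elements)   # one Gaussian error surface
        gains.append(abs(np.mean(np.exp(1j * delta))) ** 2)
    ruze = np.exp(-sigma_phase ** 2)
    print(f"eps={eps_rms * 1e3:.1f} mm  MC gain ratio={np.mean(gains):.3f}  Ruze={ruze:.3f}")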
24

Wennerström, Hjalmar. "Meteorological impact and transmission errors in outdoor wireless sensor networks." Licentiate thesis, Uppsala universitet, Avdelningen för datorteknik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-227639.

Abstract:
Wireless sensor networks have been deployed outdoors ever since their inception. They have been used in areas such as precision farming, tracking wildlife, and monitoring glaciers. These diverse application areas all have different requirements and constraints, shaping the way in which the sensor network communicates. Yet something they all share is the exposure to an outdoor environment, which at times can be harsh, uncontrolled and difficult to predict. Therefore, understanding the implications of an outdoor environment is an essential step towards reliable wireless sensor network operations. In this thesis we consider aspects of how the environment influences outdoor wireless sensor networks. Specifically, we experimentally study how meteorological factors impact radio links, and find that temperature is the most significant. This motivates us to further study and propose a first order model describing the impact of temperature on wireless sensor nodes. We also analyze transmission errors in an outdoor wireless sensor network, identifying and explaining patterns in the way data gets corrupted. The findings lead to the design and evaluation of an approach for probabilistic recovery of corrupt data in outdoor wireless sensor networks. Apart from the experimental findings, we have conducted two different outdoor deployments for which large data sets have been collected, containing both link and meteorological measurements.
WISENET
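The "first order model" of temperature impact mentioned in the abstract above is, in essence, a linear fit. The sketch below is a toy version with synthetic numbers; the slope, intercept and noise level are invented for illustration and are not results from the thesis.

import numpy as np

rng = np.random.default_rng(3)
temperature_c = rng.uniform(-10.0, 40.0, 500)              # outdoor temperatures, deg C
true_slope, true_intercept = -0.1, -78.0                    # dB per deg C, dB at 0 deg C (assumed)
rssi_dbm = true_intercept + true_slope * temperature_c + rng.normal(0.0, 1.0, 500)

slope, intercept = np.polyfit(temperature_c, rssi_dbm, deg=1)   # first-order (linear) fit
print(f"fitted model: RSSI = {intercept:.1f} {slope:+.3f} * T")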
25

Pasupuleti, Venkata Sai Manoj. "Probabilistic approaches for verification of unlikely inserted errors in Hardware Description Languages." The Ohio State University, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=osu1452182260.

26

Eriksson, Daniel. "Portable BizTalk solutions: Evaluating portable solutions to search for errors in BizTalk platforms." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-142407.

Abstract:
This report evaluates possible infrastructures to create portable BizTalk solutions. BizTalk is integration software mostly used at larger companies. Errors can occur in BizTalk, and experts need an easy and portable solution to identify them. No such solution exists today, and this report focuses on how it could be built. The results show that various tools need to be used to access information from BizTalk. Information about BizTalk must be protected by access rights, which are preferably controlled from a cloud portal. The cloud portal used in this project is Windows Azure, but other solutions have been considered. Azure has a specialized service to access secure locations, which other providers lack. Finally, a prototype application in Windows Phone 8 was developed. The solution has been shown to BizTalk experts, who were enthusiastic about the proposed solution and have proceeded with the project. They are currently analyzing what it would cost to develop a product and what could be charged for such a service.
27

Kelley, Anne K. M. Eng Massachusetts Institute of Technology. "A system for classifying and clarifying Python syntax errors for educational purposes." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/119750.

Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 65-66).
Many students learning Python frequently encounter syntax errors but have difficulty understanding the error messages and correcting their code. In this thesis, we designed, implemented, and performed preliminary testing of a system for classification of syntax errors commonly made by beginning coders. Errors are classified by constructing a partial syntax tree and analyzing the node containing the error with respect to its surrounding nodes. The system aims to use the classified errors to provide more precise and instructive error messages to aid students in the debugging process.
by Anne K. Kelley.
M. Eng.
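A much simplified version of the classification idea in the abstract above can be written directly against Python's compiler. The sketch below is illustrative rather than the thesis implementation: it uses the position and message carried by the SyntaxError, plus a simple bracket count, to name a coarse error class.

def classify_syntax_error(source: str) -> str:
    try:
        compile(source, "<student code>", "exec")
        return "no syntax error"
    except SyntaxError as err:
        text = (err.text or "").rstrip()
        if source.count("(") != source.count(")"):
            return f"unbalanced parentheses near line {err.lineno}"
        if text.startswith(("if", "for", "while", "def", "elif", "else")) and not text.endswith(":"):
            return f"missing ':' at line {err.lineno}"
        return f"other syntax error at line {err.lineno}: {err.msg}"

print(classify_syntax_error("for i in range(3)\n    print(i)"))   # missing ':' at line 1
print(classify_syntax_error("print((1 + 2)"))                     # unbalanced parentheses near line 1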
28

Lei, Lei. "Markov Approximations: The Characterization of Undermodeling Errors." Diss., 2006. http://contentdm.lib.byu.edu/ETD/image/etd1371.pdf.

29

Pidaparthy, Hemanth. "Recognizing and Detecting Errors in Exercises using Kinect Skeleton Data." University of Akron / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=akron1430912344.

30

Gandhi, Raju. "Reduction / elimination of errors in cost estimates using calibration: an algorithmic approach." Ohio : Ohio University, 2005. http://www.ohiolink.edu/etd/view.cgi?ohiou1134575761.

31

Muthukumarasamy, Arulkumaran. "IMPACT OF MICROPHONE POSITIONAL ERRORS ON SPEECH INTELLIGIBILITY." UKnowledge, 2009. http://uknowledge.uky.edu/gradschool_theses/602.

Abstract:
The speech of a person speaking in a noisy environment can be enhanced through electronic beamforming using spatially distributed microphones. As this approach demands precise information about the microphone locations, its application is limited in places where microphones must be placed quickly or changed on a regular basis. A highly precise calibration or measurement process can be tedious and time-consuming. In order to understand tolerable limits on the calibration process, the impact of microphone position error on intelligibility is examined. Analytical expressions are derived by modeling the microphone position errors as a zero-mean uniform distribution. Experiments and simulations were performed to show relationships between the precision of the microphone location measurement and the loss in intelligibility. A variety of microphone array configurations and distracting sources (other interfering speech and white noise) are considered. For speech near the threshold of intelligibility, the results show that microphone position errors with standard deviations less than 1.5 cm can limit losses in intelligibility to within 10% of the maximum (perfect microphone placement) for all the microphone distributions examined. Of the different array distributions examined, the linear array tends to be more vulnerable, whereas the non-uniform 3D array showed robust performance against positional errors.
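The abstract above models microphone position errors as a zero-mean uniform distribution and reports the resulting intelligibility loss. The sketch below is a simplified stand-in for the thesis experiments: it shows only the loss of coherent array gain that such position errors cause in a delay-and-sum beamformer at a single representative frequency; all numbers are illustrative.

import numpy as np

rng = np.random.default_rng(2)
c = 343.0                    # speed of sound, m/s
freq = 2000.0                # a representative speech frequency, Hz
n_mics = 16
n_trials = 5000

for std_cm in (0.5, 1.0, 1.5, 3.0):
    half_width = np.sqrt(3.0) * std_cm / 100.0               # uniform[-a, a] has std a/sqrt(3)
    gains = []
    for _ in range(n_trials):
        pos_err = rng.uniform(-half_width, half_width, n_mics)   # metres, along look direction
        phase_err = 2.0 * np.pi * freq * pos_err / c
        gains.append(abs(np.mean(np.exp(1j * phase_err))) ** 2)
    print(f"std={std_cm:.1f} cm -> mean coherent-gain ratio {np.mean(gains):.3f}")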
32

Armstrong, Joe. "Making reliable distributed systems in the presence of software errors." Doctoral thesis, KTH, Microelectronics and Information Technology, IMIT, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-3658.

Abstract:

The work described in this thesis is the result of a research program started in 1981 to find better ways of programming Telecom applications. These applications are large programs which despite careful testing will probably contain many errors when the program is put into service. We assume that such programs do contain errors, and investigate methods for building reliable systems despite such errors.

The research has resulted in the development of a new programming language (called Erlang), together with a design methodology, and set of libraries for building robust systems (called OTP). At the time of writing the technology described here is used in a number of major Ericsson and Nortel products. A number of small companies have also been formed which exploit the technology.

The central problem addressed by this thesis is the problem of constructing reliable systems from programs which may themselves contain errors. Constructing such systems imposes a number of requirements on any programming language that is to be used for the construction. I discuss these language requirements, and show how they are satisfied by Erlang.

Problems can be solved in a programming language, or in the standard libraries which accompany the language. I argue how certain of the requirements necessary to build a fault-tolerant system are solved in the language, and others are solved in the standard libraries. Together these form a basis for building fault-tolerant software systems.

No theory is complete without proof that the ideas work in practice. To demonstrate that these ideas work in practice I present a number of case studies of large commercially successful products which use this technology. At the time of writing the largest of these projects is a major Ericsson product, having over a million lines of Erlang code. This product (the AXD301) is thought to be one of the most reliable products ever made by Ericsson.

Finally, I ask if the goal of finding better ways to program Telecom applications was fulfilled --- I also point to areas where I think the system could be improved.

33

Liljebjörn, Johan, and Hugo Broman. "Mantis The Black-Box Scanner: Finding XSS vulnerabilities through parse errors." Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-19566.

Abstract:
Background. Penetration testing is a good technique for finding web vulnerabilities. Vulnerability scanners are often used to aid with security testing. The increased scope is becoming more difficult for scanners to handle in a reasonable amount of time. The problem with vulnerability scanners is that they rely on fuzzing to find vulnerabilities. Fuzzing has several drawbacks: it generates a lot of network traffic; scans can be excruciatingly slow; and vulnerability detection is limited if the output string is modified due to filtering or sanitization. Objectives. This thesis aims to investigate if an XSS vulnerability scanner can be made more scalable than the current state-of-the-art. The idea is to examine how reflected parameters can be detected, and if a different methodology can be applied to improve the detection of XSS vulnerabilities. The proposed vulnerability scanner is named Mantis. Methods. The research methods used in this thesis are literature review and experiment. In the literature review, we collected information about the investigated problem to help us analyze the identified research gaps. The experiment evaluated the proposed vulnerability scanner against the current state-of-the-art using the dataset OWASP Benchmark. Results. The results show that reflected parameters can be reliably detected using approximate string matching. Using the parameter mapping, it was possible to detect reflected XSS vulnerabilities to a great extent. Mantis had an average scan time of 78 seconds, OWASP ZAP 95 seconds and Arachni 17 minutes. The dataset had a total of 246 XSS vulnerabilities. Mantis detected the most at 213 vulnerabilities, Arachni detected 183, and OWASP ZAP 137. None of the scanners had any false positives. Conclusions. Mantis has proven to be an efficient vulnerability scanner for detecting XSS vulnerabilities. Focusing on the set of characters that may lead to the exploitation of XSS has proven to be a great alternative to fuzzing. More testing of Mantis is needed to determine the usability of the vulnerability scanner in a real-world scenario. We believe the scanner has the potential to be a great asset for penetration testers in their work.
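The reflected-parameter detection described above relies on approximate string matching. A minimal sketch of that idea, with an invented probe value and threshold, is shown below; it is not Mantis itself.

import difflib

def reflection_score(probe: str, body: str) -> float:
    # Fraction of the probe that reappears in the body, allowing gaps and re-encoding.
    matcher = difflib.SequenceMatcher(None, probe, body, autojunk=False)
    matched = sum(block.size for block in matcher.get_matching_blocks())
    return matched / len(probe)

def is_reflected(probe: str, body: str, threshold: float = 0.7) -> bool:
    return reflection_score(probe, body) >= threshold

probe = "xs7Qq<>\"'k9"
reflected = '<input value="xs7Qq&lt;&gt;&quot;\'k9">'       # echoed but HTML-encoded
unrelated = "<html><body>nothing echoed here</body></html>"
print(is_reflected(probe, reflected), is_reflected(probe, unrelated))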
34

Tejeda, Abiezer. "Correcting Errors Due to Species Correlations in the Marginal Probability Density Evolution." DigitalCommons@USU, 2013. https://digitalcommons.usu.edu/etd/1472.

Abstract:
Synthetic biology is an emerging field that integrates and applies engineering design methods to biological systems. Its aim is to make biology an "engineerable" science. Over the years, biologists and engineers alike have abstracted biological systems into functional models that behave similarly to electric circuits, thus the creation of the subfield of genetic circuits. Mathematical models have been devised to simulate the behavior of genetic circuits in silico. Most models can be classified into deterministic and stochastic models. The work in this dissertation is for stochastic models. Although ordinary differential equation (ODE) models are generally amenable to simulate genetic circuits, they wrongly assume that a system's chemical species vary continuously and deterministically, thus making erroneous predictions when applied to highly stochastic systems. Stochastic methods have been created to take into account the variability, unpredictability, and discrete nature of molecular populations. The most popular stochastic method is the stochastic simulation algorithm (SSA). These methods provide a single path of the overall pool of possible system's behavior. A common practice is to take several independent SSA simulations and take the average of the aggregate. This approach can perform well in low noise systems. However, it produces incorrect results when applied to networks that can take multiple modes or that are highly stochastic. Incremental SSA or iSSA is a set of algorithms that have been created to obtain aggregate information from multiple SSA runs. The marginal probability density evolution (MPDE) algorithm is a subset of iSSA which seeks to reveal the most likely "qualitative" behavior of a genetic circuit by providing a marginal probability function or statistical envelope for every species in the system, under the appropriate conditions. MPDE assumes that species are statistically independent given the rest of the system. This assumption is satisfied by some systems. However, most of the interesting biological systems, both synthetic and in nature, have correlated species forming conservation laws. Species correlation imposes constraints in the system that are broken by MPDE. This work seeks to devise a mathematical method and algorithm to correct conservation constraints errors in MPDE. Furthermore, it aims to identify these constraints a priori and efficiently deliver a trustworthy result faithful to the true behavior of the system.
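The stochastic simulation algorithm (SSA) that iSSA and MPDE build on can be stated compactly. The sketch below is a textbook Gillespie SSA for a toy birth-death gene-expression model, written in Python for illustration; it is unrelated to the dissertation's implementation, and the rate constants are invented.

import numpy as np

def ssa(k_produce=10.0, k_degrade=0.1, x0=0, t_end=100.0, seed=0):
    rng = np.random.default_rng(seed)
    t, x = 0.0, x0
    times, counts = [t], [x]
    while t < t_end:
        rates = np.array([k_produce, k_degrade * x])   # reaction propensities
        total = rates.sum()
        if total == 0:
            break
        t += rng.exponential(1.0 / total)              # time to the next reaction
        if rng.random() < rates[0] / total:
            x += 1                                     # production event
        else:
            x -= 1                                     # degradation event
        times.append(t)
        counts.append(x)
    return np.array(times), np.array(counts)

times, counts = ssa()
print(f"final count {counts[-1]}, mean after burn-in ~ {counts[len(counts) // 2:].mean():.1f}")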
35

Goldberg, David M. B. A. Sloan School of Management. "Improving project timelines using AI / ML to detect forecasting errors." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122602.

Abstract:
Thesis: M.B.A., Massachusetts Institute of Technology, Sloan School of Management, 2019, In conjunction with the Leaders for Global Operations Program at MIT
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019, In conjunction with the Leaders for Global Operations Program at MIT
Page 75 blank. Cataloged from PDF version of thesis.
Includes bibliographical references (page 67).
This project focuses on the creation of a novel tool to detect and flag potential errors within Amgen's capacity management forecast data, in an automated manner using statistical analysis, artificial intelligence and machine learning. User interaction allows the tool to learn from experience, improving over time. While the tool created here focuses on a specific set of Amgen's data, the framework, approach and techniques offered herein can more broadly be applied to detect anomalies and errors in other sets of data from across industries and functions. By detecting errors in Amgen's data, the tool improves data robustness and forecasts, which drive decisions, actions and ultimately results. Flagging and correcting this data allows for overcoming errors, which would otherwise damage the accurate allocation of Amgen's human resources to activities in the drug pipeline, ultimately hampering Amgen's ability to develop drugs for patients efficiently. A user interface (UI) dashboard evaluates the tool's performance, tracking the number of errors correctly identified, the accuracy rate, and the estimated business impact. To date the tool has identified 893 corrected errors with a 99.2% accuracy rate and an estimated business impact of $77.798M optimized resources. Using the paradigm of intelligent augmentation (IA), this tool empowers employees by focusing their attention and saving them time. The tool handles the human-impossible task of sifting through thousands of lines and hundreds of thousands of data points. The human user then makes decisions and takes action based on the tool provided output.
by David Goldberg.
M.B.A.
S.M.
M.B.A. Massachusetts Institute of Technology, Sloan School of Management
S.M. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
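A very small statistical baseline for the kind of forecast-error flagging described in the abstract above is a robust z-score based on the median and the median absolute deviation. The sketch below is illustrative only and is unrelated to the tool built at Amgen; the threshold and data are invented.

import numpy as np

def flag_anomalies(values, threshold=3.5):
    values = np.asarray(values, dtype=float)
    median = np.median(values)
    mad = np.median(np.abs(values - median)) or 1.0    # guard against division by zero
    robust_z = 0.6745 * (values - median) / mad
    return np.where(np.abs(robust_z) > threshold)[0]

forecast_hours = [120, 118, 125, 130, 122, 1210, 119, 127]   # 1210 looks like a data-entry error
print(flag_anomalies(forecast_hours))    # flags index 5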
36

Baird, Patrick James Samuel. "Mathematical modelling of the parameters and errors of a contact probe system and its application to the computer simulation of coordinate measuring machines." Thesis, Brunel University, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.320548.

37

Prabhu, Parrikar Utpal M. "On SNR aware analysis and modeling of 802.11b link-level residual errors." Diss., Connect to online resource - MSU authorized users, 2006.

Abstract:
Thesis (M.S.)--Michigan State University. Dept. of Electrical and Computer Engineering, 2006.
Title from PDF t.p. (viewed on Oct. 30, 2009). Includes bibliographic references (p. 46-47). Also issued in print.
38

Haden, Lonnie A. "A numerical procedure for computing errors in the measurement of pulse time-of-arrival and pulse-width." Thesis, Kansas State University, 1985. http://hdl.handle.net/2097/9849.

39

Bryer, Bevan. "Protection unit for radiation induced errors in flash memory systems." Thesis, Stellenbosch : Stellenbosch University, 2004. http://hdl.handle.net/10019.1/50070.

Abstract:
Thesis (MScEng)--University of Stellenbosch, 2004.
Flash memory and the errors induced in it by radiation were studied. A test board was then designed and developed, as well as a radiation test program, and the system was irradiated. This gave successful results, which confirmed aspects of the study and gave valuable insight into flash memory behaviour. To date, the board is still being used to test various flash devices for radiation-harsh environments. A memory protection unit (MPU) was conceptually designed and developed to monitor flash devices, increasing their reliability in radiation-harsh environments. This unit was designed for intended use onboard a micro-satellite. The chosen flash device for this study was the K9F1208XOA model from SAMSUNG. The MPU was designed to detect, maintain, mitigate and report radiation-induced errors in this flash device. Most of the design was implemented in field programmable gate arrays and was realised using VHDL. Simulations were performed to verify the functionality of the design subsystems. These simulations showed that the various emulated errors were handled successfully by the MPU. A modular design methodology was followed, allowing the chosen flash device to be replaced with any flash device following a small reconfiguration. This also allows parts of the system to be duplicated to protect more than one device.
40

Khan, Mohammad Ali, and Majid Nasir. "Human Errors and Learnability Evaluation of Authentication System." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4054.

Abstract:
Usability studies are important in today’s context. However, the increased security level of authentication systems is reducing the usability level. Thus, to provide secured but yet usable authentication systems is a challenge for researchers to solve till now. Learnability and human errors are influential factors of the usability of authentication systems. There are not many specific studies on the learnability and human errors concentrating on authentication systems. The authors’ aim of this study is to explore the human errors and the learnability situation of authentication systems to contribute to the development of more usable authentication systems. The authors investigated through observations and interviews to achieve the aim of this study. A minimalist portable test lab was developed in order to conduct the observation process in a controlled environment. At the end of the study, the authors showed the list of identified human errors and learnability issues, and provided recommendations, which the authors believe will help researchers to improve the overall usability of authentication systems. To achieve the aim of the study, the authors started with a systematic literature review to gain knowledge on the state of art. For the user study, a direct investigation, in form of observations and interviews was then applied to gather more data. The collected data was then analyzed and interpreted to identify and assess the human errors and the learnability issues.
This study addressed the usability experiences of users by exploring the human errors and the learnability situation of the authentication systems. Authors conducted a case study to explore the situation of human errors and learnability of authentication systems. Observation and interviews were adapted to gather data. Then analysis through SHERPA (to evaluate human errors) and Grossman et al. learnability metric (to evaluate learnability) had been conducted. First, the authors identified the human errors and learnability issues on the authentication systems from user’s perspective, from the gathered raw data. Then further analysis had been conducted on the summary of the data to identify the features of the authentication systems which are affecting the human errors and learnability issues. The authors then compared the two different categories of authentication systems, such as the 1-factor and the multi-factor authentication systems, from the gathered information through analysis. Finally, the authors argued the possible updates of the SHERPA’s human error metric and additional measurable learnability issues comparing to Grossman et al. learnability metrics. The studied authentication systems are not human errors free. The authors identified eight human errors associated with the studied authentication systems and three features of the authentication systems which are influencing the human errors. These errors occurred while the participants in this study took too long time locating the login menu or button or selecting the correct login method, and eventually took too long time to login. Errors also occurred when the participants failed to operate the code generating devices, or failed to retrieve information from errors messages or supporting documents, and/or eventually failed to login. As these human errors are identifiable and predictable through the SHERPA, they can be solved as well. The authors also found the studied authentication systems have learnability issues and identified nine learnability issues associated with them. These issues were identified when very few users could complete the task optimally, or completed without any help from the documentation. Issues were also identified while analyzing the participants’ task completion time after reviewing documentations, operations on code generating devices, and average errors while performing the task. These learnability issues were identified through Grossman et al. learnability metric, and the authors believe more study on the identified learnability issues can improve the learnability of the authentication systems. Overall, the authors believe more studies should be conducted on the identified human errors and learnability issues to improve the overall human errors and learnability situation of the studied authentication systems at presence. Moreover, these issues also should be taken into consideration while developing future authentication systems. The authors believe, in future, the outcome of this study will also help researchers to propose more usable, but yet secured authentication systems for future growth. Finally, authors proposed some potential research ares, which they believe will have important contribution to the current knowledge. In this study, the authors used the SHERPA to identify the human errors. Though the SHERPA (and its metrics) is arguably one of the best methods to evaluate human errors, the authors believe there are scopes of improvements in the SHERPA’s metrics. 
Human’s perception and knowledge is getting changed, and to meet the challenge, the SHERPA’s human error metrics can be updated as well. Grossman et al. learnability metrics had been used in this study to identify learnability issues. The authors believe improving the current and adding new metrics may identify more learnability issues. Evaluation of learnability issues may have improved if researchers could have agreed upon a single learnability definition. The authors believe more studies should be conducted on the definition of learnability in order to achieve more acceptable definition of the learnability for further research. Finally, more studies should be conducted on the remedial strategies of the identified human errors, and improvement on the identified learnability issues, which the authors believe will help researchers to propose more usable, but yet secured authentication systems for the future growth.
41

Olson, Erik Lee. "Computer-assisted decision aids in difficult decision environments: Factors which enhance the probability of decision errors and decision error impact on subjective evaluations of the decision aid." Case Western Reserve University School of Graduate Studies / OhioLINK, 1992. http://rave.ohiolink.edu/etdc/view?acc_num=case1056551622.

42

Alonso, Miguel Jr. "A method for enhancing digital information displayed to computer users with visual refractive errors via spatial and spectral processing." FIU Digital Commons, 2007. http://digitalcommons.fiu.edu/etd/1112.

Abstract:
This research pursued the conceptualization, implementation, and verification of a system that enhances digital information displayed on an LCD panel to users with visual refractive errors. The target user groups for this system are individuals who have moderate to severe visual aberrations for which conventional means of compensation, such as glasses or contact lenses, does not improve their vision. This research is based on a priori knowledge of the user's visual aberration, as measured by a wavefront analyzer. With this information it is possible to generate images that, when displayed to this user, will counteract his/her visual aberration. The method described in this dissertation advances the development of techniques for providing such compensation by integrating spatial information in the image as a means to eliminate some of the shortcomings inherent in using display devices such as monitors or LCD panels. Additionally, physiological considerations are discussed and integrated into the method for providing said compensation. In order to provide a realistic sense of the performance of the methods described, they were tested by mathematical simulation in software, as well as by using a single-lens high resolution CCD camera that models an aberrated eye, and finally with human subjects having various forms of visual aberrations. Experiments were conducted on these systems and the data collected from these experiments was evaluated using statistical analysis. The experimental results revealed that the pre-compensation method resulted in a statistically significant improvement in vision for all of the systems. Although significant, the improvement was not as large as expected for the human subject tests. Further analysis suggest that even under the controlled conditions employed for testing with human subjects, the characterization of the eye may be changing. This would require real-time monitoring of relevant variables (e.g. pupil diameter) and continuous adjustment in the pre-compensation process to yield maximum viewing enhancement.
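The pre-compensation idea in the abstract above can be approximated, at its simplest, by inverse filtering with the known blur of the eye. The sketch below is a toy frequency-domain version with an assumed Gaussian point-spread function and an invented regularisation constant; it ignores the display and physiological constraints the dissertation addresses.

import numpy as np

def gaussian_psf(shape, sigma):
    y, x = np.indices(shape)
    cy, cx = shape[0] // 2, shape[1] // 2
    psf = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()

img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0                        # simple test pattern

psf = gaussian_psf(img.shape, sigma=2.0)       # assumed blur of the aberrated eye
H = np.fft.fft2(np.fft.ifftshift(psf))         # optical transfer function
lam = 1e-3                                     # regularisation (Wiener-like inverse filter)

pre = np.real(np.fft.ifft2(np.fft.fft2(img) * np.conj(H) / (np.abs(H) ** 2 + lam)))
seen = np.real(np.fft.ifft2(np.fft.fft2(pre) * H))      # pre-compensated image after the eye's blur
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * H))   # perception without compensation

print(f"error without precompensation: {np.abs(blurred - img).mean():.3f}")
print(f"error with precompensation:    {np.abs(seen - img).mean():.3f}")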
43

Brewer, Judy. "Metabolic Modeling of Inborn Errors of Metabolism: Carnitine Palmitoyltransferase II Deficiency and Respiratory Chain Complex I Deficiency." Thesis, Harvard University, 2015. http://nrs.harvard.edu/urn-3:HUL.InstRepos:24078365.

Abstract:
The research goal was to assess the current capabilities of a metabolic modeling environment to support exploration of inborn errors of metabolism (IEMs); and to assess whether, drawing on evidence from published studies of EMs, the current capabilities of this modeling environment correlate with clinical measures of energy production, fatty acid oxidation, accumulation of toxic by-products of defective metabolism, and mitigation via therapeutic agents. IEMs comprise several hundred disorders of energy production, often with significant impact on morbidity and mortality. Despite advances in genomic medicine, currently the majority of therapeutic options for IEMs are supportive only, and most only weakly evidenced. Metabolic modeling could potentially offer an in silico alternative for exploring therapeutic possibilities. This research established models of two inborn errors of metabolism (IEMs), carnitine palmitoyltransferase (CPT) II deficiency and respiratory chain complex I deficiency, allowing exploration of combinations of IEMs at different degrees of enzyme deficiency. It utilized a modified version of the human metabolic network reconstruction, Recon 2, which includes known metabolic reactions and metabolites in human cells, and which allows constraint-based modeling within a computational and mathematical representation of human metabolism. It utilized the Matlab-based COBRA (Constraint-based Reconstruction and Analysis) Toolbox 2.0, and a customized suite of functions, to model ATP production, long-chain fatty acid oxidation (LCFA), and acylcarnitine accumulation in response to varying defect levels, inputs and a simulated candidate therapy. Following significant curation of the metabolic network reconstruction and customization of COBRA/Matlab functions, this study demonstrated that ATP production and LCFA oxidation were within expected ranges, and correlated with clinical data for enzyme deficiencies, while acylcarnitine accumulation inversely correlated with the degree of enzyme deficiency; and that it was possible to simulate upregulation of enzyme activity with a therapeutic agent. Results of the curation effort contributed to development of an updated version of the metabolic reconstruction Recon 2. Customization of modeling approaches resulted in a suite of re-usable Matlab functions and scripts usable with COBRA Toolbox methods available for further exploration of IEMs. While this research points to potentially greater suitability of kinetic modeling for some aspects of metabolic modeling of IEMs, it helps to demonstrate potential viability of constraint-based steady state modeling as a means to explore some clinically relevant measures of metabolic function for single and combined inborn errors of metabolism.
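Constraint-based (flux balance) modelling of the kind described above reduces, at its core, to a linear program: steady-state mass balance S v = 0, bounds on each reaction flux, and an objective flux to maximise. The toy sketch below is not the Recon 2 / COBRA model; the three-reaction network is invented, and an enzyme deficiency is emulated by tightening the bound on one reaction.

import numpy as np
from scipy.optimize import linprog

#   R1: nutrient uptake -> A      R2 (enzyme): A -> B      R3: B -> output (objective)
S = np.array([[ 1, -1,  0],      # metabolite A
              [ 0,  1, -1]])     # metabolite B
c = np.array([0.0, 0.0, -1.0])   # linprog minimises, so maximise v3 via -1

for deficiency in (0.0, 0.5, 0.9):                 # fraction of enzyme activity lost
    v2_max = 10.0 * (1.0 - deficiency)
    bounds = [(0, 10.0), (0, v2_max), (0, 1000.0)]
    res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
    print(f"{deficiency:.0%} deficiency -> output flux {res.x[2]:.1f}")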
44

Boyd, Adriane Amelia. "Detecting and Diagnosing Grammatical Errors for Beginning Learners of German: From Learner Corpus Annotation to Constraint Satisfaction Problems." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1325170396.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

McMurtrey, Daniel L. "Using Duplication with Compare for On-line Error Detection in FPGA-based Designs." BYU ScholarsArchive, 2006. https://scholarsarchive.byu.edu/etd/1094.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Space-destined FPGA-based systems must employ redundancy techniques to account for the effects of upsets caused by radiation environments. Error detection techniques can be used to alert external systems to the presence of these upsets. Readback with compare is an error detection technique commonly employed in FPGA-based designs. This work introduces duplication with compare (DWC) as an automated on-line error detection technique that can be used as an alternative to readback with compare. It also introduces a set of metrics used to quantify the effectiveness and coverage of this error detection technique. A tool is presented that automatically inserts duplication with compare into a user's design. Duplication with compare is shown to correctly detect over 99.9% of errors caused by configuration upsets at a hardware cost of approximately 2X. System designers can apply duplication with compare to their designs using this tool to increase the reliability and availability of their systems while minimizing resource usage and power.
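Duplication with compare is normally inserted into the FPGA netlist itself by the automated tool the thesis describes, but the detection principle can be sketched behaviorally in software: two copies of the same logic receive identical inputs, and a comparator raises an error flag whenever their outputs diverge, for instance because a configuration upset has corrupted one copy. The Python model below is purely illustrative; the fault-injection step is a stand-in for a radiation-induced upset, not part of the thesis' tool flow.

```python
import random

class Counter:
    """Stand-in for a duplicated hardware module (here, a simple 8-bit counter)."""
    def __init__(self):
        self.state = 0

    def step(self, enable):
        if enable:
            self.state = (self.state + 1) & 0xFF
        return self.state

def inject_upset(module):
    """Model a configuration upset by flipping one state bit in a single copy."""
    module.state ^= 1 << random.randrange(8)

primary, duplicate = Counter(), Counter()
for cycle in range(20):
    if cycle == 12:
        inject_upset(duplicate)          # fault appears in one copy only
    out_a = primary.step(enable=True)
    out_b = duplicate.step(enable=True)
    error_flag = out_a != out_b          # the "compare" half of DWC
    if error_flag:
        print(f"cycle {cycle}: mismatch detected ({out_a} vs {out_b})")
```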
46

Guichard, Jonathan. "Quality Assessment of Conversational Agents : Assessing the Robustness of Conversational Agents to Errors and Lexical Variability." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-226552.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Assessing a conversational agent’s understanding capabilities is critical, as poor user interactions could seal the agent’s fate at the very beginning of its lifecycle, with users abandoning the system. In this thesis we explore the use of paraphrases as a testing tool for conversational agents. Paraphrases, which are different ways of expressing the same intent, are generated from known working input by performing lexical substitutions and introducing multiple spelling divergences. As the expected outcome for this newly generated data is known, we can use it to assess the agent’s robustness to language variation and detect potential understanding weaknesses. As demonstrated by a case study, we obtain encouraging results: this approach can help anticipate potential understanding shortcomings, and those shortcomings can be addressed using the generated paraphrases.
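The thesis does not prescribe a specific implementation, but the testing idea can be sketched as follows: starting from an utterance the agent is known to handle, lexical substitutions and small spelling perturbations produce paraphrases whose expected intent is already known, and any paraphrase the agent misclassifies points to an understanding weakness. In the sketch below, `classify_intent` is a hypothetical stand-in for the agent under test, and the synonym table and seed utterance are invented.

```python
import random

# Invented synonym table used for lexical substitution.
SYNONYMS = {"book": ["reserve", "schedule"],
            "flight": ["plane ticket"],
            "cheap": ["inexpensive", "low-cost"]}

def lexical_variants(utterance):
    """Yield paraphrases produced by swapping single words for known synonyms."""
    words = utterance.split()
    for i, w in enumerate(words):
        for syn in SYNONYMS.get(w, []):
            yield " ".join(words[:i] + [syn] + words[i + 1:])

def misspell(utterance):
    """Introduce a single-character spelling divergence at a random position."""
    i = random.randrange(len(utterance))
    return utterance[:i] + random.choice("abcdefghijklmnopqrstuvwxyz") + utterance[i + 1:]

def classify_intent(utterance):
    """Hypothetical stand-in for the conversational agent under test."""
    return "book_flight" if "flight" in utterance or "ticket" in utterance else "unknown"

seed = "book a cheap flight"        # known working input with known intent
expected = "book_flight"
test_cases = list(lexical_variants(seed)) + [misspell(seed) for _ in range(3)]
failures = [t for t in test_cases if classify_intent(t) != expected]
print(f"{len(failures)} of {len(test_cases)} paraphrases exposed an understanding weakness")
```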
47

Feng, Yunyi. "Identification of Medical Coding Errors and Evaluation of Representation Methods for Clinical Notes Using Machine Learning." Ohio University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1555421482252775.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Martwick, Andrew Wayne. "Clock Jitter in Communication Systems." PDXScholar, 2018. https://pdxscholar.library.pdx.edu/open_access_etds/4375.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
For reliable digital communication between devices, the sources that contribute to data sampling errors must be properly modeled and understood. Clock jitter is one such error source occurring during data transfer between integrated circuits. Clock jitter acts as a noise source in a communication link, similar to electrical noise, but it is a time-domain noise variable that affects many different parts of the sampling process. In this dissertation, the effect of clock jitter on sampling is modeled for communication systems with the degree of accuracy needed for modern high-speed data communication. The models developed and presented here have been used to develop the clocking specifications and silicon budgets for industry standards such as the PCI Express, USB 3.0, GDDR5 memory, and HBM memory interfaces.
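The dissertation develops detailed analytical models that are not reproduced here; a much simpler numerical sketch can still convey how jitter enters the sampling process. If each sample of a signal x(t) is taken at t_n + δ_n, where δ_n is a zero-mean random timing error, the sampled value deviates from the ideal one by roughly x'(t_n)·δ_n, so the jitter-induced noise grows with both the jitter RMS and the signal's slew rate. The signal frequency, sample rate, and jitter RMS below are arbitrary illustrative choices, not figures from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)
f_signal = 1e9                 # 1 GHz tone, arbitrary illustrative choice
fs = 10e9                      # nominal 10 GS/s sampling clock
jitter_rms = 1e-12             # 1 ps RMS clock jitter, also illustrative
n = 100_000

t_ideal = np.arange(n) / fs
t_jittered = t_ideal + rng.normal(0.0, jitter_rms, size=n)

ideal = np.sin(2 * np.pi * f_signal * t_ideal)
actual = np.sin(2 * np.pi * f_signal * t_jittered)

# Jitter shows up as an additional noise term on the sampled values.
noise = actual - ideal
measured_snr_db = 10 * np.log10(np.mean(ideal ** 2) / np.mean(noise ** 2))

# First-order prediction for a sinusoid sampled with timing error:
# SNR ~= -20*log10(2*pi * f_signal * jitter_rms).
predicted_snr_db = -20 * np.log10(2 * np.pi * f_signal * jitter_rms)
print(f"measured {measured_snr_db:.1f} dB vs predicted {predicted_snr_db:.1f} dB")
```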
49

Olsson, Tim, and Konrad Magnusson. "Training Artificial Neural Networks with Genetic Algorithms for Stock Forecasting : A comparative study between genetic algorithms and the backpropagation of errors algorithms for predicting stock prices." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-186447.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Accurate prediction of future stock market prices is of great importance to traders. The process can be automated using artificial neural networks. However, the conventional backward propagation of errors algorithm commonly used for training the networks suffers from the local minima problem. This study investigates whether investing more computational resources into training an artificial neural network using genetic algorithms over the conventional algorithm, to avoid the local minima problem, can result in higher prediction accuracy. The results indicate that there is no significant increase in accuracy to gain by investing resources into training with genetic algorithms, using our proposed model.
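The thesis compares backpropagation with genetic-algorithm training; the sketch below shows, in schematic form, how a genetic algorithm can search network weights without gradients, which is the property that lets it sidestep local minima that backpropagation can get stuck in. The tiny single-layer "network", synthetic data, population size, and mutation scale are all illustrative choices, not the authors' configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic regression data standing in for past-price features -> next price.
X = rng.normal(size=(64, 5))
y = X @ np.array([0.4, -0.2, 0.1, 0.3, -0.5]) + 0.05 * rng.normal(size=64)

def predict(weights, X):
    """Minimal one-layer 'network': a linear map followed by a tanh activation."""
    return np.tanh(X @ weights)

def fitness(weights):
    """Negative MSE against the tanh-squashed targets; higher is better."""
    return -np.mean((predict(weights, X) - np.tanh(y)) ** 2)

pop = rng.normal(size=(50, 5))                     # population of weight vectors
for generation in range(200):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-10:]]        # keep the 10 fittest individuals
    # Crossover: average random parent pairs, then mutate with Gaussian noise.
    idx_a = rng.integers(0, len(parents), size=len(pop))
    idx_b = rng.integers(0, len(parents), size=len(pop))
    pop = (parents[idx_a] + parents[idx_b]) / 2 + 0.1 * rng.normal(size=pop.shape)
    pop[0] = parents[-1]                           # elitism: retain the best unchanged

best = max(pop, key=fitness)
print("best fitness:", fitness(best))
```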
50

Shiyanovskii, Yuriy. "Reliability of SRAMs and 3D TSV ICs: Design Protection from Soft Errors and 3D Thermal Modeling." Case Western Reserve University School of Graduate Studies / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=case1334891947.

Full text
APA, Harvard, Vancouver, ISO, and other styles
