Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Computer errors.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
The tag cloud lets you access even more related research topics, and the appropriate buttons after every section of the page let you consult extended lists of books, articles, etc., on the chosen topic.
The Levinson principle generally can be used to compute recursively the solution of linear equations. It can also be used to update the error terms directly. This is used to do single-channel deconvolution directly on seismic data without computing or applying a digital filter. Multichannel predictive deconvolution is used for seismic multiple attenuation. In a standard procedure, the prediction-error filter matrices are computed with a Levinson recursive algorithm, using a covariance matrix of the input data. The filtered output is the prediction errors or the nonpredictable part of the data. Starting with the classical Levinson recursion, we have derived new algorithms for direct recursive calculation of the prediction errors without computing the data covariance matrix or computing the prediction-error filters. One algorithm generates recursively the one-step forward and backward prediction errors and the L-step forward prediction error, computing only the filter matrices with the highest index. A numerically more stable algorithm uses reduced QR decomposition or singular-value decomposition (SVD) in a direct recursive computation of the prediction errors without computing any filter matrix. The new, stable, predictive algorithms require more arithmetic operations in the computer, but the computer programs and data flow are much simpler than for standard predictive deconvolution.
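For readers unfamiliar with the recursion this abstract starts from, here is a minimal single-channel Levinson-Durbin sketch (our own illustration of the classical algorithm, with invented variable names; it is not the authors' multichannel method):

```python
def levinson_durbin(r, order):
    """Classical Levinson recursion: from autocorrelations r[0..order],
    recursively build the prediction-error filter a and error power e."""
    a = [1.0] + [0.0] * order       # a[0] is fixed at 1
    e = r[0]                        # zero-order prediction-error power
    for m in range(1, order + 1):
        # reflection coefficient from the current filter and r
        acc = r[m] + sum(a[i] * r[m - i] for i in range(1, m))
        k = -acc / e
        a_new = a[:]                # order-update: mix filter with its reverse
        for i in range(1, m):
            a_new[i] = a[i] + k * a[m - i]
        a_new[m] = k
        a = a_new
        e *= 1.0 - k * k            # error power shrinks at every order
    return a, e

# AR(1) data with autocorrelation 0.5**lag: the order-2 filter is [1, -0.5, 0]
a, e = levinson_durbin([1.0, 0.5, 0.25], order=2)
```

Each pass raises the filter order by one and updates the prediction-error power in place, which is the property the abstract's error-only recursions build on.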
RuDusky, Basil M. "Errors of Computer Electrocardiography." Angiology 48, no. 12 (December 1997): 1045–50. http://dx.doi.org/10.1177/000331979704801204.
Correia, John A., Nathaniel M. Alpert, Richard B. Buxton, and Robert H. Ackerman. "Analysis of Some Errors in the Measurement of Oxygen Extraction and Oxygen Consumption by the Equilibrium Inhalation Method." Journal of Cerebral Blood Flow & Metabolism 5, no. 4 (December 1985): 591–99. http://dx.doi.org/10.1038/jcbfm.1985.88.
Some sources of error in the equilibrium inhalation method for the measurement of oxygen extraction fraction and CMRO2 by positron emission computed tomography scanning have been evaluated by computer simulation. Emphasis has been placed on errors that have not been thoroughly studied in past work. These include effects of random statistical errors, systematic errors in arterial blood radioactivity concentrations, and errors due to perturbations of the equilibrium state, to tissue inhomogeneity, and to subject motion.
Qin, Li Juan. "Robustness Problem Argumentation from Image Quantization Errors in Vision Location." Advanced Materials Research 225-226 (April 2011): 1332–35. http://dx.doi.org/10.4028/www.scientific.net/amr.225-226.1332.
In our vision location system, error is inevitable. Image quantization errors play an important role in the computer vision field. Quantization errors are the primary sources that affect the precision of pose estimation, and they are inherent and unavoidable. It is therefore important to analyze the effect of this error on the computation process. In this paper, the robustness problem in vision location is argued in detail. We then introduce image quantization error and finally set up a robustness mathematical model for vision location.
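The inherent, bounded nature of image quantization error can be shown with a tiny sketch (our own illustration, not code from the paper): snapping a continuous image coordinate to the pixel grid never moves it by more than half a pixel.

```python
def quantize(u, pixel=1.0):
    # snap a continuous image coordinate to the nearest pixel centre
    return round(u / pixel) * pixel

# whatever the true coordinate, the quantization error is at most half a pixel
coords = [3.2, 3.49, 3.5, 7.9]
errors = [abs(quantize(u) - u) for u in coords]
```

This half-pixel bound is the worst-case input error that pose-estimation robustness analyses of this kind must propagate through the camera model.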
Snapper, John W. "Responsibility for Computer-Based Errors." Metaphilosophy 16, no. 4 (October 1985): 289–95. http://dx.doi.org/10.1111/j.1467-9973.1985.tb00175.x.
Turner, Jerry L. "The Impact of Materiality Decisions on Financial Ratios: A Computer Simulation." Journal of Accounting, Auditing & Finance 12, no. 2 (April 1997): 125–47. http://dx.doi.org/10.1177/0148558x9701200202.
This study examines the extent to which immaterial uncorrected errors may combine to affect specific financial ratios. A simulation is performed in which three balance sheet accounts and three related income statement accounts are seeded with immaterial errors. The magnitudes of the errors are controlled so the financial statement account balances are materially correct both individually and in the aggregate. The study examines six materiality heuristics for each of three industry classifications and three different error distribution patterns. For each heuristic/industry combination and error distribution pattern, a 95 percent confidence interval is generated for nine financial ratios. Results indicate that immaterial errors may combine to create substantial variances in some ratios. Profitability ratios based on income statement accounts display wide confidence intervals, while solvency ratios based on balance sheet accounts display relatively narrow intervals. Comparison between a standard normal distribution and a nonsymmetrical error distribution indicates that ratio variances are substantial and sensitive to error patterns even when errors are immaterial. Tests for equality of variances identify significant differences between heuristic methods and between industries. When deciding whether to require an adjusting entry for, or to waive, discovered errors, the auditor should consider the impact of such errors not only on financial statement balances, but also on the ways users may combine those balances.
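The simulation design described above can be sketched in miniature (a generic Monte Carlo with made-up balances and materiality threshold, not the study's actual accounts, heuristics, or distributions): seed individually immaterial errors into two related accounts and build a confidence interval for the resulting ratio.

```python
import random

random.seed(1)

# hypothetical balances and materiality threshold (illustrative numbers,
# not taken from the study)
net_income, total_revenue = 100.0, 1000.0
materiality = 5.0   # each seeded error stays individually immaterial

ratios = []
for _ in range(10_000):
    income_err = random.uniform(-materiality, materiality)
    revenue_err = random.uniform(-materiality, materiality)
    ratios.append((net_income + income_err) / (total_revenue + revenue_err))

ratios.sort()
lo, hi = ratios[250], ratios[-251]   # approximate 95% confidence interval
```

Even in this toy version, the interval around the error-free ratio of 0.10 shows how "immaterial" account errors translate into visible ratio variance.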
Classical supervised learning from a training set of labelled examples assumes that the labels are correct. But in reality labelling errors may originate, for example, from human mistakes, diverging human opinions, or errors of the measuring instruments. In such cases the training set is misleading and in consequence the learning may suffer. In this thesis we consider probabilistic modelling of random label noise. The goal of this research is two-fold. First, to develop new improved algorithms and architectures from a principled footing which are able to detect and bypass the unwanted effects of mislabelling. Second, to study the performance of such methods both empirically and theoretically. We build upon two classical probabilistic classifiers, the normal discriminant analysis and the logistic regression and introduce the label-noise robust versions of these classifiers. We also develop useful extensions such as a sparse extension and a kernel extension in order to broaden applicability of the robust classifiers. Finally, we devise an ensemble of the robust classifiers in order to understand how the robust models perform collectively. Theoretical and empirical analysis of the proposed models show that the new robust models are superior to the traditional approaches in terms of parameter estimation and classification performance.
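The flip-noise model underlying such label-noise robust classifiers can be written in a few lines (a generic formulation with our own names and flip rates; the thesis's exact parameterisation may differ): the probability of the observed label mixes the clean class posterior with the flip probabilities.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def observed_label_prob(z, gamma01, gamma10):
    """Probability that the OBSERVED label is 1 when the clean class
    posterior is sigmoid(z) and labels flip with rates gamma01 (0 -> 1)
    and gamma10 (1 -> 0)."""
    p1 = sigmoid(z)
    return (1.0 - gamma10) * p1 + gamma01 * (1.0 - p1)
```

Training a logistic regression against this observed-label likelihood, rather than the clean one, is what makes the classifier robust: the flip rates absorb mislabelling instead of distorting the decision boundary.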
Afifi, Faten Helmy. "Detecting errors in nonlinear functions for computer software." Case Western Reserve University School of Graduate Studies / OhioLINK, 1992. http://rave.ohiolink.edu/etdc/view?acc_num=case1060013182.
Diagnostic accuracy is an important index of the quality of health care service. Missed, wrong, or delayed diagnoses have a direct effect on patient safety. Diagnostic errors have been discussed at length; however, the field still lacks a systemic research approach. This thesis takes the diagnostic process as a system and develops a systemic model of diagnostic errors by implementing system dynamics modelling combined with regression analysis. It aims to propose a better way of studying diagnostic errors, as well as a deeper understanding of how factors affect the number of possible errors at each step of the diagnostic process and how factors ultimately contribute to patient outcomes. The work proceeds in two parts. In the first part, a qualitative model is developed to demonstrate how errors can happen during the diagnostic process; in other words, the model illustrates the connections among key factors and dependent variables. It starts by discovering key factors of diagnostic errors, producing a hierarchical list of factors, and then illustrates interrelation loops that show how relevant factors are linked with errors. The qualitative model is based on the findings of a systematic literature review and further refined by experts' reviews. In the second part, a quantitative model is developed to provide system behaviour simulations, which demonstrate the quantitative relations among factors and errors during the diagnostic process. Regression analysis is used to estimate the quantitative relationships among multiple factors and their dependent variables during the diagnostic phases of history taking and physical examination. The regression models are then applied in quantitative system dynamics 'stock and flow' diagrams. The quantitative model traces error flows during the diagnostic process and simulates how the change of one or more variables affects diagnostic errors and patient outcomes over time.
The change of the variables may reflect a change in demand from policy or a proposed external intervention. The results suggest the systemic model has the potential to help understand diagnostic errors, observe model behaviours, and provide risk-free simulation experiments for possible strategies.
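The 'stock and flow' structure described in this abstract can be illustrated with a minimal sketch (a generic one-stock model with invented rates, not the thesis's calibrated model): errors flow into a stock as patients are diagnosed and drain out as they are detected.

```python
# Minimal stock-and-flow sketch: a fraction of diagnoses go wrong and
# accumulate in an "undetected errors" stock, while a detection process
# drains it (illustrative structure and rates only)
dt = 1.0                 # time step, e.g. one day
arrivals = 100.0         # patients entering diagnosis per step
error_rate = 0.05        # fraction of diagnoses that go wrong
detection_rate = 0.5     # fraction of accumulated errors caught per step

errors = 0.0             # the stock of undetected diagnostic errors
history = []
for _ in range(30):
    inflow = arrivals * error_rate      # new errors per step
    outflow = errors * detection_rate   # errors caught and corrected
    errors += dt * (inflow - outflow)   # Euler integration of the stock
    history.append(errors)
```

The stock settles at inflow/detection_rate (here 10), and changing any rate, as the thesis does via its regression-estimated relationships, shifts that equilibrium.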
Kamal, Muhammad. "Software design methods and errors." Thesis, University of Liverpool, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.317143.
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006. Includes bibliographical references (p. 85-86). Facemail is a system designed to investigate and prevent common errors that users make while composing emails. Users often accidentally send email to incorrect recipients by mistyping an email address, accidentally clicking "Reply-to-all" rather than just "Reply", or using the wrong email address altogether. Facemail is a user interface addition to an email client that provides the user with more information about the recipients of their email by showing their actual faces. This form of information is much more usable than the simple text in current displays, and it allows the user to determine whether his email is going to the correct people with only a glance. This thesis discusses the justification for this system, as well as the challenges that arose in making it work. In particular, it discusses how to acquire images of users based on their email address, and how to interact with lists, both in learning their members as well as displaying them to the user. This thesis discusses how Facemail fits into current research as well as how its ideas could be expanded into further research. by Eric Lieberman. M.Eng.
Jeffrey, Dennis Bernard. "Dynamic state alteration techniques for automatically locating software errors." Diss., [Riverside, Calif.] : University of California, Riverside, 2009. http://proquest.umi.com/pqdweb?index=0&did=1899476671&SrchMode=2&sid=2&Fmt=2&VInst=PROD&VType=PQD&RQT=309&VName=PQD&TS=1268336630&clientId=48051.
Thesis (Ph. D.)--University of California, Riverside, 2009. Includes abstract. Title from first page of PDF file (viewed March 11, 2010). Available via ProQuest Digital Dissertations. Includes bibliographical references (p. 223-234). Also issued in print.
Liu, Jiaqi. "Handling Soft and Hard Errors for Scientific Applications." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1483632126075067.
Thesis (B.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1990. Includes bibliographical references (leaf 71). By Alice Ai-Yuan Chang.
Lyons, Laura Christine. "An investigation of systematic errors in machine vision hardware." Thesis, Georgia Institute of Technology, 1989. http://hdl.handle.net/1853/16759.
We present a detailed study of roundoff errors in probabilistic floating-point computations. We derive closed-form expressions for the distribution of roundoff errors associated with a random variable, and we prove that roundoff errors are generally close to being uncorrelated with their generating distribution. Based on these theoretical advances, we propose a model of IEEE floating-point arithmetic for numerical expressions with probabilistic inputs and an algorithm for evaluating this model. Our algorithm provides rigorous bounds to the output and error distributions of arithmetic expressions over random variables, evaluated in the presence of roundoff errors. It keeps track of complex dependencies between random variables using an SMT solver, and is capable of providing sound but tight probabilistic bounds to roundoff errors using symbolic affine arithmetic. We implemented the algorithm in the PAF tool, and evaluated it on FPBench, a standard benchmark suite for the analysis of roundoff errors. Our evaluation shows that PAF computes tighter bounds than current state-of-the-art on almost all benchmarks.
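The roundoff errors this abstract analyses can be observed directly in IEEE double precision (a standard demonstration, independent of the PAF tool itself): each correctly rounded operation carries a relative error bounded by half the machine epsilon.

```python
import sys

# machine epsilon of IEEE double precision; the relative error of a single
# correctly rounded operation is at most eps / 2
eps = sys.float_info.epsilon          # 2**-52

# a classic instance: 0.1 and 0.2 are not exactly representable in binary,
# so their computed sum differs from the nearest double to 0.3
roundoff = abs((0.1 + 0.2) - 0.3)
```

Bounding how such per-operation errors accumulate and distribute across whole expressions over random inputs is exactly the problem the paper's probabilistic model addresses.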
Sage, Kingsley. "Dealing with Errors." In Undergraduate Topics in Computer Science, 99–122. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-13304-7_6.
Turner, Peter R., Thomas Arildsen, and Kathleen Kavanagh. "Number Representations and Errors." In Texts in Computer Science, 15–33. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-89575-8_2.
de Alfaro, Luca, Thomas A. Henzinger, and Freddy Y. C. Mang. "Detecting Errors Before Reaching Them." In Computer Aided Verification, 186–201. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/10722167_17.
Doerr, Daniel, Ilan Gronau, Shlomo Moran, and Irad Yavneh. "Stochastic Errors vs. Modeling Errors in Distance Based Phylogenetic Reconstructions." In Lecture Notes in Computer Science, 49–60. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-23038-7_5.
Hancock, Edwin R., and Richard C. Wilson. "Rectifying structural matching errors." In Recent Developments in Computer Vision, 353–62. Berlin, Heidelberg: Springer Berlin Heidelberg, 1996. http://dx.doi.org/10.1007/3-540-60793-5_89.
Gribov, Alexander, and Eugene Bodansky. "Vectorization and Parity Errors." In Lecture Notes in Computer Science, 1–10. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11767978_1.
Posocco, Nicolas, and Antoine Bonnefoy. "Estimating Expected Calibration Errors." In Lecture Notes in Computer Science, 139–50. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-86380-7_12.
Forkner, John F., and Richard C. Juergens. "Computer Simulation Of Manufacturing Errors." In 1988 Los Angeles Symposium--O-E/LASE '88, edited by Robert E. Fischer and Donald C. O'Shea. SPIE, 1988. http://dx.doi.org/10.1117/12.944338.
Polastro, Mateus, and Nalvo Almeida Jr. "OCR errors and their effects on computer forensics." In The Sixth International Conference on Forensic Computer Science. ABEAT, 2011. http://dx.doi.org/10.5769/c2011012.
Cao, Shu-Wen, Wen-Ming Yao, and Tie-Bang Xie. "Computer-aided evaluation of spiral surface profile errors." In Measurement Technology and Intelligent Instruments, edited by Li Zhu. SPIE, 1993. http://dx.doi.org/10.1117/12.156448.
Afifi, Faten H., Lee J. White, and Steven J. Zeil. "Testing for linear errors in nonlinear computer programs." In Proceedings of the 14th International Conference on Software Engineering. New York, NY, USA: ACM Press, 1992. http://dx.doi.org/10.1145/143062.143096.
Afifi, F. H., S. J. Zeil, and L. J. White. "Testing for linear errors in nonlinear computer programs." In International Conference on Software Engineering. IEEE, 1992. http://dx.doi.org/10.1109/icse.1992.753492.
Griffin, Jean. "Worked Examples with Errors for Computer Science Education." In ICER '15: International Computing Education Research Conference. New York, NY, USA: ACM, 2015. http://dx.doi.org/10.1145/2787622.2787741.
Felipe Kraus, Luiz, Bruno Schafaschek, and Samuel Da Silva Feitosa. "Desenvolvimento de um Gerador de Programas Aleatórios em Java" [Development of a Random Program Generator in Java]. In Computer on the Beach. São José: Universidade do Vale do Itajaí, 2021. http://dx.doi.org/10.14210/cotb.v12.p485-487.
With great advances in the computer science area, where technological systems are becoming more and more complex, tests are hard to perform. The problem is even more serious in critical systems, such as flight control or nuclear systems, where an error can cause catastrophic damage to our society. Currently, two techniques are often used for software validation: testing and software verification. This project targets the testing area, generating random programs to be used as input to property-based tests, in order to detect errors in systems and libraries, minimizing the possibility of errors. More specifically, Java programs will be automatically generated from existing classes and interfaces, considering all syntactic and semantic constraints of the language.
Basin, David, Saša Radomirović, and Lara Schmid. "Modeling Human Errors in Security Protocols." In 2016 IEEE 29th Computer Security Foundations Symposium (CSF). IEEE, 2016. http://dx.doi.org/10.1109/csf.2016.30.
Railing, Brian. "Session details: Paper Session: Errors." In SIGCSE '18: The 49th ACM Technical Symposium on Computer Science Education. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3247851.
Kelly, A. E., D. Sleeman, R. D. Ward, and R. Martinak. TPIXIE: A Computer Program to Teach Diagnosis of Algebra Errors. Fort Belvoir, VA: Defense Technical Information Center, July 1988. http://dx.doi.org/10.21236/ada199015.
Lozier, Daniel W., and Peter R. Turner. Error-bounding in level-index computer arithmetic. Gaithersburg, MD: National Institute of Standards and Technology, 1995. http://dx.doi.org/10.6028/nist.ir.5724.
Viterbi, Andrew J., Jack K. Wolf, Lyle J. Fredrickson, Jeff A. Levin, and Robert D. Blakeney. Research in Mathematics and Computer Science: Calculation of the Probability of Undetected Error for Certain Error Detection Codes. Phase 2. Fort Belvoir, VA: Defense Technical Information Center, May 1991. http://dx.doi.org/10.21236/ada238234.
Johannesson, G., and D. Lucas. Detecting and Testing for Structural Error in Computer Models with Application to the Community Atmospheric Model. Office of Scientific and Technical Information (OSTI), March 2014. http://dx.doi.org/10.2172/1129137.