To see the other types of publications on this topic, follow the link: Design error.

Dissertations / Theses on the topic 'Design error'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Design error.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Tarnoff, David. "Episode 8.01 – Intro to Error Detection." Digital Commons @ East Tennessee State University, 2020. https://dc.etsu.edu/computer-organization-design-oer/57.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Bastos, Rodrigo Possamai. "Design of a soft-error robust microprocessor." Biblioteca Digital de Teses e Dissertações da UFRGS, 2006. http://hdl.handle.net/10183/8127.

Full text
Abstract:
The advance of IC technologies raises important issues related to the reliability and robustness of electronic systems. Transistor scaling through shrinking geometries, voltage reduction, smaller capacitances (and therefore smaller currents and charges to supply the circuits), together with higher clock frequencies, have made ICs more vulnerable to faults, especially those caused by electrical noise or radiation-induced effects. The radiation-induced effects known as Soft Single Event Effects (Soft SEEs) can be classified into: direct Single Event Upsets (SEUs) at nodes of storage elements that result in bit flips; and Single Event Transient (SET) pulses at any circuit node. SETs on combinational circuits, in particular, may propagate up to the storage elements and be captured. These erroneous storages can also be called indirect SEUs. Faults like SETs and SEUs can provoke errors in the functional operations of an IC. The well-known Soft Errors (SEs) are characterized by values stored wrongly in memory elements during the use of the IC. They can have serious consequences in IC applications due to their non-permanent and non-recurring nature. For these reasons, protection mechanisms against SEs using fault-tolerance techniques, at least at one abstraction level of the design, are currently fundamental to improving system reliability. In this dissertation work, a fault-tolerant IC version of a mass-produced 8-bit microprocessor from the M68HC11 family was designed. It is able to tolerate SETs and SEUs. Based on the Triple Modular Redundancy (TMR) and Time Redundancy (TR) fault-tolerance techniques, a protection scheme was designed and implemented at high level in the target microprocessor using only standard logic gates. The designed scheme preserves the standard-architecture characteristics in such a way that the reusability of microprocessor applications is guaranteed. A typical IC design flow was carried out by means of commercial CAD tools. Functional testing and fault-injection simulations through benchmark executions were performed as design verification. Furthermore, fault-tolerant IC design issues and results in area, performance and power were compared with a non-protected version of the microprocessor. The core area increased by 102.64% to protect the target circuit against SETs and SEUs. Performance degraded by 12.73% and power consumption grew by around 49% for a set of benchmarks. The resulting area of the robust chip was approximately 5.707 mm².
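The core of the protection scheme described above is Triple Modular Redundancy, in which a majority voter masks a fault in any single copy. A minimal software sketch of the voting principle (illustrative only, not the author's gate-level implementation):

    # Bitwise majority vote over three redundant copies: any single corrupted
    # copy is out-voted by the two intact ones.
    def tmr_vote(a: int, b: int, c: int) -> int:
        return (a & b) | (a & c) | (b & c)

    word = 0b10110010
    upset = word ^ 0b00000100                    # SEU flips one bit in one copy
    assert tmr_vote(word, word, upset) == word   # the fault is masked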
APA, Harvard, Vancouver, ISO, and other styles
3

Herman, Eric. "Efficient Error Analysis Assessment in Optical Design." Thesis, The University of Arizona, 2014. http://hdl.handle.net/10150/321608.

Full text
Abstract:
When designing a lens, cost and manufacturing concerns are extremely challenging, especially with radical optical designs. The tolerance process is the bridge between design and manufacturing. This thesis presents and implements three techniques that improve the interaction between lens design and engineers. First, a method to accurately model optomechanical components within lens design is developed and implemented; modeling optomechanical components is shown to improve yield by approximately 3%. Second, a method utilizing aberration theory is applied to discover the potential tolerance sensitivity of an optical system throughout the design process; the use of aberration theory gives an engineer ways to compensate for errors. Third, a method using tolerance grade mapping is applied to the error values of an optical system; this mapping creates a simplified way to compare individual tolerances and lens designs.
APA, Harvard, Vancouver, ISO, and other styles
4

Yankopolus, Andreas George. "Adaptive Error Control for Wireless Multimedia." Diss., Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/5237.

Full text
Abstract:
Future wireless networks will be required to support multimedia traffic in addition to traditional best-effort network services. Supporting multimedia traffic on wired networks presents a large number of design problems, particularly for networks that run connectionless data transport protocols such as the TCP/IP protocol suite. These problems are magnified for wireless links, as the quality of such links varies widely and uncontrollably. This dissertation presents new tools developed for the design and realization of wireless networks including, for the first time, analytical channel models for predicting the efficacy of error control codes, interleaving schemes, and signalling protocols, and several novel algorithms for matching and adapting system parameters (such as error control and frame length) to time-varying channels and Quality of Service (QoS) requirements.
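The adaptation idea in the last sentence, matching error-control parameters to a time-varying channel, can be sketched as a simple policy; the thresholds and code rates below are invented for illustration, not values from the dissertation:

    # Pick an error-control configuration from the current channel estimate.
    def select_error_control(estimated_ber: float) -> str:
        if estimated_ber < 1e-5:
            return "rate-7/8 code, short interleaver"    # clean channel
        if estimated_ber < 1e-3:
            return "rate-1/2 code, moderate interleaver"
        return "rate-1/3 code, deep interleaver"         # harsh channel

    for ber in (1e-6, 1e-4, 1e-2):
        print(ber, "->", select_error_control(ber))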
APA, Harvard, Vancouver, ISO, and other styles
5

Ling, Xiang. "Adaptive design in dose-response studies." Columbus, Ohio : Ohio State University, 2005. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1133365136.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Meyer, Jan. "Textile pressure sensor : design, error modeling and evaluation /." Zürich : ETH, 2008. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=18050.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Leeke, Matthew. "Towards the design of efficient error detection mechanisms." Thesis, University of Warwick, 2011. http://wrap.warwick.ac.uk/52394/.

Full text
Abstract:
The pervasive nature of modern computer systems has led to an increase in our reliance on such systems to provide correct and timely services. Moreover, as the functionality of computer systems is being increasingly defined in software, it is imperative that software be dependable. It has previously been shown that a fault intolerant software system can be made fault tolerant through the design and deployment of software mechanisms implementing abstract artefacts known as error detection mechanisms (EDMs) and error recovery mechanisms (ERMs), hence the design of these components is central to the design of dependable software systems. The EDM design problem, which relates to the construction of a boolean predicate over a set of program variables, is inherently difficult, with current approaches relying on system specifications and the experience of software engineers. As this process necessarily entails the identification and incorporation of program variables by an error detection predicate, this thesis seeks to address the EDM design problem from a novel variable-centric perspective, with the research presented supporting the thesis that, where it exists under the assumed system model, an efficient EDM consists of a set of critical variables. In particular, this research proposes (i) a metric suite that can be used to generate a relative ranking of the program variables in a software system with respect to their criticality, (ii) a systematic approach for the generation of highly-efficient error detection predicates for EDMs, and (iii) an approach for dependability enhancement based on the protection of critical variables using software wrappers that implement error detection and correction predicates that are known to be efficient. This research substantiates the thesis that an efficient EDM contains a set of critical variables on the basis that (i) the proposed metric suite is able, through application of an appropriate threshold, to identify critical variables, (ii) efficient EDMs can be constructed based only on the critical variables identified by the metric suite, and (iii) the criticality of the identified variables can be shown to extend across a software module such that an efficient EDM designed for that software module should seek to determine the correctness of the identified variables.
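Since an EDM is, by the definition above, a boolean predicate over program variables, its shape is easy to illustrate; the variables and the invariant in this sketch are hypothetical, standing in for the critical variables the metric suite would identify:

    # Hypothetical EDM: fires when monitored critical variables violate an
    # invariant derived from the specification.
    def edm(state: dict) -> bool:
        return not (0 <= state["altitude"] <= 45000
                    and state["mode"] in {"climb", "cruise", "descend"})

    corrupted = {"altitude": -3200, "mode": "cruise"}  # critical variable hit
    if edm(corrupted):
        print("error detected; hand off to the recovery mechanism (ERM)")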
APA, Harvard, Vancouver, ISO, and other styles
8

Altice, Nathan. "I Am Error." VCU Scholars Compass, 2012. http://scholarscompass.vcu.edu/etd/405.

Full text
Abstract:
I Am Error is a platform study of the Nintendo Family Computer (or Famicom), a videogame console first released in Japan in July 1983 and later exported to the rest of the world as the Nintendo Entertainment System (or NES). The book investigates the underlying computational architecture of the console and its effects on the creative works (e.g. videogames) produced for the platform. I Am Error advances the concept of platform as a shifting configuration of hardware and software that extends even beyond its ‘native’ material construction. The book provides a deep technical understanding of how the platform was programmed and engineered, from code to silicon, including the design decisions that shaped both the expressive capabilities of the machine and the perception of videogames in general. The book also considers the platform beyond the console proper, including cartridges, controllers, peripherals, packaging, marketing, licensing, and play environments. Likewise, it analyzes the NES’s extension and afterlife in emulation and hacking, birthing new genres of creative expression such as ROM hacks and tool-assisted speed runs. I Am Error considers videogames and their platforms to be important objects of cultural expression, alongside cinema, dance, painting, theater and other media. It joins the discussion taking place in similar burgeoning disciplines—code studies, game studies, computational theory—that engage digital media with critical rigor and descriptive depth. But platform studies is not simply a technical discussion—it also keeps a keen eye on the cultural, social, and economic forces that influence videogames. No platform exists in a vacuum: circuits, code, and console alike are shaped by the currents of history, politics, economics, and culture—just as those currents are shaped in kind.
APA, Harvard, Vancouver, ISO, and other styles
9

Garufi, David (David J.). "Error propagation in concurrent product development." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/118550.

Full text
Abstract:
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, 2018.
Cataloged from PDF version of thesis.
Includes bibliographical references (page 68).
System dynamics modelling is used to explore varying levels of concurrency in a typical design-build-produce project introducing a new product. Faster product life-cycles and demanding schedules have made it important to begin downstream work (build/manufacturing) while upstream work (design) is incomplete. Conceivably, this project concurrency improves project schedule and cost by forcing rework to be discovered and completed earlier in the project life. Depending on the type of project, some design errors may only be discoverable once the build phase has begun its work, namely systemic errors and assembly errors that cannot easily be discovered within the design phase. Pushing build activity earlier in the project allows the rework to be discovered earlier, shortening the overall effort required to complete the project. A mathematical simulation of two-phase rework cycles, created by James Lyneis using Vensim system modeling software, was tuned to match data from a disguised real project. Various start dates (as a function of project percentage complete) for downstream phases were explored to find optimal levels of concurrency. Project types were varied by exploring three levels of "rework discoverable within the design phase" to cover a range of project types. The simulation found that for virtually all project types, significant schedule and effort benefits can be gained by introducing the downstream phase as early as 30% to 40% into the project progress and ramping downstream effort over an extended period of time.
by David Garufi.
S.M. in Engineering and Management
APA, Harvard, Vancouver, ISO, and other styles
10

Mathew, Jimson. "Design techniques for low power on-chip error correction." Thesis, University of Bristol, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.492442.

Full text
Abstract:
As integrated circuit density increases, digital circuits characterized by high operating frequencies and low voltage levels will be increasingly susceptible to faults. Furthermore, it has recently been shown that for many digital signature and identification schemes, an attacker can inject faults into the hardware, and the resulting incorrect outputs may completely expose their secrets. On-chip error masking techniques such as error correction are one option to mitigate these problems. To this end, this thesis presents a framework of techniques for designing error correction circuits.
APA, Harvard, Vancouver, ISO, and other styles
11

Yang, Christopher Chuan-Chi 1968. "Active vision inspection: Planning, error analysis, and tolerance design." Diss., The University of Arizona, 1997. http://hdl.handle.net/10150/282424.

Full text
Abstract:
Inspection is a process used to determine whether a component deviates from a given set of specifications. In industry, we usually use a coordinate measuring machine (CMM) to inspect CAD-based models, but inspection using vision sensors has recently drawn more attention because of advances that have been made in computer and imaging technologies. In this dissertation, we introduce active vision inspection for CAD-based three-dimensional models. We divide the dissertation into three major components: (i) planning, (ii) error analysis, and (iii) tolerance design. In inspection planning, the inputs are boundary representation (object centered representation) and an aspect graph (viewer centered representation) of the inspected component; the output is a sensor arrangement for dimensioning a set of topologic entities. In planning, we first use geometric reasoning and object oriented representation to determine a set of topologic entities (measurable entities) to be dimensioned based on the manufactured features on the component (such as slot, pocket, hole etc.) and their spatial relationships. Using the aspect graph, we obtain a set of possible sensor settings and determine an optimized set of sensor settings (sensor arrangement) for dimensioning the measurable entities. Since quantization errors and displacement errors are inherent in an active vision system, we analyze and model the density functions of these errors based on their characteristics and use them to determine the accuracy of inspection for a given sensor setting. In addition, we utilize hierarchical interval constraint networks for tolerance design. We redefine network satisfaction and constraint consistency for the application in tolerance design and develop new forward and backward propagation techniques for tolerance analysis and tolerance synthesis, respectively.
APA, Harvard, Vancouver, ISO, and other styles
12

Lloyd, Jeffrey (Jeffrey M.). "Error propagation of optimal system design in a hierarchical enterprise." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/43096.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, System Design and Management Program, 2007.
Includes bibliographical references (p. 62-63).
Increased computing power has helped virtual engineering become common practice amongst product development firms. However, as capabilities increase, the desire to simulate ever larger systems has increased as well. To deal with the complexity and size of these systems, several techniques have been developed to decompose a system into smaller, more tractable subsystems. The drawback of this approach is a substantial decrease in computational efficiency; therefore the use of simplified models is encouraged and often required to reach convergence. In this thesis, a test model is introduced in which different forms of error can be introduced at each level. Error derived from both measurement inaccuracy and modeling inaccuracy is examined, coupled with the effect of system constraints. A hierarchical decomposition method is selected for its similarity to a typical enterprise organizational structure; in this manner, the results of the examination should be applicable to both system engineering methods and enterprise-level problems. The direction of error propagation within the hierarchical decomposition is determined, and the effects of robust design considerations and simple system constraints are revealed.
by Jeffrey Lloyd.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
13

Feng, Chi S. M. Massachusetts Institute of Technology. "Optimal Bayesian experimental design in the presence of model error." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/97790.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 87-90).
The optimal selection of experimental conditions is essential to maximizing the value of data for inference and prediction. We propose an information theoretic framework and algorithms for robust optimal experimental design with simulation-based models, with the goal of maximizing information gain in targeted subsets of model parameters, particularly in situations where experiments are costly. Our framework employs a Bayesian statistical setting, which naturally incorporates heterogeneous sources of information. An objective function reflects expected information gain from proposed experimental designs. Monte Carlo sampling is used to evaluate the expected information gain, and stochastic approximation algorithms make optimization feasible for computationally intensive and high-dimensional problems. A key aspect of our framework is the introduction of model calibration discrepancy terms that are used to "relax" the model so that proposed optimal experiments are more robust to model error or inadequacy. We illustrate the approach via several model problems and misspecification scenarios. In particular, we show how optimal designs are modified by allowing for model error, and we evaluate the performance of various designs by simulating "real-world" data from models not considered explicitly in the optimization objective.
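The expected-information-gain objective described above is commonly estimated with a nested Monte Carlo loop. A sketch on a stand-in linear-Gaussian model (the model, prior, noise level, and sample sizes are assumptions for illustration; a production implementation would also use log-sum-exp for numerical stability):

    import numpy as np

    def expected_information_gain(d, n=400, m=400, sigma=0.5, seed=0):
        rng = np.random.default_rng(seed)
        theta = rng.normal(size=n)                        # prior draws
        y = d * theta + rng.normal(scale=sigma, size=n)   # simulated outcomes
        log_lik = -0.5 * ((y - d * theta) / sigma) ** 2   # log p(y|theta) + c
        theta_in = rng.normal(size=(n, m))                # fresh prior draws
        inner = -0.5 * ((y[:, None] - d * theta_in) / sigma) ** 2
        log_ev = np.log(np.exp(inner).mean(axis=1))       # log p(y) + same c
        return float(np.mean(log_lik - log_ev))           # constants cancel

    # A larger design value separates parameter values better, so EIG rises:
    print(expected_information_gain(0.1), expected_information_gain(2.0))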
by Chi Feng.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
14

Tarnoff, David. "Episode 7.06 – Stupid Binary Tricks." Digital Commons @ East Tennessee State University, 2020. https://dc.etsu.edu/computer-organization-design-oer/56.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Rainey, Cameron Scott. "Error Estimations in the Design of a Terrain Measurement System." Thesis, Virginia Tech, 2013. http://hdl.handle.net/10919/50501.

Full text
Abstract:
Terrain surface measurement is an important tool in vehicle design work as well as pavement classification and health monitoring. Non-deformable terrain is the primary excitation to vehicles traveling over it, and it is therefore important to be able to quantify terrain surfaces. Knowledge of the terrain can be used in combination with vehicle models to predict the force loads the vehicles would experience while driving over the terrain surface. This is useful in vehicle design, as it can speed the design process through the use of simulation as opposed to prototype construction and durability testing. Additionally, accurate terrain maps can be used by highway engineers and maintenance personnel to identify deterioration in road surface conditions for immediate correction. Repeated measurements of terrain surfaces over an extended length of time also allow long-term pavement health monitoring.
Many systems have been designed to measure terrain surfaces, historically most of them measuring single line profiles, with more modern equipment capable of capturing three-dimensional measurements of the terrain surface. These modern systems are often constructed using a combination of sensors that allow the system to measure the relative height of the terrain with respect to the terrain measurement system. Additionally, these terrain measurement systems are equipped with sensors that allow the system to be located in some global coordinate space and its angular attitude to be estimated. Since all sensors return estimated values with some uncertainty, combining a group of sensors also combines their uncertainties, resulting in a system that is less precise than any of its individual components. In order to predict the precision of the system, the individual probability densities of the components must be quantified, in some cases transformed, and finally combined. This thesis provides a proof of concept for how such an evaluation of final precision can be performed.
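The combination of sensor uncertainties that the thesis quantifies follows the standard rule that independent errors add in quadrature, which is why the combined system is less precise than any single component; a toy numeric sketch with invented sigma values:

    import math

    # Illustrative 1-sigma error contributions of individual sensors (mm).
    sigmas_mm = {"laser scanner": 0.5, "attitude (IMU)": 1.2, "GPS height": 2.0}
    system_sigma = math.sqrt(sum(s ** 2 for s in sigmas_mm.values()))
    print(f"combined 1-sigma uncertainty: {system_sigma:.2f} mm")  # ~2.39 mm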

Master of Science
APA, Harvard, Vancouver, ISO, and other styles
16

Al-Jaralla, Reem Abdulla. "Optimal design for Bayesian linear hierarchical models with measurement error." Thesis, Imperial College London, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.248202.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Shryane, Nick. "Human error in the design of a safety-critical system." Thesis, University of Hull, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.418987.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Shin, In Jae. "Development of a theory-based ontology of design-induced error." Thesis, University of Bath, 2009. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.516953.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Ramsey, Jamie L. "Phase optimised general error diffusion for diffractive optical component design." Thesis, University of Strathclyde, 2013. http://oleg.lib.strath.ac.uk:80/R/?func=dbin-jump-full&object_id=22722.

Full text
Abstract:
Algorithms for developing Diffractive Optical Elements (DOEs) are improved to achieve real time holograms capable of switching at rates of 25 frames/second or greater. A Phase Optimised General Error Diffusion (POGED) algorithm optimised for quality and speed of generation of diffractive elements is the main contribution of the research. Compared to Simulated Annealing algorithms, a fourfold improvement in the speed of generation is achieved. The algorithm is further enhanced to operate in the Fresnel region with high diffraction efficiency and Signal-to-Noise Ratio (SNR). A number of different target reconstructions are simulated to determine validity and performance of the algorithm. Diffractive optical elements are fabricated to verify performance and a free space optical beam steering application is defined to further validate a DOE generated by POGED. The performance of the diffractive optical elements is proven through the design and characterisation of a free space optical interconnect amenable to harnessing the fast switching speeds of liquid crystal spatial light modulators.
APA, Harvard, Vancouver, ISO, and other styles
20

Yang, Sheng. "Error resilient techniques for storage elements of low power design." Thesis, University of Southampton, 2013. https://eprints.soton.ac.uk/355203/.

Full text
Abstract:
Over two decades of research has led to numerous low-power design techniques being reported. Two popular techniques are supply voltage scaling and power gating. This thesis studies the impact of these two design techniques on the reliability of embedded processor registers and memory systems in the presence of transient faults, with the aim of developing and validating efficient mitigation techniques that improve reliability at a small cost in energy consumption, performance and area overhead. This thesis presents three original contributions. The first contribution presents a technique for improving the reliability of embedded processors. A key feature of the technique is low cost, achieved through reuse of the scan chain for state monitoring, and it is effective because it can correct single and multiple bit errors through hardware and software respectively. To validate the technique, an ARM Cortex-M0 embedded microprocessor is implemented on an FPGA and further synthesised using 65-nm technology to quantify the cost in terms of area, latency and energy. It is shown that the presented technique has a small area overhead (8.6%) with less than 4% worst-case increase in critical path. The second contribution demonstrates, through measurements from 82 test chips, that the state integrity of flip-flops is sensitive to process, voltage and temperature (PVT) variation. A PVT-aware state protection technique is presented to ensure the state integrity of flip-flops while achieving maximum leakage savings. The technique consists of a characterisation algorithm and employs horizontal and vertical parity for error detection and correction. Silicon results show that flip-flop state integrity is preserved while achieving up to 17.6% reduction in retention voltage across the 82 dies. Embedded processor memory systems are susceptible to transient errors, and blanket protection of every part of the memory system through ECC is not cost effective. The final contribution addresses the reliability of embedded processor memory systems and describes an architectural simulation-based framework for joint optimisation of reliability, energy consumption and performance. Accurate estimation of memory reliability with targeted protection is proposed to identify and protect the most vulnerable parts of the memory system and minimise protection cost. Furthermore, L1-cache resizing together with voltage and frequency scaling is proposed for further energy savings while maintaining performance and reliability. The contributions presented are supported by detailed analyses using state-of-the-art design automation tools and in-house software tools, and validated using FPGA and silicon implementations of commercial low-power embedded processors.
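The horizontal-plus-vertical parity scheme in the second contribution locates a single bit flip at the intersection of the failing row parity and the failing column parity. A behavioural sketch (the register contents are arbitrary examples, not silicon data):

    def parities(words, width=8):
        row = [bin(w).count("1") & 1 for w in words]   # horizontal parity
        col = [sum((w >> b) & 1 for w in words) & 1 for b in range(width)]
        return row, col

    words = [0b10110010, 0b01101100, 0b00011111]
    row_ref, col_ref = parities(words)        # parities at store time

    words[1] ^= 1 << 5                        # SEU: word 1, bit 5 flips
    row_now, col_now = parities(words)
    bad_word = next(i for i, (a, b) in enumerate(zip(row_ref, row_now)) if a != b)
    bad_bit = next(i for i, (a, b) in enumerate(zip(col_ref, col_now)) if a != b)
    words[bad_word] ^= 1 << bad_bit           # flip it back: error corrected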
APA, Harvard, Vancouver, ISO, and other styles
21

Chen, Shaoqiang. "Manufacturing process design and control based on error equivalence methodology." [Tampa, Fla] : University of South Florida, 2008. http://purl.fcla.edu/usf/dc/et/SFE0002511.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Davison, Jennifer J. "Response surface designs and analysis for bi-randomization error structures." Diss., This resource online, 1995. http://scholar.lib.vt.edu/theses/available/etd-10042006-143852/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

France, Frederick M. "Design of an algorithm for minimizing Loran-C time difference error." Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1997. http://handle.dtic.mil/100.2/ADA337399.

Full text
Abstract:
Thesis (M.S. in Electrical Engineering and Electrical Engineer), Naval Postgraduate School, Sept. 1997.
Thesis advisors: Murali Tummala, Roberto Cristi. Includes bibliographical references (p. 191-192). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
24

Yilmaz, Yildiz Elif. "Experimental Design With Short-tailed And Long-tailed Symmetric Error Distributions." Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/12605191/index.pdf.

Full text
Abstract:
One-way and two-way classification models in experimental design, for both balanced and unbalanced cases, are considered when the errors have a Generalized Secant Hyperbolic distribution. Efficient and robust estimators for main and interaction effects are obtained by using the modified maximum likelihood (MML) estimation technique. Test statistics analogous to the normal-theory F statistics are defined to test main and interaction effects, and a test statistic for testing linear contrasts is defined. It is shown that test statistics based on MML estimators are efficient and robust. The methodology is also generalized to situations where the error distributions from block to block are non-identical.
APA, Harvard, Vancouver, ISO, and other styles
25

Lan, Ching Fu. "Design techniques for graph-based error-correcting codes and their applications." Texas A&M University, 2004. http://hdl.handle.net/1969.1/3329.

Full text
Abstract:
In Shannon's seminal paper, "A Mathematical Theory of Communication", he defined "channel capacity", which predicts the ultimate performance that transmission systems can achieve, and suggested that capacity is achievable by error-correcting (channel) coding. The main idea of error-correcting codes is to add redundancy to the information to be transmitted, so that the receiver can exploit the correlation between the transmitted information and the redundancy to correct or detect errors caused by the channel. The discovery of turbo codes and the rediscovery of Low-Density Parity-Check (LDPC) codes have revived research in channel coding with novel ideas and techniques for code concatenation, iterative decoding, graph-based construction, and design based on density evolution. This dissertation focuses on the design of graph-based channel codes such as LDPC and Irregular Repeat-Accumulate (IRA) codes via density evolution, and uses this technique to design IRA codes for scalable image/video communication and LDPC codes for distributed source coding, which can be considered a channel coding problem. The first part of the dissertation covers the design and analysis of rate-compatible IRA codes for scalable image transmission systems, presenting a density-evolution analysis of the effect of puncturing on IRA codes and an asymptotic analysis of system performance. In the second part, we consider designing source-optimized IRA codes. The idea is to take advantage of the Unequal Error Protection (UEP) capability that IRA codes have against errors because of their irregularity. In video and image transmission systems, performance is measured by Peak Signal-to-Noise Ratio (PSNR), and we propose an approach to design IRA codes optimized for this criterion. In the third part, we investigate the Slepian-Wolf coding problem using LDPC codes, addressing coding problems involving multiple sources and non-binary sources, as well as coding using multi-level and non-binary codes.
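The decoding idea behind graph-based codes can be pictured with a naive bit-flipping decoder: repeatedly flip the bit that participates in the most failing parity checks. The check matrix below is the tiny Hamming(7,4) matrix, a stand-in for the large, sparse matrices the dissertation actually designs:

    import numpy as np

    H = np.array([[1, 0, 1, 0, 1, 0, 1],      # each row is one parity check
                  [0, 1, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]])

    def bit_flip_decode(r, H, max_iters=10):
        r = r.copy()
        for _ in range(max_iters):
            syndrome = H @ r % 2
            if not syndrome.any():
                return r                       # all checks satisfied
            votes = H.T @ syndrome             # failing checks per bit
            r[np.argmax(votes)] ^= 1           # flip the most-suspect bit
        return r

    received = np.zeros(7, dtype=int)          # all-zero codeword sent
    received[4] ^= 1                           # channel flips one bit
    print(bit_flip_decode(received, H))        # [0 0 0 0 0 0 0]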
APA, Harvard, Vancouver, ISO, and other styles
26

Sefara, Mamphoko Nelly. "Design of a forward error correction algorithm for a satellite modem." Thesis, Stellenbosch : Stellenbosch University, 2001. http://hdl.handle.net/10019.1/52181.

Full text
Abstract:
Thesis (MScEng)--University of Stellenbosch, 2001.
One of the problems with any deep space communication system is that information may be altered or lost during transmission due to channel noise. It is known that any damage to the bit stream may lead to objectionable visual quality distortion of images at the decoder. The purpose of this thesis is to design an error correction and data compression algorithm for image protection, which will allow the communication bandwidth to be better utilized. The work focuses on Sunsat (Stellenbosch Satellite) images as test images. The robustness of the JPEG 2000 compression algorithm to random errors was investigated, with emphasis on how much the image is degraded after compression. Both the error control coding and the data compression strategy were then applied to a set of test images. The FEC algorithm combats some, if not all, of the simulated random errors introduced by the channel. The results show that random errors are corrected by a factor of 100 (x100) on all test images, and that a channel error probability of 10^-2 (10^-4 for the image data) causes little degradation in image quality.
APA, Harvard, Vancouver, ISO, and other styles
27

Shih, Che-Hua, and 石哲華. "HDL Design Error Diagnosis." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/98192058687103527305.

Full text
Abstract:
Master's thesis, National Chiao Tung University, Department of Electronics Engineering, ROC academic year 90.
The growing complexity of modern designs makes design error diagnosis a challenge for designers when a mismatch occurs between an HDL implementation and its design specification. In this thesis, we propose an efficient approach for automatic design error diagnosis. By analyzing the simulation outputs of the incorrect implementation, the approach can handle multiple errors in an HDL design simultaneously with only one test case. Furthermore, it reduces the error space by eliminating statements that have little or no possibility of being error sources, while retaining at least one error source in the space. Hence, the effort spent on the debugging process is reduced. Experiments conducted on real designs show very promising results, yielding smaller error spaces.
APA, Harvard, Vancouver, ISO, and other styles
28

Lliu, Ming Yu, and 劉明諭. "Soft error tolerant latch design." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/65134675597664163016.

Full text
Abstract:
Master's thesis, Chang Gung University, Department of Electrical Engineering, ROC academic year 99.
As process technology advances, transistor density increases and supply voltage scales down, leading to higher soft error rates. Reliability has therefore become a main challenge in IC design. Because latch circuits are especially sensitive to soft errors, this thesis proposes two soft error tolerant latch designs to enhance reliability. The first is an XOR-gate-based SEU-tolerant latch, modified from the state-of-the-art FERST design: by replacing the C-element in the redundant path with an XOR gate and adding a feedback loop at the output terminal, it achieves higher soft error tolerance with lower short-circuit power, shorter critical path delay, and a lower power-delay product (PDP). The second is an isolation-type soft error tolerant latch based on a preservation mechanism comprising preservation, decision, and feedback blocks, which achieves information redundancy with lower performance overhead. To keep soft errors from affecting the internal nodes of the C-element, the preservation and feedback blocks increase the critical charge of those nodes, lowering the SEU rate of the whole system. As a result, this design achieves better soft error tolerance with a lower power-delay product than other isolation-type SEU-tolerant latch designs. In a TSMC 90 nm process, the PDP of the proposed XOR-gate-based latch is 1.12 fJ, a 39.7% improvement over FERST; applied to the ISCAS'85 benchmark circuits, it improves the soft error rate (SER) by 74.3% compared with a conventional latch. The PDP of the proposed isolation-type latch is 1.02 fJ, a 45.1% improvement over FERST, with an SER improvement of 58.3% over a conventional latch on the same benchmarks.
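The C-element at the heart of both latch designs only propagates a value when its redundant inputs agree, which is what filters a transient on a single path. A behavioural model (a software sketch, not the transistor-level cell):

    class CElement:
        """Muller C-element: output follows the inputs only when they agree."""
        def __init__(self, init=0):
            self.q = init
        def update(self, a: int, b: int) -> int:
            if a == b:
                self.q = a    # both redundant paths agree: drive the new value
            return self.q     # disagreement (e.g. an SET): hold the old state

    c = CElement()
    assert c.update(1, 1) == 1   # legitimate transition on both paths
    assert c.update(0, 1) == 1   # glitch on one path only: output held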
APA, Harvard, Vancouver, ISO, and other styles
29

Hsiao, You-Cheng, and 蕭侑晟. "Error Compensation Design for Optical Encoders." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/55693703859498686434.

Full text
Abstract:
Master's thesis, National Cheng Kung University, Department of Systems and Naval Mechatronic Engineering, ROC academic year 104.
To improve the tolerance of miniature grating optical encoders to environmental noise in real time, and to meet the ultra-high accuracy requirements of robots, CNC machines, and various other equipment, this study uses three methods to reduce the measurement error caused by dirt, vibration, and component assembly misalignment. The main idea behind these methods is to use the measured signals, which contain phase errors, to remove the noise and recover the original signals, based on solving the parameters of a nonlinear system error model formulated from the relationship between input and output signals. Based on this system model, the three methods can be briefly described as follows: 1. correcting phase errors via an inversion method applied to the collected raw input data; 2. using the FFT to calculate the spectrum of the input signals, from which a phase difference is obtained for correcting the phase error; and 3. calculating the geometric relationship of the Lissajous figure of the two input signals, based on Pascal's theorem, to search for the optimal parameters of the nonlinear system error model and effectively reduce the phase error. To verify the compensation performance, the three methods are first simulated in Matlab. From the simulation results, the three methods show almost the same compensation performance; however, the inversion and FFT methods suffer from a heavy computational burden due to their complicated processing. The Pascal's-theorem-based method is therefore selected and realized in practice due to its simple computation.
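The third method operates on the Lissajous figure of the two quadrature channels. Once the gain, offset, and phase-error parameters of the error model have been identified, the encoder phase can be recovered in closed form; the sketch below simply assumes the fitted parameter values (they are set by hand here for illustration):

    import numpy as np

    A, B, p, q, phi = 1.0, 0.8, 0.05, -0.03, np.deg2rad(4)  # assumed model fit

    theta = np.linspace(0.1, 6.0, 7)             # true encoder phase samples
    u = A * np.cos(theta) + p                    # distorted cosine channel
    v = B * np.sin(theta + phi) + q              # distorted sine channel

    cos_t = (u - p) / A                          # undo gain and offset
    sin_t = ((v - q) / B - cos_t * np.sin(phi)) / np.cos(phi)  # undo phase error
    theta_hat = np.arctan2(sin_t, cos_t) % (2 * np.pi)
    print(np.allclose(theta_hat, theta))         # True: phase error removed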
APA, Harvard, Vancouver, ISO, and other styles
30

徐璠. "Optimal boresight error design of radomes." Thesis, 1994. http://ndltd.ncl.edu.tw/handle/51105471778967556371.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Ho, Chien-Peng, and 何健鵬. "Efficient Error-Tolerant Design for Scalable Video Transmission over Error-Prone Channels." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/95685076040640021755.

Full text
Abstract:
Doctoral dissertation, National Chiao Tung University, Institute of Computer Science and Engineering, ROC academic year 100.
Due to the growing maturity of broadband services, multimedia streaming systems and peer-to-peer on-demand services have gained vast popularity in recent years. Stable and reliable transmission of multimedia data is becoming increasingly important for multimedia communication over networks subject to packet erasures. To achieve stability and reliability, efficient fault-tolerant and error-resilient methods for multimedia communication are typically studied with analytical and numerical approaches, so as to attain multi-objective performance metrics. The characteristics of video traffic differ substantially from those of traditional data traffic in four ways. First, packet loss is the major cause of nondeterministic distortion on the Internet and may have a significant impact on the perceptual quality of streaming video. Second, the aggregate bandwidth requirements of video-on-demand services are still far in excess of what the existing communication network infrastructure can support. Third, although buffering on the client side provides an opportunity to absorb variations in transmission rates, it is not sufficient to guarantee the service quality of multimedia streams such as IPTV (Internet Protocol television) and VoIP (Voice over IP). Finally, most compressed media data are transmitted over lossy and error-prone networks, and a certain degree of quality degradation is tolerable because noise in some regions falls below the threshold of human visual perception. Thus, video transmission based on scalable coding and unequal error protection codes is one approach to maintaining acceptable media quality in a network. Error control for video communication and resource allocation in peer-to-peer multimedia systems remain open issues and are the focus of this work. In this thesis, we built scalable, error-resilient, and high-performance multimedia frameworks that adapt to changing network conditions. We developed a framework of fine-level packetization schemes for streaming 3D wavelet-based video content over lossy packet networks. An adaptive fine-granularity unequal error protection algorithm was proposed to allow a tradeoff between rate and distortion and to jointly adapt scalable source coding rates and the level of FEC protection. Experimental results show that the proposed framework strikes a fine balance between reconstructed video quality and the level of error protection under time-varying lossy channels. For P2P video streaming, we developed a replication strategy to optimize resource allocation based on a video-distortion technique for unstructured P2P overlay networks. Failure recovery is accomplished by distributing high-quality-impact and popular replicas to regions of low peer density or discontinuous areas. The results demonstrate the efficiency and robustness of the proposed method at compensating for network-induced errors, and the framework can be applied at a range of different scales of free-riding peers. Moreover, the proposed algorithm handles the load imposed on the system efficiently and improves the average visual quality of the overall system.
APA, Harvard, Vancouver, ISO, and other styles
32

Kao, Tien-Tsai, and 高天財. "Error Tolerability Analysis and Error-Tolerant Design Investigation of A JPEG2000 Image Encoder." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/90365775626282384297.

Full text
Abstract:
Master's thesis, National Sun Yat-sen University, Department of Electrical Engineering, ROC academic year 101.
JPEG2000 is an image compression standard formulated by the Joint Photographic Experts Group in 2000. The standard provides two compression modes: a lossless mode and a lossy mode. Compared with the JPEG standard, JPEG2000 achieves a higher compression ratio at the same compressed-image quality, and no blocking artifacts are generated after compression. Due to the shrinking of transistors, the problems of low yield, low reliability, and short lifetime have become more serious. Conventional test methods, which do not consider human insensitivity to minor noise in audio or video signals, may discard many electronic products containing minor manufacturing defects. Error tolerance, which aims to identify not only defect-free chips but also acceptable ones among the parts discarded by conventional test methodologies, is a promising way to improve the effective yield of chips. In this thesis we analyze the effects of defects in a JPEG2000 image compression circuit on image quality. We focus on the arithmetic computation circuitry of a JPEG2000 encoder, namely the discrete wavelet transform and quantization modules. We inject faults into these two parts and carefully discuss the resulting fault effects in terms of PSNR (Peak Signal-to-Noise Ratio) and SSIM (Structural Similarity). The experimental results show that some chips with defects can be accepted, and we classify them into several levels for different applications. We also provide redesign suggestions to reduce cost and raise yield.
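Of the two acceptance metrics, PSNR is straightforward to compute; a sketch with an invented single-pixel fault and an illustrative 35 dB acceptance threshold (the thesis's actual thresholds and fault models differ):

    import numpy as np

    def psnr(ref, test, peak=255.0):
        mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
        return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

    rng = np.random.default_rng(1)
    golden = rng.integers(0, 256, (64, 64), dtype=np.uint8)  # fault-free output
    faulty = golden.copy()
    faulty[10, 10] ^= 0x40                 # one stuck bit in one output pixel
    print(psnr(golden, faulty))            # ~48 dB, above a 35 dB threshold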
APA, Harvard, Vancouver, ISO, and other styles
33

Hsu, Cheng-Chih, and 許正治. "Design and error analysis of diffractive elements." Thesis, 1999. http://ndltd.ncl.edu.tw/handle/80006703676391084569.

Full text
Abstract:
Master's thesis, Chung Yuan Christian University, Department of Physics, ROC academic year 87.
The design of hybrid elements, which combine a conventional refractive surface with a surface-relief diffractive structure, is a new technique. The potential advantages of hybrid elements, namely high diffraction efficiency and a wide range of design parameters, make them a powerful technique today. High precision is required in manufacturing the diffractive profile, since efficiency is reduced by manufacturing errors. Therefore, error analysis, as well as design, is discussed in this thesis. Finally, a single diffractive element is designed to replace the traditional laser disk lens; its image quality is better than that of the traditional lens. In the future, diffractive elements could be widely used in other optical systems.
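A standard scalar-theory result behind this kind of efficiency/error analysis: an N-level staircase approximation of an ideal blazed profile has first-order diffraction efficiency [sin(pi/N)/(pi/N)]^2, so fabricating fewer (or misaligned) levels costs efficiency. A quick numeric check:

    import math

    for n_levels in (2, 4, 8, 16):
        x = math.pi / n_levels
        eta = (math.sin(x) / x) ** 2
        print(f"{n_levels:2d} phase levels: first-order efficiency {eta:.1%}")
    # 2: 40.5%, 4: 81.1%, 8: 95.0%, 16: 98.7%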
APA, Harvard, Vancouver, ISO, and other styles
34

Varatkar, Girish Vishnu. "Energy-efficient and error-tolerant digital design /." 2008. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3314923.

Full text
Abstract:
Thesis (Ph.D.)--University of Illinois at Urbana-Champaign, 2008.
Source: Dissertation Abstracts International, Volume: 69-05, Section: B, page: 3194. Adviser: Naresh R. Shanbhag. Includes bibliographical references (leaves 99-105). Available on microfilm from ProQuest Information and Learning.
APA, Harvard, Vancouver, ISO, and other styles
35

Yin, Yu-Fan, and 尹煜帆. "Error Candidate Reduction in Automated Design Debugging." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/28740053744870269145.

Full text
Abstract:
Master's thesis, National Taiwan University, Graduate Institute of Electronics Engineering, ROC academic year 100.
Given an erroneous design, functional verification returns an error trace containing a mismatch between the specification and the implementation of the design. Automated design debugging utilizes this error trace to identify candidates causing the error. Remarkable debugging works exist for handling large designs and long error traces; however, the quality of the error candidates remains poor, and it is hard for designers to locate the actual error source among hundreds or thousands of candidates. This thesis proposes a two-stage debugging framework that reduces the number of error candidates. The first stage performs a conventional debugging algorithm to obtain initial error candidates. In the second stage, alternative test sequences are generated by error injection, state selection, and error propagation path differentiation techniques. The alternative test sequences are then validated to produce alternative error traces. After debugging, redundant candidates can be removed if they are not in the intersection of the original candidate set and the new candidate set. Experimental results show that the proposed algorithm is able to remove more than 75% of error candidates, which demonstrates the viability of this approach in improving design debugging techniques.
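The reduction step amounts to a set intersection: the real error source must explain both the original and the alternative error traces. A toy sketch with hypothetical candidate locations:

    # Candidates reported for the original trace and for one alternative trace.
    from_original = {"alu.v:88", "ctrl.v:17", "dec.v:42", "fsm.v:105"}
    from_alternative = {"ctrl.v:17", "fsm.v:105", "mem.v:9"}

    survivors = from_original & from_alternative  # redundant candidates drop out
    print(sorted(survivors))                      # ['ctrl.v:17', 'fsm.v:105']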
APA, Harvard, Vancouver, ISO, and other styles
36

Chien, Po-Hao, and 簡伯豪. "Soft-Error Resilient SRAM by Error-Correction Code Design and Implementation for Satellite Application." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/94750821450233630904.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Zhang, Ming. "Analysis and design of soft-error tolerant circuits /." 2006. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3223763.

Full text
Abstract:
Thesis (Ph.D.)--University of Illinois at Urbana-Champaign, 2006.
Source: Dissertation Abstracts International, Volume: 67-07, Section: B, page: 4023. Adviser: Naresh R. Shanbhag. Includes bibliographical references (leaves 105-111). Available on microfilm from ProQuest Information and Learning.
APA, Harvard, Vancouver, ISO, and other styles
38

Peng, Chien Chang, and 彭建彰. "Design of Soft Error Tolerant Circuits and Architectures." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/4qfbyx.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Wu, Chu-Wen, and 吳主文. "VLSI Design of Timing-Error-Resilient Sorting Hardware." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/frn692.

Full text
Abstract:
Master's thesis, National Dong Hwa University, Department of Electrical Engineering, ROC academic year 104.
As the feature size of chips shrinks with advances in semiconductor technology, the size of transistors and their operating voltage keep decreasing. One of the major problems with advanced semiconductor technology is timing errors caused by variation of process, supply voltage, and temperature (PVT); device aging can also cause timing errors. Faced with such problems, conventional worst-case designs suffer from poor system performance: when timing errors happen, the computing result of the integrated circuit is incorrect, and although a worst-case frequency can ensure correctness, it sacrifices performance significantly. Hence, aggressive design of timing-error-resilient VLSI circuits is more and more important. This thesis proposes a technique for aggressive VLSI design of timing-error-resilient sorting hardware with error detection and fault tolerance: even if circuit timing errors occur, the design can still operate and produce correct output. Although this comes at the expense of a small amount of chip area and power consumption, it achieves circuit reliability while keeping high performance. We applied the technique to three sorting algorithms: Bubble Sort, Odd-Even Sort, and Bitonic Sort. For each sorter, two versions of the hardware were implemented for comparison: the original sorting hardware and a version with timing-error-tolerant capability. The implementation results show that our proposed designs achieve tolerance of timing errors at reasonable cost.
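Of the three sorters, odd-even (transposition) sort maps most directly to hardware because each phase is a rank of independent compare-and-swap cells. A software model of the network, minus the timing-error detection logic:

    def odd_even_sort(values):
        a = list(values)
        for phase in range(len(a)):                   # n phases sort n items
            for i in range(phase % 2, len(a) - 1, 2): # independent CAS cells
                if a[i] > a[i + 1]:
                    a[i], a[i + 1] = a[i + 1], a[i]
        return a

    print(odd_even_sort([7, 3, 9, 1, 4, 8, 2]))       # [1, 2, 3, 4, 7, 8, 9]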
APA, Harvard, Vancouver, ISO, and other styles
40

Peng, Shih-Lun, and 彭士倫. "A Soft Error Tolerant D Flip-Flop Design." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/56202763496143267495.

Full text
Abstract:
Master's thesis, National Taiwan University, Graduate Institute of Electronics Engineering, ROC academic year 100.
In recent years, the soft error problem has become an important reliability issue. Soft errors are a severe problem especially for memories and flip-flops. When particles strike a device, a transient pulse occurs; if this pulse flips the data stored in a memory or flip-flop, a soft error occurs. Soft errors can be classified into Single Event Transients (SETs), which occur in combinational logic, and Single Event Upsets (SEUs), which occur in memory elements such as flip-flops. Many designs have been proposed to tolerate soft errors in flip-flops; however, they usually incur large performance penalties and area overheads, so they are not practical in simple circuits. In this thesis, we propose a new flip-flop, the Soft Error Tolerant D Flip-Flop (SETDFF), which tolerates SEUs and has some SET tolerance. The SETDFF cell achieves the same performance as a general D flip-flop with less area overhead. Our design uses a C-element to tolerate soft errors: the two inputs of the C-element come from different signals carrying the same value. When an SEU or SET occurs, the inputs of the C-element disagree with each other, and the C-element rejects the transient pulse and blocks the soft error. Experimental results show that, compared with the BISER architecture proposed in other work, our SETDFF design achieves similar SEU tolerance and better SET tolerance, with 13% less performance penalty and 70% less area overhead. In our SETDFF design, the soft error rate (SER) depends on the input arrival time. We therefore propose the Soft Error Tolerant Time (SETT): as long as the data arrive within this time period, the SER is guaranteed to stay under a threshold. Circuit designers can balance performance against SER according to this information.
APA, Harvard, Vancouver, ISO, and other styles
41

Yen, Chia-Chih, and 顏嘉志. "Algorithms for Efficient Design Error Detection and Diagnosis." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/06604751304421872000.

Full text
Abstract:
Doctoral dissertation, National Chiao Tung University, Department of Electronics Engineering, ROC academic year 93.
Functional verification now accounts for most of the time spent in product development due to the increasing complexity of modern ASIC and SoC designs. Assertion-based verification (ABV) helps design teams identify and debug design errors more quickly than traditional techniques. It compares the implementation of a design against its specified assertions by embedding assertions in the design and having them monitor design activities. As a result, ABV is recognized as a critical component of the design verification process. In general, detecting and diagnosing design errors play the most important roles in ABV. Unfortunately, the proposed techniques for design error detection cannot keep up with the rapid growth of design complexity. Furthermore, the generated error traces are usually so lengthy that diagnosing counterexamples becomes very tedious and difficult. In this dissertation, we focus on three strategies that address the problem of efficiently detecting design errors and easing the debug process. We first propose a practical cycle bound calculation algorithm for guiding bounded model checking (BMC). Many reports have shown the effectiveness of BMC in design error detection; however, the flaw of BMC is that it needs a pre-computed bound to ensure completeness. To determine the bound, we develop a practical approach that works in a branch-and-bound manner. We reduce the search space by applying a partitioning as well as a pruning method. Furthermore, we propose a novel formulation and use a SAT solver to search states and thus determine the cycle bound. Experimental results show that our algorithm considerably enhances performance compared with previous work. We then propose an advanced semi-formal verification algorithm for identifying hard-to-detect design errors. Generally, semi-formal verification combines simulative and formal methods to tackle tough verification problems in a real industrial environment. Nevertheless, the monotonous cooperation of these heterogeneous methods cannot keep pace with the rapid growth of design complexity. Therefore, we propose an elegant algorithm that uses a divide-and-conquer paradigm to orchestrate those approaches. We introduce a partitioning technique to recursively divide a design into smaller components, and we present approaches for handling each divided component efficiently while keeping the entire design function correct. Experimental results demonstrate that our strategy detects many more design errors than traditional methods do. At last, we propose powerful error trace compaction algorithms to ease design error diagnosis. Usually, the error traces used to exercise and observe design behavior are so lengthy that designers must spend considerable effort to understand them. To alleviate designers' burden, we present a SAT-based algorithm for reducing the lengths of error traces. The algorithm not only halves the search space recursively but also guarantees to find the shortest lengths. Based on the optimum algorithm, we also develop two robust heuristics to handle real designs. Experimental results indicate that our approaches greatly surpass previous work and certainly give promising results.
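The halving idea behind the trace-compaction algorithms can be pictured as a binary search over prefix lengths, given a replay oracle. This toy assumes that failing prefixes remain failing as they grow, a monotonicity assumption the dissertation's SAT-based formulation does not need:

    def shortest_failing_prefix(trace, fails):
        lo, hi = 1, len(trace)          # assumes fails(trace) is already True
        while lo < hi:
            mid = (lo + hi) // 2
            if fails(trace[:mid]):
                hi = mid                # shorter prefix still exposes the bug
            else:
                lo = mid + 1
        return lo

    trace = list(range(100))
    print(shortest_failing_prefix(trace, lambda t: len(t) >= 37))   # 37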
APA, Harvard, Vancouver, ISO, and other styles
42

Liao, Jing-Cheng, and 廖經晟. "Error Diffusion Kernel Design to Improve Halftone Quality." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/04564995140133948501.

Full text
Abstract:
Master's
National United University
Master's Program, Department of Electronic Engineering
94
The image quality of error-diffusion (ED) halftones can be improved by estimating better error diffusion coefficients. In this thesis, we propose two algorithms for obtaining better halftone quality: one modifies the quantizer threshold in ED to improve the halftone, and the other fits an ED model to direct binary search (DBS) halftones. In the threshold-modification approach, we take the previous quantizer error, scale it by a constant, and add the result to the quantizer input to obtain better halftone quality. Additionally, we adopt the IBLS algorithm to estimate an ED kernel that yields a similar halftone when used in the ED process. In the ED model fitting of DBS, we take the halftone produced by the DBS process, adjust the error signal of the ED process by iterative comparison, and finally use IBLS to estimate the ED coefficients. The experimental results show clearly that the ED coefficients so obtained can resolve problems specific to ED, such as directional line artifacts. Furthermore, we produce a series of ED kernels estimated from halftones of different resolutions to form a table, which can be used for halftoning at different resolutions via simple table lookup.
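For context, the basic error-diffusion loop that such kernels plug into can be sketched as follows; the Floyd-Steinberg weights and the 0.5 threshold here are conventional placeholders, not the coefficients estimated by IBLS in the thesis:

    import numpy as np

    # Floyd-Steinberg weights, used here only as a placeholder kernel.
    KERNEL = [(0, 1, 7/16), (1, -1, 3/16), (1, 0, 5/16), (1, 1, 1/16)]

    def error_diffuse(img: np.ndarray, threshold: float = 0.5) -> np.ndarray:
        """Binarize a grayscale image in [0, 1], diffusing each pixel's
        quantizer error to its not-yet-processed neighbors."""
        work = img.astype(float).copy()
        out = np.zeros_like(work)
        h, w = work.shape
        for y in range(h):
            for x in range(w):
                out[y, x] = 1.0 if work[y, x] >= threshold else 0.0
                err = work[y, x] - out[y, x]
                for dy, dx, wgt in KERNEL:
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        work[ny, nx] += err * wgt
        return out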
APA, Harvard, Vancouver, ISO, and other styles
43

Peng, Guan-Lin, and 彭冠霖. "VLSI Design of a Timing-Error-Tolerant Microprocessor." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/05087571276420541666.

Full text
Abstract:
Master's
National Dong Hwa University
Department of Computer Science and Information Engineering
104
Due to the continual advance of semiconductor technology, the size of transistors and their operating voltage keep decreasing. Hence, the problems of wires and circuits being susceptible to noise, wire delay, and soft errors are getting worse. One of the challenges we face is timing errors in circuits. Timing errors may happen when transmitted data arrive later than the clock edge or simply have insufficient setup time. For VLSI circuits in advanced manufacturing processes, timing errors either reduce the operational reliability of circuits or force us to pessimistically tolerate a much slower clock. One solution to these problems is error-resilient design, in which VLSI circuits can detect and even correct errors. Such design is increasingly important for advanced microprocessors in modern technology across many recent applications.   This master's thesis employs timing-error-tolerant circuits in our 5-stage pipelined microprocessor designed with a 32-bit reduced MIPS instruction set. We implement our design using the cell-based IC design flow with the Verilog hardware description language. We have run extensive simulations to validate the timing-error-tolerant capability. We then use logic synthesis to generate the circuit and static timing analysis to verify that the timing delay meets our goal. The final steps are automatic place and route and physical verification. We measure the cost we have to pay for the timing-error-tolerant capability. It is shown that our design can improve the test coverage of the microprocessor at a reasonable cost.
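This abstract does not spell out the detection mechanism, but one common timing-error-tolerant scheme (Razor-style double sampling) compares the value captured at the clock edge with a shadow sample taken slightly later; the sketch below is ours, not the thesis's circuit:

    def detect_timing_error(sample_at_edge: int, shadow_sample: int) -> bool:
        """Razor-style check: the main flip-flop samples at the clock edge and
        a shadow latch samples after a small delay. If late-arriving data was
        still changing at the edge, the two samples disagree."""
        return sample_at_edge != shadow_sample

    # Data settles to 1 only after the clock edge: main captured 0, shadow sees 1.
    print(detect_timing_error(0, 1))  # -> True, trigger correction or replay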
APA, Harvard, Vancouver, ISO, and other styles
44

Luo, Wen-Hua, and 駱文華. "Chip Design of a Burst-Error-Correcting Viterbi Decoder." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/94074650062852055656.

Full text
Abstract:
Master's
National Taipei University of Technology
Graduate Institute of Computer, Communication and Control
89
This thesis proposes a chip design for a burst-error-correcting Viterbi decoder. The decoder can be applied to both random-error and burst-error channels. First, we propose two new algorithms: a burst-error-alarm algorithm that detects burst errors, and a burst-error-recovery algorithm that uses a recovery circuit to replace the data corrupted by burst errors. The recovered information is then sent to the Viterbi decoder again to correct any remaining random errors. To implement these algorithms, we use a 0.35 μm 1P4M silicide process to design a (2,1,7) (Q=8) burst-error-correcting Viterbi decoder chip. The decoder is composed of three circuit blocks: a (2,1,7) Viterbi decoder, a burst-error-alarm/burst-error-recovery circuit, and a control circuit. The chip contains 630K transistors and occupies an area of 3.0 mm by 3.1 mm. Experimental results show that the decoder achieves decoding rates of 106 Mb/s on random errors and 69.8 Mb/s on burst errors under 3.3 V, and it has a net coding gain of 3.6 dB at a 10^-4 bit error rate. Moreover, since we use a modular and hierarchical design methodology, we can easily extend our architecture to a larger VA decoder that meets the requirements of modern communication systems.
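For readers unfamiliar with Viterbi decoding itself, the following sketch implements a minimal hard-decision decoder for a toy (2,1,3) convolutional code with generators 7 and 5 in octal; this is a much smaller code than the thesis's (2,1,7) decoder and omits the burst-error circuitry entirely:

    G = [0b111, 0b101]  # rate-1/2, constraint length 3 (octal 7, 5)

    def parity(v: int) -> int:
        return bin(v).count("1") & 1

    def encode(bits):
        """Shift-register encoder: two output bits per input bit."""
        state, out = 0, []
        for b in bits:
            reg = (b << 2) | state          # current bit + two previous bits
            out += [parity(reg & g) for g in G]
            state = (reg >> 1) & 0b11       # shift the new bit into the state
        return out

    def viterbi(received, n):
        """Hard-decision Viterbi: keep the best-metric path into each state."""
        pm, bp = {0: 0}, []                 # path metrics, backpointers
        for t in range(n):
            r = received[2 * t: 2 * t + 2]
            npm, nbp = {}, {}
            for s, m in pm.items():         # add-compare-select over branches
                for b in (0, 1):
                    reg = (b << 2) | s
                    cost = m + sum(parity(reg & g) != x for g, x in zip(G, r))
                    ns = (reg >> 1) & 0b11
                    if ns not in npm or cost < npm[ns]:
                        npm[ns], nbp[ns] = cost, (s, b)
            pm = npm
            bp.append(nbp)
        s, bits = min(pm, key=pm.get), []   # traceback from the best end state
        for t in range(n - 1, -1, -1):
            s, b = bp[t][s]
            bits.append(b)
        return bits[::-1]

    msg = [1, 0, 1, 1, 0, 0]
    code = encode(msg)
    code[3] ^= 1                            # inject one random channel error
    assert viterbi(code, len(msg)) == msg   # the decoder corrects it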
APA, Harvard, Vancouver, ISO, and other styles
45

Chang, Ming-Da, and 張鳴達. "A Parallel Error Tolerance System Design for JPEG2000 Applications." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/u6mgq3.

Full text
Abstract:
Master's
National Dong Hwa University
Department of Electrical Engineering
96
Multimedia applications increasingly demand both high resolution and high quality. Besides JPEG, the JPEG2000 standard introduced in 2000 can serve high-quality images at low bit rates. JPEG2000 is the next-generation still-image compression standard, designed for a broad range of data compression applications. The standard is based on wavelet technology and layered coding in order to provide a feature-rich compressed image stream. The discrete wavelet transform (DWT) is the main part of the JPEG2000 standard; it uses the lifting structure to perform the forward and inverse transforms. After the discrete wavelet transform, signals are split into different subbands containing both time and frequency information, providing another way to analyze signals. However, implementations of the JPEG2000 codec are susceptible to computer-induced soft errors, so designing a fault-tolerance system with testability becomes an important issue. This thesis proposes a real-time and efficient fault-tolerance design with a pipelined architecture for the JPEG2000 image compression system. It detects and tests the discrete wavelet transform, the most important part of JPEG2000, and develops a mathematical proof that previously could not be implemented in hardware or circuitry. Using the original transform processing of the DWT together with tolerance weighting and the pipelined fault-tolerance architecture, the design performs testing of the DWT, so that real-time fault-tolerance testing can be achieved while the transform is running. The system targets soft errors occurring in the hardware circuit as well as computer-induced soft errors, performing both testing and fault tolerance. The proposed "Parallel Error Tolerance System Design for JPEG2000 Applications" concentrates on analyzing fault effects in the discrete wavelet transform and on using the pipelined fault-tolerance architecture to improve the reliability of the JPEG2000 image compression system and provide high-quality images.
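As a simplified illustration of guarding a wavelet transform with an arithmetic invariant (the thesis's tolerance-weighting scheme is more elaborate; the Haar filter and tolerance below are our stand-ins), note that a fault-free Haar lifting step satisfies sum(x) = 2 * sum(s), which a checker can verify on the fly:

    def haar_step(x):
        """One Haar step: averages (approximation s) and differences (detail d)."""
        s = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
        d = [x[2 * i + 1] - x[2 * i] for i in range(len(x) // 2)]
        return s, d

    def check_step(x, s, tol=1e-9):
        """ABFT-style invariant: sum(x) == 2 * sum(s) for a fault-free step.
        A soft error corrupting an approximation coefficient breaks it."""
        return abs(sum(x) - 2 * sum(s)) <= tol

    x = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]
    s, d = haar_step(x)
    assert check_step(x, s)      # fault-free: invariant holds
    s[2] += 1.0                  # inject a soft error into one coefficient
    assert not check_step(x, s)  # the checksum detects it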
APA, Harvard, Vancouver, ISO, and other styles
46

Hsiao, Wen-Rue, and 蕭文瑞. "Adaptive Compensator Design for Missile Radome Refraction Slope Error." Thesis, 1998. http://ndltd.ncl.edu.tw/handle/39740344454342902219.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Lu, Li-Yu, and 盧力瑀. "Design and Implementation of Error Rollback Mechanism for MapReduce." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/06903390387172592623.

Full text
Abstract:
Master's
Shu-Te University
Master's Program, Department of Computer Science and Information Engineering
101
With the vigorous development of the Internet, Internet access has become part of people's lives, so it is important to use the enormous resources on the Internet effectively. Traditional computing is not suited to the great amount of data handled nowadays; consequently, processing data with distributed-computing architectures has become the trend. This research examines Apache Hadoop, an open-source platform that uses a parallel-computing architecture. In the default Hadoop MapReduce architecture, a JobTracker failure interrupts the computation; moreover, when the JobTracker restarts, Hadoop cannot resume from the point where the computation was interrupted. This research therefore proposes a framework that uses a queue and Memcache so that, when the JobTracker fails, the system can still return to its original state.
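A minimal sketch of the checkpoint-and-resume idea, with a Python dict standing in for Memcache and a deque for the pending-task queue (all names here are illustrative, not the thesis's actual components):

    from collections import deque

    cache = {}  # stand-in for Memcache: state that survives a tracker restart

    def run_tracker(tasks):
        """Process tasks, checkpointing progress after each one so a
        restarted tracker resumes instead of recomputing everything."""
        done = cache.get("completed", 0)          # rollback point, if any
        queue = deque(tasks[done:])               # skip already-finished work
        while queue:
            task = queue.popleft()
            result = task()                       # execute one map/reduce task
            done += 1
            cache["completed"] = done             # checkpoint progress
            cache[f"result:{done}"] = result
        return [cache[f"result:{i}"] for i in range(1, done + 1)]

    results = run_tracker([lambda i=i: i * i for i in range(5)])
    print(results)  # [0, 1, 4, 9, 16]; a rerun skips the completed tasks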
APA, Harvard, Vancouver, ISO, and other styles
48

Pieterse, Hein. "Towards guidelines for error message design in digital systems." Diss., 2016. http://hdl.handle.net/2263/57499.

Full text
Abstract:
A part of many digital systems is the display of error messages. This research aims to create a set of guidelines for error message design in digital systems. These guidelines will enable designers and developers to create better error messages that convey the right information at the right time and in the right way; in other words, error messages that are necessary and effective. The first step in generating this set of guidelines was to perform a literature review to find existing theory applicable to the design of error messages. The literature review also covers research on warning design theory, on the assumption that warnings are, to some extent, similar to error messages and that the research surrounding them is therefore also applicable to error messages. The use of warning design theory stems from the lack of research on error messages and the comparative richness of the body of knowledge on warnings. From this literature review it was possible to propose a set of guidelines for error message design. The initially proposed guidelines were evaluated through two usability studies on an existing Internet banking website. The first usability study involved a heuristic evaluation of some of the error messages in the website, using the guidelines as heuristics. The second usability study entailed individual interviews with representative users, in which the same error messages used in the heuristic evaluation were evaluated. The results of the heuristic evaluation were used to determine whether the guidelines are effective; the effectiveness of a guideline indicates whether experts can easily use it to analyse error messages and detect possible usability problems. The results of the individual interviews were used to determine whether the proposed guidelines are valid; the validity of the guidelines measures how well the guidelines, and the suggestions raised by using them, reflect the pain points and concerns of users. The results of the two usability studies were also compared with one another for a further indication of the effectiveness and validity of the guidelines. From this analysis, some changes and additions were made to the initially proposed guidelines. These updates are expected to increase the effectiveness and validity of the guidelines compared with the initial versions; in other words, to make the guidelines easier to use and to enable experts to find usability problems that more closely match the concerns of users. The research followed the design science research methodology, completing only one iteration of the process. Subsequent iterations to further refine the proposed guidelines are left for future research.
Dissertation (MIT)--University of Pretoria, 2016.
tm2016
Informatics
MIT
Unrestricted
APA, Harvard, Vancouver, ISO, and other styles
49

Ye, Jian-Hong, and 葉建宏. "Mechanical Design and Error Compensation for a Carpentry Machine." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/90757538120339371530.

Full text
Abstract:
Master's
Chung Yuan Christian University
Graduate Institute of Mechanical Engineering
104
This thesis designs a carpentry machine and analyzes its machining path. The machine is designed as a gantry platform in which two servo motors drive the cutting tools in the X and Y directions. On the Z axis of the gantry platform, a hacksaw and a drill are installed for cutting and drilling. When the machine cuts wood, cutting vibration, tool wear, disturbances, and similar effects produce contour error, so the actual path deviates from the desired path and machining accuracy is reduced. To address the contour error arising from poor coordination of the multi-axis motion and from deviations between the commanded and actual machining paths, this study uses the cross-coupled control (CCC) method to compensate for and reduce the contour error, so that the actual cutting path closely approximates the preset path.
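For a straight path segment at angle theta, cross-coupled control estimates the contour error as the tracking-error component perpendicular to the path and splits the correction back onto the two axes. The sketch below uses a plain proportional gain of our choosing; practical CCC designs typically wrap a PID controller around the contour error:

    import math

    def ccc_correction(ex: float, ey: float, theta: float, gain: float = 1.0):
        """Cross-coupled control for a linear path at angle theta:
        estimate the contour error from the per-axis tracking errors
        (ex, ey) and distribute the correction to the X and Y axes."""
        eps = -ex * math.sin(theta) + ey * math.cos(theta)  # contour error
        ux = -gain * eps * math.sin(theta)                  # extra X command
        uy = gain * eps * math.cos(theta)                   # extra Y command
        return eps, ux, uy

    # Path along 45 degrees; the tool lags 0.1 mm in Y only.
    eps, ux, uy = ccc_correction(ex=0.0, ey=0.1, theta=math.radians(45))
    print(round(eps, 4), round(ux, 4), round(uy, 4))  # pushes the tool back onto the path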
APA, Harvard, Vancouver, ISO, and other styles
50

Chang, Yu-Wen, and 張玉雯. "High-Speed VLSI Architecture Design for Error Correcting Codes." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/51505583478216919282.

Full text
Abstract:
PhD
I-Shou University
PhD Program, Department of Electrical Engineering
94
In this dissertation, we study Reed-Solomon (RS) codes and Huffman codes, proposing several improved decoding algorithms developed to simplify the software and hardware implementation of both. First, in Chapter 3, a modified Euclidean decoding algorithm for solving Berlekamp's key equation when correcting errors only is presented. It solves for the error locator and error evaluator polynomials simultaneously without performing polynomial division or field-element inversion. In this algorithm, the number of iterations used to solve the equation is fixed, and the weights used to reduce the degree of the error evaluator polynomial at each iteration can be extracted from the coefficient at a fixed position. The algorithm therefore saves many control circuits and yields a regular, modular architecture. Next, in Chapter 4, a modified decoding algorithm for correcting both errors and erasures in an RS decoder is proposed, improving on the idea developed by Eastman: the errata locator and errata evaluator polynomials can be obtained simultaneously by initializing the Euclideanized BM algorithm with the erasure locator polynomial and the Forney syndromes. This modified algorithm is compared with the modified inverse-free BM algorithm proposed by Truong et al. through a software simulation in C++ and a hardware simulation in a hardware description language. An illustrative example using the (255, 239) RS code shows that the modified inverse-free BM algorithm is still faster in software simulation; however, the modified Euclideanized BM algorithm has lower delay in hardware implementation because of its pipelined structure. Moreover, VLSI architectures for these algorithms are developed. Each architecture has two main computation units, and each unit is constructed from a single simple type of processing element (PE). The architectures are therefore simple, modular, and regular, and can easily be configured for various applications. Finally, in Chapter 5, a direct mapping technique (DMT) based on the concepts of pattern partitioning and canonical codes for a JPEG Huffman decoder is proposed. It has two merits: the run/size symbols and codeword lengths can be obtained directly, and no AC Huffman codewords need to be stored in memory at the decoding end. In other words, a single table search suffices to obtain the corresponding symbol and codeword length, so decoding is faster; storage is also used more efficiently because the AC Huffman codewords are not stored at the decoder. Furthermore, a VLSI architecture for this algorithm is developed and included.
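For reference, the key equation mentioned above relates the syndrome polynomial S(x) of a t-error-correcting RS code to the error locator Λ(x) and the error evaluator Ω(x):

    Λ(x) · S(x) ≡ Ω(x)  (mod x^{2t}),   with deg Ω(x) < deg Λ(x) ≤ t.

The modified Euclidean algorithm iterates on this congruence until the degree condition is met; in the errors-and-erasures case of Chapter 4, S(x) is replaced by the Forney syndromes and the iteration is initialized with the erasure locator polynomial.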
APA, Harvard, Vancouver, ISO, and other styles