Dissertations / Theses on the topic 'Design error'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Design error.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Tarnoff, David. "Episode 8.01 – Intro to Error Detection." Digital Commons @ East Tennessee State University, 2020. https://dc.etsu.edu/computer-organization-design-oer/57.
Bastos, Rodrigo Possamai. "Design of a soft-error robust microprocessor." Biblioteca Digital de Teses e Dissertações da UFRGS, 2006. http://hdl.handle.net/10183/8127.
The advance of IC technologies raises important issues related to the reliability and robustness of electronic systems. Transistor scaling, reduced supply voltages, smaller capacitances (and therefore smaller currents and charges supplying the circuits), and higher clock frequencies have made ICs more vulnerable to faults, especially those caused by electrical noise or radiation-induced effects. The radiation-induced effects known as Soft Single Event Effects (Soft SEEs) can be classified into direct Single Event Upsets (SEUs) at nodes of storage elements, which result in bit flips, and Single Event Transient (SET) pulses at any circuit node. SETs on combinational circuits in particular may propagate up to the storage elements and be captured; these erroneous storages can also be called indirect SEUs. Faults like SETs and SEUs can provoke errors in the functional operation of an IC. The resulting Soft Errors (SEs) are characterized by values stored wrongly in memory elements during the use of the IC, and because of their non-permanent, non-recurring nature they can have serious consequences in IC applications. For these reasons, protection mechanisms that avoid SEs through fault-tolerance techniques, at least at one abstraction level of the design, are currently fundamental to improving system reliability. In this dissertation, a fault-tolerant IC version of a mass-produced 8-bit microprocessor from the M68HC11 family was designed that is able to tolerate SETs and SEUs. Based on the Triple Modular Redundancy (TMR) and Time Redundancy (TR) fault-tolerance techniques, a protection scheme was designed and implemented at high level in the target microprocessor using only standard logic gates. The scheme preserves the standard-architecture characteristics in such a way that the reusability of microprocessor applications is guaranteed.
A typical IC design flow was carried out with commercial CAD tools. Functional testing and fault-injection simulations through benchmark executions were performed as design verification. Furthermore, fault-tolerant IC design issues and the resulting area, performance, and power were compared with a non-protected version of the microprocessor. The core area increased by 102.64% to protect the target circuit against SETs and SEUs, performance degraded by 12.73%, and power consumption grew by around 49% for a set of benchmarks. The resulting area of the robust chip was approximately 5.707 mm².
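The TMR protection described in the Bastos abstract rests on a 2-of-3 majority vote over redundant copies of a computation. The sketch below is a generic behavioral model of that principle (an assumption for illustration, not the dissertation's actual gate-level scheme):

```python
# Behavioral model of Triple Modular Redundancy: three copies of a computation
# run in parallel and a majority voter masks a fault in any single copy.

def majority(a: int, b: int, c: int) -> int:
    """Bitwise 2-of-3 majority vote, as computed by a TMR voter."""
    return (a & b) | (b & c) | (a & c)

def tmr_execute(op, x, fault_mask=0, faulty_copy=None):
    """Run `op` three times; optionally flip bits in one copy to model an SEU."""
    results = [op(x), op(x), op(x)]
    if faulty_copy is not None:
        results[faulty_copy] ^= fault_mask  # inject a transient bit flip
    return majority(*results)

# A single upset in any one copy is masked by the vote:
inc = lambda v: (v + 1) & 0xFF
assert tmr_execute(inc, 0x41, fault_mask=0x10, faulty_copy=1) == 0x42
```

Two simultaneous upsets in different copies would defeat the vote, which is why TMR is usually combined with other techniques (such as the time redundancy the abstract mentions).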
Herman, Eric. "Efficient Error Analysis Assessment in Optical Design." Thesis, The University of Arizona, 2014. http://hdl.handle.net/10150/321608.
Yankopolus, Andreas George. "Adaptive Error Control for Wireless Multimedia." Diss., Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/5237.
Ling, Xiang. "Adaptive design in dose-response studies." Columbus, Ohio : Ohio State University, 2005. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1133365136.
Meyer, Jan. "Textile pressure sensor : design, error modeling and evaluation /." Zürich : ETH, 2008. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=18050.
Leeke, Matthew. "Towards the design of efficient error detection mechanisms." Thesis, University of Warwick, 2011. http://wrap.warwick.ac.uk/52394/.
Altice, Nathan. "I Am Error." VCU Scholars Compass, 2012. http://scholarscompass.vcu.edu/etd/405.
Garufi, David (David J. ). "Error propagation in concurrent product development." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/118550.
Full textCataloged from PDF version of thesis.
Includes bibliographical references (page 68).
System dynamics modelling is used to explore varying levels of concurrency in a typical design-build-produce project introducing a new product. Faster product life cycles and demanding schedules have made it important to begin downstream work (build/manufacturing) while upstream work (design) is incomplete. Conceivably, this project concurrency improves schedule and cost by forcing rework to be discovered and completed earlier in the project's life. Depending on the type of project, some design errors, namely systemic and assembly errors that cannot easily be discovered within the design phase, may only be discoverable once the build phase has begun. Pushing build activity earlier in the project allows this rework to be discovered earlier, shortening the overall effort required to complete the project. A mathematical simulation of two-phase rework cycles, built by James Lyneis in the Vensim® system modeling software, was tuned to match data from a disguised real project. Various start dates for the downstream phase (as a function of project percentage complete) were explored to find optimal levels of concurrency, and three levels of "rework discoverable within the design phase" were examined to cover a range of project types. The simulation found that for virtually all project types, significant schedule and effort benefits can be gained by introducing the downstream phase as early as 30% to 40% into project progress and ramping downstream effort over an extended period of time.
by David Garufi.
S.M. in Engineering and Management
Mathew, Jimson. "Design techniques for low power on-chip error correction." Thesis, University of Bristol, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.492442.
Yang, Christopher Chuan-Chi 1968. "Active vision inspection: Planning, error analysis, and tolerance design." Diss., The University of Arizona, 1997. http://hdl.handle.net/10150/282424.
Lloyd, Jeffrey (Jeffrey M. ). "Error propagation of optimal system design in a hierarchical enterprise." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/43096.
Full textIncludes bibliographical references (p. 62-63).
Increased computing power has helped virtual engineering become common practice amongst product development firms. However, as capabilities increase, so does the desire to simulate ever larger systems. To deal with the complexity and size of these systems, several techniques have been developed to decompose a system into smaller, more tractable subsystems. The drawback of this approach is a substantial decrease in computational efficiency, so the use of simplified models is encouraged and often required to reach convergence. In this thesis, a test model is introduced in which different forms of error can be introduced at each level. Error derived from both measurement inaccuracy and modeling inaccuracy is examined, coupled with the effect of system constraints. A hierarchical decomposition method is selected for its similarity to a typical enterprise organizational structure, so the results should be applicable both to systems engineering methods and to enterprise-level problems. The direction of error propagation within the hierarchical decomposition is determined, and the effects of robust design considerations and simple system constraints are revealed.
by Jeffrey Lloyd.
S.M.
Feng, Chi S. M. Massachusetts Institute of Technology. "Optimal Bayesian experimental design in the presence of model error." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/97790.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 87-90).
The optimal selection of experimental conditions is essential to maximizing the value of data for inference and prediction. We propose an information theoretic framework and algorithms for robust optimal experimental design with simulation-based models, with the goal of maximizing information gain in targeted subsets of model parameters, particularly in situations where experiments are costly. Our framework employs a Bayesian statistical setting, which naturally incorporates heterogeneous sources of information. An objective function reflects expected information gain from proposed experimental designs. Monte Carlo sampling is used to evaluate the expected information gain, and stochastic approximation algorithms make optimization feasible for computationally intensive and high-dimensional problems. A key aspect of our framework is the introduction of model calibration discrepancy terms that are used to "relax" the model so that proposed optimal experiments are more robust to model error or inadequacy. We illustrate the approach via several model problems and misspecification scenarios. In particular, we show how optimal designs are modified by allowing for model error, and we evaluate the performance of various designs by simulating "real-world" data from models not considered explicitly in the optimization objective.
by Chi Feng.
S.M.
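The core quantity in the Feng thesis, expected information gain (EIG), is commonly estimated with a nested Monte Carlo scheme: sample (theta, y) from the prior and likelihood, then estimate the evidence p(y) with an inner average over prior samples. The toy linear-Gaussian model below (y = theta*d + noise, theta ~ N(0,1)) is an illustrative assumption, chosen because its true EIG is known to be 0.5*ln(1 + d²/σ²):

```python
import math
import random

def eig_nested_mc(d, sigma=1.0, n_outer=500, n_inner=500, seed=0):
    """Nested Monte Carlo estimate of EIG for the design d in y = theta*d + noise."""
    rng = random.Random(seed)
    norm = sigma * math.sqrt(2 * math.pi)

    def lik(y, theta):  # Gaussian likelihood p(y | theta, d)
        return math.exp(-((y - theta * d) ** 2) / (2 * sigma ** 2)) / norm

    thetas = [rng.gauss(0, 1) for _ in range(n_inner)]  # prior samples for evidence
    total = 0.0
    for _ in range(n_outer):
        th = rng.gauss(0, 1)
        y = th * d + rng.gauss(0, sigma)
        evidence = sum(lik(y, t) for t in thetas) / n_inner  # p(y) estimate
        total += math.log(lik(y, th) / evidence)             # log p(y|theta) - log p(y)
    return total / n_outer

# A larger |d| separates the likelihoods more, so it is the more informative design:
assert eig_nested_mc(2.0) > eig_nested_mc(0.1)
```

The thesis goes further by adding calibration discrepancy terms for robustness to model error; this sketch only shows the baseline estimator that such a framework builds on.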
Tarnoff, David. "Episode 7.06 – Stupid Binary Tricks." Digital Commons @ East Tennessee State University, 2020. https://dc.etsu.edu/computer-organization-design-oer/56.
Rainey, Cameron Scott. "Error Estimations in the Design of a Terrain Measurement System." Thesis, Virginia Tech, 2013. http://hdl.handle.net/10919/50501.
Many systems have been designed to measure terrain surfaces, historically most of them as single-line profiles, with more modern equipment capable of capturing three-dimensional measurements of the terrain surface. These modern systems are often constructed from a combination of sensors that allow the system to measure the relative height of the terrain with respect to the measurement system itself. Additionally, these terrain measurement systems are equipped with sensors that locate the system in a global coordinate space and estimate its angular attitude. Since every sensor returns an estimated value with some uncertainty, combining a group of sensors also combines their uncertainties, resulting in a system that is less precise than any of its individual components. In order to predict the precision of the system, the individual probability densities of the components must be quantified, in some cases transformed, and finally combined. This thesis provides a proof of concept for how such an evaluation of final precision can be performed.
Master of Science
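The error-budget idea in the Rainey abstract, transforming each sensor's uncertainty into a common unit and combining independent contributions in quadrature, can be sketched as follows. The sensor values and the lever-arm transform are illustrative assumptions, not the thesis's actual instrument parameters:

```python
import math

def system_height_sigma(sigma_laser_m, sigma_gps_m, sigma_pitch_rad, lever_arm_m):
    """1-sigma terrain-height uncertainty for the combined measurement system.

    Independent errors add in quadrature; the attitude error is first
    transformed into a height error via the small-angle lever-arm approximation.
    """
    sigma_attitude_m = lever_arm_m * sigma_pitch_rad
    return math.sqrt(sigma_laser_m ** 2 + sigma_gps_m ** 2 + sigma_attitude_m ** 2)

sigma = system_height_sigma(0.003, 0.01, 0.001, 2.0)
# The combined system is less precise than any single component, as the
# abstract notes:
assert sigma > max(0.003, 0.01, 0.001 * 2.0)
```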
Al-Jaralla, Reem Abdulla. "Optimal design for Bayesian linear hierarchical models with measurement error." Thesis, Imperial College London, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.248202.
Shryane, Nick. "Human error in the design of a safety-critical system." Thesis, University of Hull, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.418987.
Shin, In Jae. "Development of a theory-based ontology of design-induced error." Thesis, University of Bath, 2009. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.516953.
Ramsey, Jamie L. "Phase optimised general error diffusion for diffractive optical component design." Thesis, University of Strathclyde, 2013. http://oleg.lib.strath.ac.uk:80/R/?func=dbin-jump-full&object_id=22722.
Yang, Sheng. "Error resilient techniques for storage elements of low power design." Thesis, University of Southampton, 2013. https://eprints.soton.ac.uk/355203/.
Chen, Shaoqiang. "Manufacturing process design and control based on error equivalence methodology." [Tampa, Fla] : University of South Florida, 2008. http://purl.fcla.edu/usf/dc/et/SFE0002511.
Davison, Jennifer J. "Response surface designs and analysis for bi-randomization error structures." Diss., This resource online, 1995. http://scholar.lib.vt.edu/theses/available/etd-10042006-143852/.
France, Frederick M. "Design of an algorithm for minimizing Loran-C time difference error." Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1997. http://handle.dtic.mil/100.2/ADA337399.
Thesis advisors, Murali Tummala, Roberto Cristi. Includes bibliographical references (p. 191-192). Also available online.
Yilmaz, Yildiz Elif. "Experimental Design With Short-tailed And Long-tailed Symmetric Error Distributions." Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/12605191/index.pdf.
Lan, Ching Fu. "Design techniques for graph-based error-correcting codes and their applications." Texas A&M University, 2004. http://hdl.handle.net/1969.1/3329.
Sefara, Mamphoko Nelly. "Design of a forward error correction algorithm for a satellite modem." Thesis, Stellenbosch : Stellenbosch University, 2001. http://hdl.handle.net/10019.1/52181.
ENGLISH ABSTRACT: One of the problems with any deep-space communication system is that information may be altered or lost during transmission due to channel noise. Any damage to the bit stream may lead to objectionable visual distortion of images at the decoder. The purpose of this thesis is to design an error correction and data compression algorithm for image protection that allows the communication bandwidth to be better utilized. The work focuses on Sunsat (Stellenbosch Satellite) images as test images. The JPEG 2000 compression algorithm's robustness to random errors was investigated, with emphasis on how much of the image is degraded after compression. Both the error control coding and the data compression strategy were then applied to a set of test images. The FEC algorithm combats some, if not all, of the simulated random errors introduced by the channel. The results show that random errors are corrected by a factor of 100 (×100) on all test images, and that a channel error probability of 10⁻² (10⁻⁴ for the image data) causes little degradation of image quality.
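The Sefara abstract above describes forward error correction for satellite image data but does not name the specific code. As a minimal illustration of the principle (an assumption for illustration, not the thesis's actual code choice), here is a Hamming(7,4) codec that corrects any single bit error per 7-bit block:

```python
def encode(nibble):
    """Encode a 4-bit value into a Hamming(7,4) codeword (list of 7 bits)."""
    d = [(nibble >> i) & 1 for i in range(4)]  # data bits d0..d3
    p1 = d[0] ^ d[1] ^ d[3]                    # parity over positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]                    # parity over positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]                    # parity over positions 4,5,6,7
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def decode(bits):
    """Correct up to one flipped bit, then return the 4-bit data value."""
    b = bits[:]
    s1 = b[0] ^ b[2] ^ b[4] ^ b[6]
    s2 = b[1] ^ b[2] ^ b[5] ^ b[6]
    s3 = b[3] ^ b[4] ^ b[5] ^ b[6]
    syndrome = s1 + 2 * s2 + 4 * s3            # 1-based position of the error
    if syndrome:
        b[syndrome - 1] ^= 1                   # flip the erroneous bit back
    return b[2] | (b[4] << 1) | (b[5] << 2) | (b[6] << 3)

# Any single channel error in a block is corrected:
for nibble in range(16):
    for pos in range(7):
        cw = encode(nibble)
        cw[pos] ^= 1
        assert decode(cw) == nibble
```

Real satellite links typically use much stronger codes (convolutional or Reed-Solomon); the point here is only the detect-and-correct mechanism via a syndrome.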
Shih, Che-Hua, and 石哲華. "HDL Design Error Diagnosis." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/98192058687103527305.
National Chiao Tung University, Department of Electronics Engineering, ROC year 90.
The growing complexity of modern designs makes design error diagnosis a challenge for designers when a mismatch occurs between an HDL implementation and its design specification. In this thesis, we propose an efficient approach for automatic design error diagnosis. The approach can handle multiple errors occurring simultaneously in an HDL design with only one test case, by analyzing the simulation outputs of the incorrect implementation. Furthermore, it reduces the error space by eliminating statements that have little or no possibility of being error sources, while retaining at least one error source. Hence, the effort spent on the debugging process is reduced. Experiments are conducted on some real designs, and the experimental results are very promising, yielding a smaller error space.
Lliu, Ming Yu, and 劉明諭. "Soft error tolerant latch design." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/65134675597664163016.
Chang Gung University, Department of Electrical Engineering, ROC year 99.
With the progress of process technology, transistor density has increased and supply voltage has been scaled down, leading to higher soft error rates. Reliability has therefore become a main challenge in IC design. Because latch circuits are especially sensitive to soft errors, this thesis proposes two soft-error-tolerant latch designs to enhance reliability. The first is an XOR-gate-based SEU-tolerant latch, modified from the state-of-the-art FERST design. By replacing the C-element in the redundant path with an XOR gate and adding a feedback loop at the output terminal, it achieves higher soft error tolerance with lower short-circuit power, shorter critical path delay, and lower power-delay product. The second is an isolation-type soft-error-tolerant latch based on a preservation mechanism comprising a preservation block, a decision block, and a feedback block, which achieves information redundancy with lower performance overhead. To keep soft errors from affecting the internal nodes of the C-element, the preservation and feedback blocks increase the critical charge of those nodes, lowering the SEU rate of the whole system. As a result, we achieve better soft error tolerance while sacrificing less power-delay product than other isolation-type SEU-tolerant latch designs. In the TSMC 90 nm process, the PDP of the proposed XOR-gate-based SEU-tolerant latch is 1.12 fJ, a 39.7% improvement over the FERST design; applied to the ISCAS'85 benchmark circuits, this latch improves SER by 74.3% compared with a conventional latch. In the same process, the PDP of the proposed isolation-type soft-error-tolerant latch is 1.02 fJ, a 45.1% improvement over FERST.
Applied to the ISCAS'85 benchmark circuits, the isolation-type latch improves SER by 58.3% compared with a conventional latch.
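Both latches above build on the Muller C-element, whose behavior the abstract describes: the output follows the inputs only when they agree, so a transient on a single redundant path is held off rather than latched. A minimal behavioral model:

```python
class CElement:
    """Behavioral model of a Muller C-element (state-holding agreement gate)."""

    def __init__(self, init=0):
        self.out = init

    def update(self, a, b):
        if a == b:           # inputs agree: output follows them
            self.out = a
        return self.out      # inputs disagree: hold the previous value

c = CElement()
assert c.update(1, 1) == 1   # both redundant paths carry new data: output updates
assert c.update(0, 1) == 1   # an SEU flips one path only: the glitch is masked
assert c.update(0, 0) == 0   # paths agree again: output updates normally
```

This also shows the design trade-off the thesis works around: the C-element itself has internal nodes whose upset would defeat the filtering, hence the preservation blocks that raise their critical charge.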
You-Cheng Hsiao and 蕭侑晟. "Error Compensation Design for Optical Encoders." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/55693703859498686434.
National Cheng Kung University, Department of Systems and Naval Mechatronic Engineering, ROC year 104.
To improve the tolerance of grating miniature optical encoders to environmental noise in real time, and to meet the ultra-high accuracy requirements of robots, CNC machines, and other equipment, this study uses three methods to reduce the measurement error caused by dirt, vibration, and component assembly misalignment. The main idea behind these methods is to use the measured signals, which contain phase errors, to remove the noise and recover the original signals, based on the parameter solution of a nonlinear system error model formulated from the relationship between input and output signals. The three methods can be briefly expressed as: 1. correcting phase errors via an inversion method on the collected raw input data; 2. using an FFT to calculate the spectrum of the input signals, from which a phase difference is obtained for correcting the phase error; and 3. calculating the geometric relationship of the Lissajous figure of the two input signals based on Pascal's theorem, searching for the optimal parameters of the nonlinear system error model to reduce the phase error effectively. To verify the compensation performance, the three methods were first simulated in Matlab. The simulation results show almost the same compensation performance for all three methods; however, the inversion and FFT methods suffer a computational burden due to their complicated processing. The Pascal's-theorem-based method was therefore selected for practical implementation due to its simple computation.
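The Lissajous-based compensation described above fits the ellipse traced by the two quadrature encoder signals and inverts the distortion. The sketch below shows only the inversion step of the standard offset/gain/phase distortion model, with the fitted parameters (offsets p, q; gains a, b; phase error phi) assumed already known; the parameter values are illustrative, not those of the thesis:

```python
import math

def correct(u, v, p, q, a, b, phi):
    """Map distorted quadrature samples (u, v) back onto the unit circle."""
    x = (u - p) / a                                        # remove offset and gain
    y = ((v - q) / b - x * math.sin(phi)) / math.cos(phi)  # remove the phase error
    return x, y

# Distorted quadrature pair with offsets (p, q), gains (a, b), phase error phi:
p, q, a, b, phi = 0.1, -0.05, 1.2, 0.8, 0.15
for theta in [0.0, 0.7, 2.0, 4.5]:
    u = a * math.cos(theta) + p
    v = b * math.sin(theta + phi) + q
    x, y = correct(u, v, p, q, a, b, phi)
    # The recovered phase matches the true shaft angle:
    assert abs(math.atan2(y, x) % (2 * math.pi) - theta % (2 * math.pi)) < 1e-9
```

Estimating p, q, a, b, phi in the first place is the fitting problem the three methods in the abstract address in different ways.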
徐璠. "Optimal boresight error design of radomes." Thesis, 1994. http://ndltd.ncl.edu.tw/handle/51105471778967556371.
Ho, Chien-Peng, and 何健鵬. "Efficient Error-Tolerant Design for Scalable Video Transmission over Error-Prone Channels." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/95685076040640021755.
National Chiao Tung University, Institute of Computer Science and Engineering, ROC year 100.
Due to the growing maturity of broadband services, multimedia streaming systems and peer-to-peer on-demand services have gained vast popularity in recent times, and stable, reliable transmission of multimedia data is becoming increasingly important for communication over networks subject to packet erasures. To achieve stability and reliability, efficient fault-tolerance and error-resilience methods for multimedia communication combine analytical and numerical approaches so as to attain multi-objective performance metrics. The characteristics of video traffic differ substantially from those of traditional data traffic in four ways. First, packet loss is the major cause of nondeterministic distortion on the Internet and may have a significant impact on the perceptual quality of streaming video. Second, the aggregate bandwidth requirements of video-on-demand services are still far in excess of the existing communication network infrastructure. Third, although buffering on the client side provides an opportunity to absorb variations in transmission rates, it is not sufficient to guarantee the service quality of multimedia streams such as IPTV (Internet Protocol television) and VoIP (Voice over IP). Finally, most compressed media are transmitted over lossy, error-prone networks, and a certain degree of quality degradation is tolerable where the noise stays below the threshold of human visual perception. Thus, video transmission based on scalable coding and unequal error protection codes is one approach to maintaining acceptable media quality in a network. Error control for video communication and resource allocation in peer-to-peer multimedia systems remain open issues and are the focus of this work. In this thesis, we build scalable, error-resilient, high-performance multimedia frameworks that adapt to changing network conditions.
We developed a framework of fine-level packetization schemes for streaming 3D wavelet-based video content over lossy packet networks. An adaptive fine-granularity unequal error protection algorithm was proposed that trades off rate against distortion and jointly adapts the scalable source coding rate and the level of FEC protection. Experimental results show that the proposed framework strikes a fine balance between reconstructed video quality and the level of error protection under time-varying lossy channels. In the study of P2P video streaming, we developed a replication strategy that optimizes resource allocation based on a video-distortion technique for unstructured P2P overlay networks. Failure recovery is accomplished by distributing high-quality-impact, popular replicas to regions of low peer density or discontinuous areas. The results demonstrate the efficiency and robustness of the proposed method at compensating for network-induced errors, and the framework can be applied at a range of scales of free-riding peers. Moreover, the proposed algorithm handles the load imposed on the system efficiently and improves average visual quality across the overall system.
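The unequal-error-protection idea in the Ho abstract is that more important layers of a scalable stream receive more FEC redundancy than enhancement layers. A minimal sketch of one way to split a parity budget by layer importance (the proportional-then-greedy rule and the layer weights are illustrative assumptions, not the thesis's adaptive algorithm):

```python
def allocate_fec(importance, parity_budget):
    """Split `parity_budget` parity packets across layers by importance weight."""
    total = sum(importance)
    alloc = [parity_budget * w // total for w in importance]
    # Hand out any rounding remainder to the most important layers first.
    leftover = parity_budget - sum(alloc)
    for i in sorted(range(len(importance)), key=lambda i: -importance[i]):
        if leftover == 0:
            break
        alloc[i] += 1
        leftover -= 1
    return alloc

# Base layer (weight 5) gets the most parity, enhancement layers less:
assert allocate_fec([5, 3, 1], 9) == [5, 3, 1]
assert allocate_fec([3, 2, 1], 7) == [4, 2, 1]
```

An adaptive scheme like the one described above would additionally re-derive the weights from the rate-distortion impact of each layer and the current channel loss rate.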
Kao, Tien-Tsai, and 高天財. "Error Tolerability Analysis and Error-Tolerant Design Investigation of A JPEG2000 Image Encoder." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/90365775626282384297.
National Sun Yat-sen University, Department of Electrical Engineering, ROC year 101.
JPEG2000 is an image compression standard formulated by the Joint Photographic Experts Group in 2000. The standard has two compression modes: lossless and lossy. Compared with the JPEG standard, JPEG2000 achieves a higher compression ratio at the same compressed-image quality, and no blocking artifacts are generated after compression. As transistors shrink, the problems of low yield, low reliability, and short lifetime become more serious. Conventional test methods, which do not consider human insensitivity to minor noise in audio or video signals, may discard many electronic products containing some manufacturing defects. Error tolerance, which aims to identify not only defect-free chips but also acceptable ones among the parts discarded by conventional test methodologies, is a promising way to improve effective chip yield. In this thesis, we analyze the effects of defects in a JPEG2000 image compression circuit on image quality, focusing on the arithmetic computation circuitry of a JPEG2000 encoder, namely the discrete wavelet transform and quantization modules. We inject faults into these two parts and carefully discuss the resulting fault effects in terms of PSNR (peak signal-to-noise ratio) and SSIM (structural similarity). The experimental results show that some defective chips can be accepted, and we classify them into levels for different applications. We also provide re-design suggestions to reduce cost and raise yield.
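PSNR, the first of the two quality metrics used above to grade fault effects, is straightforward to compute; for 8-bit images it is 10·log10(255²/MSE), so higher values mean the defective encoder's output is closer to the fault-free reference. The pixel values below are illustrative:

```python
import math

def psnr(reference, test, peak=255):
    """Peak signal-to-noise ratio between two equal-length pixel sequences."""
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    if mse == 0:
        return float("inf")                # identical images
    return 10 * math.log10(peak ** 2 / mse)

ref = [52, 55, 61, 66, 70, 61, 64, 73]    # fault-free output (illustrative)
ok  = [52, 55, 61, 66, 70, 61, 64, 73]
bad = [52, 55, 61, 66, 70, 61, 64, 105]   # one pixel corrupted by a fault
assert psnr(ref, ok) == float("inf")
assert 25 < psnr(ref, bad) < 35           # around 27 dB for this single error
```

SSIM, the thesis's second metric, additionally models local luminance, contrast, and structure, which is why it tracks perceived quality better than PSNR alone.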
Hsu, Cheng-Chih, and 許正治. "Design and error analysis of diffractive elements." Thesis, 1999. http://ndltd.ncl.edu.tw/handle/80006703676391084569.
Chung Yuan Christian University, Department of Physics, ROC year 87.
The design of hybrid elements, which combine a conventional refractive surface with a surface-relief diffractive structure, is a new technique. The potential advantages of hybrid elements, high diffraction efficiency and a wide range of design parameters, make them a powerful technique today. Precision in manufacturing the diffractive profile is crucial, since efficiency is reduced by manufacturing errors. Therefore, error analysis, as well as design, is discussed in this thesis. Finally, a single diffractive element is designed to replace a traditional laser disk lens; its image quality is better than that of the traditional lens. In the future, diffractive elements could be widely used in other optical systems.
Varatkar, Girish Vishnu. "Energy-efficient and error-tolerant digital design /." 2008. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3314923.
Source: Dissertation Abstracts International, Volume: 69-05, Section: B, page: 3194. Adviser: Naresh R. Shanbhag. Includes bibliographical references (leaves 99-105). Available on microfilm from ProQuest Information and Learning.
Yin, Yu-Fan, and 尹煜帆. "Error Candidate Reduction in Automated Design Debugging." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/28740053744870269145.
National Taiwan University, Graduate Institute of Electronics Engineering, ROC year 100.
Given an erroneous design, functional verification returns an error trace containing a mismatch between the specification and the implementation. Automated design debugging uses this error trace to identify candidates causing the error. There are remarkable debugging works that handle large designs and long error traces; however, the quality of the error candidates remains poor, and it is hard for designers to locate the actual error source among hundreds or thousands of candidates. This thesis proposes a two-stage debugging framework that reduces the number of error candidates. The first stage performs a conventional debugging algorithm to obtain initial error candidates. In the second stage, alternative test sequences are generated by error injection, state selection, and error-propagation-path differentiation techniques, and are then validated to produce alternative error traces. After debugging with these traces, redundant candidates can be removed if they do not lie in the intersection of the original candidate set and the new candidate sets. Experimental results show that the proposed algorithm reduces more than 75% of error candidates, demonstrating the viability of this approach in improving design debugging techniques.
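The candidate-reduction step in the Yin abstract amounts to a set intersection: each error trace implicates its own suspect set, and only candidates implicated by every trace survive. A minimal sketch (the signal names and suspect sets are illustrative assumptions):

```python
def reduce_candidates(initial, alternative_sets):
    """Keep only candidates implicated by the original trace and every alternative trace."""
    survivors = set(initial)
    for s in alternative_sets:
        survivors &= set(s)  # a true error source must explain every trace
    return survivors

initial = {"u1.q", "u2.sel", "u3.en", "u4.d"}   # suspects from the original trace
alt1 = {"u2.sel", "u3.en", "u4.d"}              # suspects from alternative trace 1
alt2 = {"u2.sel", "u4.d"}                       # suspects from alternative trace 2
assert reduce_candidates(initial, [alt1, alt2]) == {"u2.sel", "u4.d"}
```

The hard part, which the thesis addresses, is generating alternative traces that actually discriminate between candidates rather than re-implicating the same set.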
Chien, Po-Hao, and 簡伯豪. "Soft-Error Resilient SRAM by Error-Correction Code Design and Implementation for Satellite Application." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/94750821450233630904.
Full textZhang, Ming. "Analysis and design of soft-error tolerant circuits /." 2006. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3223763.
Full textSource: Dissertation Abstracts International, Volume: 67-07, Section: B, page: 4023. Adviser: Naresh R. Shanbhag. Includes bibliographical references (leaves 105-111) Available on microfilm from Pro Quest Information and Learning.
Peng, Chien Chang, and 彭建彰. "Design of Soft Error Tolerant Circuits and Architectures." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/4qfbyx.
Wu, Chu-Wen, and 吳主文. "VLSI Design of Timing-Error-Resilient Sorting Hardware." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/frn692.
National Dong Hwa University, Department of Electrical Engineering, ROC year 104.
As chip feature sizes shrink with advances in semiconductor technology, transistor sizes and operating voltages keep decreasing. One major problem with advanced semiconductor technology is timing errors caused by variation of process, supply voltage, and temperature (PVT); device aging can also cause timing errors. Conventional worst-case designs avoid these problems only at the cost of poor system performance: a worst-case frequency ensures correctness but sacrifices performance significantly, while at an aggressive frequency a timing error makes the circuit's computed result incorrect. Hence, the design of timing-error-resilient VLSI circuits under aggressive design approaches is increasingly important. This thesis proposes a technique for aggressive VLSI design of timing-error-resilient sorting hardware with error detection and fault tolerance: even if timing errors occur, the design still operates and produces correct output. At the expense of a small amount of chip area and power consumption, it achieves circuit reliability while keeping high performance. We applied the technique to three sorting algorithms: Bubble Sort, Odd-Even Sort, and Bitonic Sort. Two versions of each sorter were implemented for comparison: the original sorting hardware and a version with timing-error-tolerant capability. The implementation results show that the proposed designs tolerate timing errors at reasonable cost.
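The detect-and-recover scheme described in the Wu abstract can be illustrated with a software analogue: an aggressively clocked datapath occasionally produces a wrong result, which a cheap checker detects so the stage can be replayed. The fault model below (a random adjacent swap with some probability) is an illustrative assumption, not the thesis's hardware mechanism:

```python
import random

def unreliable_sort(data, rng, error_prob=0.3):
    """Sort, but occasionally corrupt the result to model a timing error."""
    out = sorted(data)
    if len(out) > 1 and rng.random() < error_prob:
        i = rng.randrange(len(out) - 1)
        out[i], out[i + 1] = out[i + 1], out[i]  # inject an adjacent swap
    return out

def resilient_sort(data, rng):
    """Detect a corrupted result with a sortedness check and replay the stage."""
    while True:
        out = unreliable_sort(data, rng)
        if all(a <= b for a, b in zip(out, out[1:])):  # checker passes
            return out                                  # else: replay

rng = random.Random(42)
for _ in range(100):
    data = [rng.randrange(100) for _ in range(10)]
    assert resilient_sort(data, rng) == sorted(data)
```

For sorting, the checker is much cheaper than the computation itself, which is what makes this style of resilience attractive; hardware schemes such as double-sampling registers detect the error at the flip-flop level instead.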
Peng, Shih-Lun, and 彭士倫. "A Soft Error Tolerant D Flip-Flop Design." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/56202763496143267495.
Full textNational Taiwan University
Graduate Institute of Electronics Engineering
Academic year 100 (2011-2012)
In recent years, soft errors have become an important reliability issue, causing severe problems especially for memories and flip-flops. When an energetic particle strikes a device, a transient pulse occurs; if this pulse flips the data stored in a memory or flip-flop, a soft error results. Soft errors are classified into Single Event Transients (SETs), which occur in combinational logic, and Single Event Upsets (SEUs), which occur in memory elements such as flip-flops. Many designs have been proposed to tolerate soft errors in flip-flops; however, they usually carry large performance penalties and area overheads, which makes them impractical for simple circuits. In this thesis, we propose a new flip-flop, the Soft Error Tolerant D Flip-Flop (SETDFF), which tolerates SEUs and has some SET tolerance. The SETDFF cell achieves the same performance as a general D flip-flop with lower area overhead. Our SETDFF design uses a C-element to tolerate soft errors: the two inputs of the C-element come from different signals carrying the same value. When an SEU or SET occurs, the inputs of the C-element disagree, and the C-element rejects the transient pulse and blocks the soft error. Experimental results show that, compared with the BISER architecture proposed in prior work, our SETDFF design achieves similar SEU tolerance and better SET tolerance, with 13% less performance penalty and 70% less area overhead. In our SETDFF design, the soft error rate (SER) depends on the input arrival time. We therefore propose the Soft Error Tolerant Time (SETT): as long as the data arrives within this time period, the SER is guaranteed to stay under a threshold. Circuit designers can balance performance against SER using this information.
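The filtering behavior of the C-element described above can be modeled in a few lines: the output follows the inputs only when they agree, and holds its previous value otherwise, which is exactly why a transient on one of two redundant copies is blocked. A minimal behavioral sketch (not the transistor-level cell from the thesis):

```python
class CElement:
    """Behavioral Muller C-element.

    Two redundant signals drive the inputs. While they agree, the output
    tracks them; while they disagree (e.g. one copy is hit by a radiation-
    induced transient pulse), the previous output is held, so the glitch
    never propagates to the stored value.
    """

    def __init__(self, initial=0):
        self.out = initial

    def update(self, a, b):
        if a == b:          # inputs agree: pass the value through
            self.out = a
        return self.out     # inputs disagree: hold previous output
```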
Yen, Chia-Chih, and 顏嘉志. "Algorithms for Efficient Design Error Detection and Diagnosis." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/06604751304421872000.
Full textNational Chiao Tung University
Department of Electronics Engineering
Academic year 93 (2004-2005)
Functional verification now accounts for most of the time spent in product development due to the increasing complexity of modern ASIC and SoC designs. Assertion-based verification (ABV) helps design teams identify and debug design errors more quickly than traditional techniques: by embedding assertions in a design and having them monitor design activities, it compares the implementation of a design against its specified assertions. As a result, ABV is recognized as a critical component of the design verification process. In general, detecting and diagnosing design errors play the most important roles in ABV. Unfortunately, the proposed techniques for design error detection cannot keep up with the rapid growth of design complexity, and the generated error traces are usually so lengthy that diagnosing counterexamples becomes tedious and difficult. In this dissertation, we focus on three strategies that address the problem of efficiently detecting design errors and easing the debug process. We first propose a practical cycle-bound calculation algorithm for guiding bounded model checking (BMC). Many reports have shown the effectiveness of BMC in design error detection; however, the flaw of BMC is that it needs a pre-computed bound to ensure completeness. To determine the bound, we develop a practical branch-and-bound approach. We reduce the search space by applying a partitioning method as well as a pruning method. Furthermore, we propose a novel formulation and use a SAT solver to search states and thus determine the cycle bound. Experimental results show that our algorithm considerably outperforms previous work. We then propose an advanced semi-formal verification algorithm for identifying hard-to-detect design errors. Generally, semi-formal verification combines simulation-based and formal methods to tackle tough verification problems in a real industrial environment.
Nevertheless, a rigid combination of these heterogeneous methods cannot keep pace with the rapid growth of design complexity. Therefore, we propose an algorithm that follows a divide-and-conquer paradigm to orchestrate those approaches. We introduce a partitioning technique to recursively divide a design into smaller components, and we present approaches for handling each divided component efficiently while keeping the entire design functionally correct. Experimental results demonstrate that our strategy detects many more design errors than traditional methods. Finally, we propose powerful error-trace compaction algorithms to ease design error diagnosis. The error traces used to exercise and observe design behavior are usually so lengthy that designers must spend considerable effort to understand them. To alleviate this burden, we present a SAT-based algorithm for reducing the lengths of the error traces. The algorithm recursively halves the search space and is guaranteed to find the shortest length. Based on this optimal algorithm, we also develop two robust heuristics to handle real designs. Experimental results indicate that our approaches greatly surpass previous work and give promising results.
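The "halve the search space" idea for trace-length reduction can be sketched as a binary search over candidate lengths, assuming monotonicity (if an error is reachable within k cycles, it stays reachable within any larger bound). Here `reaches_error` is a hypothetical oracle standing in for the SAT/BMC query used in the dissertation:

```python
def shortest_trace_length(reaches_error, upper_bound):
    """Binary-search the minimum error-trace length.

    `reaches_error(k)` abstracts a SAT/BMC query "is the error state
    reachable within k cycles?" (hypothetical stand-in for the solver).
    Assumes monotonicity: reaches_error(k) implies reaches_error(k + 1).
    """
    lo, hi = 1, upper_bound
    while lo < hi:
        mid = (lo + hi) // 2
        if reaches_error(mid):
            hi = mid        # a witness of length <= mid exists
        else:
            lo = mid + 1    # every trace up to mid misses the error
    return lo
```

Each probe discards half of the remaining candidate lengths, so only O(log upper_bound) solver calls are needed instead of one per length.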
Liao, Jing-Cheng, and 廖經晟. "Error Diffusion Kernel Design to Improve Halftone Quality." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/04564995140133948501.
Full textNational United University
Master's Program, Department of Electronic Engineering
Academic year 94 (2005-2006)
The image quality of error-diffusion (ED) halftoning can be improved by estimating better error diffusion coefficients. In this thesis, we propose two algorithms to obtain better halftone quality: one modifies the quantizer threshold in ED, and the other fits an ED model to direct binary search (DBS) halftones. In the threshold-modification approach, we use the information of the previous quantizer error, adjust it by a constant, and add the result to the quantizer input to obtain better halftone quality. Additionally, we adopt the IBLS algorithm to estimate an ED kernel that yields a similar halftone when used in the ED process. In the ED model fitting of DBS, we take the halftone produced by the DBS process, adjust the error signal of the ED process by iterative comparison, and finally use IBLS to estimate the ED coefficients. The experimental results show clearly that the ED coefficients so obtained can resolve particular ED artifacts, such as directional lines. Furthermore, we also produce a series of ED kernels estimated from halftones of different resolutions to form a table, which can be used for halftoning at different resolutions via simple table lookup.
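For readers unfamiliar with the baseline this thesis improves on, the ED process itself is short: quantize each pixel, then push the quantization error onto unprocessed neighbors through a kernel. The sketch below uses the classic Floyd-Steinberg kernel as an illustration, not the optimized coefficients estimated in the thesis:

```python
def error_diffusion(image, threshold=128):
    """1-bit halftoning with the classic Floyd-Steinberg kernel
    (7/16 right, 3/16 down-left, 5/16 down, 1/16 down-right).

    `image` is a list of rows of grey levels in [0, 255]; returns a
    same-shaped image whose pixels are 0 or 255.
    """
    h, w = len(image), len(image[0])
    buf = [[float(v) for v in row] for row in image]
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            old = buf[y][x]
            new = 255 if old >= threshold else 0   # quantizer
            out[y][x] = new
            err = old - new                        # quantization error
            if x + 1 < w:
                buf[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    buf[y + 1][x - 1] += err * 3 / 16
                buf[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    buf[y + 1][x + 1] += err * 1 / 16
    return out
```

Because the diffused error compensates each quantization decision, the local average of the binary output tracks the local average of the input, which is what makes the choice of kernel coefficients (the thesis's subject) matter for texture quality.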
Peng, Guan-Lin, and 彭冠霖. "VLSI Design of a Timing-Error-Tolerant Microprocessor." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/05087571276420541666.
Full textNational Dong Hwa University
Department of Computer Science and Information Engineering
Academic year 104 (2015-2016)
Due to the continued advance of semiconductor technology, the size of transistors and their operating voltage keep decreasing. Hence, the susceptibility of wires and circuits to noise, wire delay, and soft errors is getting worse. One of the resulting challenges is timing errors: they may happen when transmitted data arrive later than the clock edge or simply have insufficient setup time. For VLSI circuits in advanced manufacturing processes, timing errors either reduce the operational reliability of circuits or force us to tolerate a pessimistically slow clock. One solution to these problems is error-resilient design, in which VLSI circuits can detect and even correct errors. Such design is increasingly important for advanced microprocessors in many recent applications. This master's thesis employs timing-error-tolerant circuits in our 5-stage pipelined microprocessor with a 32-bit reduced MIPS instruction set. We implement our design using the cell-based IC design flow with the Verilog hardware description language. We have run extensive simulation to validate the timing-error-tolerant capability. We then use logic synthesis to generate the circuit and static timing analysis to verify that the timing delay meets our goal; the final steps are automatic place and route and physical verification. We measure the cost of the timing-error-tolerant capability and show that our design can improve the test coverage of the microprocessor at a reasonable cost.
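One common way such pipelines detect late-arriving data is double sampling (as in Razor-style designs): a shadow element samples the same signal slightly after the main flip-flop, and disagreement flags a timing error. The behavioral model below illustrates that idea only; it is an assumption about the general technique, not the thesis's specific circuit:

```python
def pipeline_sample(arrival_time, clock_period, shadow_delay, new_value, old_value):
    """Behavioral model of double-sampling timing-error detection.

    The main flip-flop samples at the clock edge (time `clock_period`);
    a shadow element samples `shadow_delay` later.  If data arrives after
    the main edge but before the shadow sample, the two copies disagree:
    a timing error is flagged and the shadow value can be used to recover.
    Arrivals later than the shadow window fall outside the scheme's
    detection guarantee (both samples see the old value).
    """
    main = new_value if arrival_time <= clock_period else old_value
    shadow = new_value if arrival_time <= clock_period + shadow_delay else old_value
    error = main != shadow
    return main, shadow, error
```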
Luo, Wen-Hua, and 駱文華. "Chip Design of a Burst-Error-Correcting Viterbi Decoder." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/94074650062852055656.
Full textNational Taipei University of Technology
Institute of Computer, Communication and Control
Academic year 89 (2000-2001)
This thesis proposes a chip design of a burst-error-correcting Viterbi decoder that can be applied to both random- and burst-error channels. We first propose two new algorithms: a burst-error-alarm algorithm that detects burst errors, and a burst-error-recovery algorithm that uses a recovery circuit to replace the data corrupted by burst errors; the recovered information is then sent to the Viterbi decoder again to correct any remaining random errors. To implement these algorithms, we use a 0.35 μm 1P4M silicide process to design a (2,1,7) (Q=8) burst-error-correcting Viterbi decoder chip. The decoder is composed of three circuit blocks: a (2,1,7) Viterbi decoder, a burst-error-alarm/burst-error-recovery circuit, and a control circuit. The chip contains 630K transistors and occupies an area of 3.0 mm by 3.1 mm. Experimental results show that, at 3.3 V, the decoder achieves decoding rates of 106 Mb/s for random errors and 69.8 Mb/s for burst errors, with a net coding gain of 3.6 dB at a bit error rate of 10^-4. Moreover, since we use a modular and hierarchical design methodology, the architecture can easily be extended to larger Viterbi decoders that meet the requirements of modern communication systems.
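To make the random-error-correcting core concrete, here is a minimal hard-decision Viterbi decoder for a toy (2,1,3) convolutional code with generators 7 and 5 (octal), much smaller than the thesis's (2,1,7) design but structurally the same: branch metrics by Hamming distance, survivor paths per state, and a flush to state 0:

```python
# Rate-1/2, constraint-length-3 convolutional code, generators 7 and 5 (octal).
G = (0b111, 0b101)

def conv_encode(bits):
    """Encode and flush with two zero tail bits so the trellis ends in state 0."""
    state, out = 0, []
    for b in list(bits) + [0, 0]:
        reg = (b << 2) | state                       # 3-bit shift register
        out += [bin(reg & g).count("1") & 1 for g in G]  # parity per generator
        state = reg >> 1
    return out

def viterbi_decode(received):
    """Hard-decision Viterbi: keep the minimum-Hamming-distance survivor
    path into each state, then read out the path ending in state 0."""
    metric, paths = {0: 0}, {0: []}
    for t in range(len(received) // 2):
        r = received[2 * t: 2 * t + 2]
        new_m, new_p = {}, {}
        for state, m in metric.items():
            for b in (0, 1):
                reg = (b << 2) | state
                expect = [bin(reg & g).count("1") & 1 for g in G]
                cand = m + sum(x != y for x, y in zip(expect, r))
                ns = reg >> 1
                if ns not in new_m or cand < new_m[ns]:
                    new_m[ns], new_p[ns] = cand, paths[state] + [b]
        metric, paths = new_m, new_p
    return paths[0][:-2]  # drop the two flush bits
```

Since this code has free distance 5, the terminated decoder corrects any single channel bit error, which the test below exercises.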
Chang, Ming-Da, and 張鳴達. "A Parallel Error Tolerance System Design for JPEG2000 Applications." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/u6mgq3.
Full textNational Dong Hwa University
Department of Electrical Engineering
Academic year 96 (2007-2008)
Multimedia applications increasingly demand both high resolution and high quality. Besides JPEG, the JPEG2000 standard, introduced in 2000, can serve high-quality images at low bit rates. JPEG2000 is the next-generation still-image compression standard, designed for a broad range of data compression applications. The standard is based on wavelet technology and layered coding in order to provide a feature-rich compressed image stream. The discrete wavelet transform (DWT) is the main part of the JPEG2000 standard; it uses the lifting structure to perform the forward and inverse transforms. After the DWT, signals are split into different subbands, which contain both time and frequency information and provide another way to analyze signals. However, implementations of the JPEG2000 codec are susceptible to computer-induced soft errors, so designing a fault-tolerant system with testability becomes an important issue. This thesis proposes a real-time, efficient fault-tolerance scheme with a pipelined architecture for the JPEG2000 image compression system. It detects and tests the DWT, the most important part of JPEG2000, and develops a mathematical proof that previously could not be realized as hardware or circuitry. Besides, using the original transform processing of the DWT together with tolerance weighting and the pipelined fault-tolerance architecture, the system can test the DWT while it is processing, achieving real-time fault-tolerance testing. The system targets soft errors occurring in the hardware circuit as well as computer-induced soft errors, performing both testing and fault tolerance.
This thesis, "A Parallel Error Tolerance System Design for JPEG2000 Applications", concentrates on analyzing fault effects in the discrete wavelet transform and on using pipelined fault-tolerance testing to improve the reliability of the JPEG2000 image compression system and provide high-quality images.
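The lifting structure the abstract refers to factors the DWT into alternating predict and update steps. JPEG2000 itself uses the 5/3 and 9/7 lifting filters; the Haar transform below is only the simplest possible illustration of the same predict/update pattern, with an inverse obtained by running the steps backwards:

```python
def haar_lifting_forward(x):
    """One level of the Haar wavelet via lifting.

    Predict: each odd sample is predicted by its even neighbor;
    the residual becomes the detail (high-pass) coefficient.
    Update: the even samples are corrected so the approximation
    (low-pass) coefficients are the pairwise averages.
    Input length must be even; returns (approx, detail).
    """
    even, odd = x[0::2], x[1::2]
    detail = [o - e for e, o in zip(even, odd)]         # predict step
    approx = [e + d / 2 for e, d in zip(even, detail)]  # update step
    return approx, detail

def haar_lifting_inverse(approx, detail):
    """Invert by undoing update then predict, then re-interleave."""
    even = [a - d / 2 for a, d in zip(approx, detail)]
    odd = [e + d for e, d in zip(even, detail)]
    x = []
    for e, o in zip(even, odd):
        x += [e, o]
    return x
```

Because each lifting step is trivially invertible, the factored form reconstructs the signal exactly, which is also what makes lifting hardware-friendly for the in-place pipelined datapaths discussed above.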
Hsiao, Wen-Rue, and 蕭文瑞. "Adaptive Compensator Design for Missile Radome Refraction Slope Error." Thesis, 1998. http://ndltd.ncl.edu.tw/handle/39740344454342902219.
Full textLu, Li-Yu, and 盧力瑀. "Design and Implementation of Error Rollback Mechanism for MapReduce." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/06903390387172592623.
Full textShu-Te University
Master's Program, Department of Computer Science and Information Engineering
Academic year 101 (2012-2013)
With the vigorous development of the Internet, Internet access has become part of people's lives, so it is important to use the huge resources on the Internet effectively. Traditional computing is not applicable to today's great amounts of data; consequently, processing data with distributed-computing architectures has become the trend. This research studies Apache Hadoop, an open-source platform that uses a parallel-computing architecture. In the default Hadoop MapReduce architecture, a JobTracker failure interrupts the computation; moreover, when the JobTracker restarts, Hadoop cannot return to the point where the computation was interrupted. Therefore, this research proposes a framework based on a queue and Memcache, so that when the JobTracker fails, the system can still return to its original state.
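The rollback idea can be sketched as a task queue whose progress marker lives in an external store, so a restarted master resumes from the last completed task rather than rerunning the whole job. This is a toy illustration of the mechanism, with a plain dict standing in for Memcache and a simulated crash standing in for a JobTracker failure; it is not the thesis's actual Hadoop implementation:

```python
class CheckpointQueue:
    """Task queue with externally stored progress for crash recovery.

    `store` stands in for Memcache (a dict keeps the sketch self-contained):
    after each task completes, the index of the next task is written to it,
    so a new master instance can resume exactly where the old one stopped.
    """

    def __init__(self, tasks, store):
        self.tasks = list(tasks)
        self.store = store

    def run(self, worker, fail_after=None):
        done = self.store.get("done", 0)          # resume point from the store
        for i in range(done, len(self.tasks)):
            if fail_after is not None and i >= fail_after:
                raise RuntimeError("master failed")  # simulated JobTracker crash
            worker(self.tasks[i])
            self.store["done"] = i + 1            # checkpoint after each task
```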
Pieterse, Hein. "Towards guidelines for error message design in digital systems." Diss., 2016. http://hdl.handle.net/2263/57499.
Full textDissertation (MIT)--University of Pretoria, 2016.
Department of Informatics
MIT
Unrestricted
Ye, Jian-Hong, and 葉建宏. "Mechanical Design and Error Compensation for a Carpentry Machine." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/90757538120339371530.
Full textChung Yuan Christian University
Graduate Institute of Mechanical Engineering
Academic year 104 (2015-2016)
This thesis presents the design of a carpentry machine and analyzes its processing path. The carpentry machine is designed as a gantry platform in which two servo motors drive the cutting tools in the X and Y directions; on the Z axis of the gantry platform, a hacksaw machine and a drilling machine are installed for cutting and drilling. When the machine cuts wood, cutting vibration, tool wear, disturbances, and other effects produce contour error, so the actual path deviates from the preset path and machining accuracy is reduced. To address the contour error arising from the lack of coordination in multi-axis motion and from errors between the motion path and the processing path, this study uses the cross-coupled control (CCC) method to compensate for and reduce the contour error, so that the actual cutting path approximates the preset path.
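The core of cross-coupled control is that the two axes share a single contour-error signal: the component of the axis tracking errors perpendicular to the commanded path. The sketch below is a simplified kinematic illustration for a straight path segment at angle θ, not the thesis's full controller; one coupled update shrinks the perpendicular deviation by the factor (1 - gain):

```python
import math

def contour_error(ex, ey, theta):
    """First-order contour-error estimate for a straight segment at angle
    `theta`: the component of the axis tracking errors (ex, ey)
    perpendicular to the path direction."""
    return ey * math.cos(theta) - ex * math.sin(theta)

def ccc_step(ex, ey, theta, gain=0.5):
    """One cross-coupled update: feed the shared contour error back to each
    axis weighted by its geometric contribution, so the perpendicular
    deviation is multiplied by (1 - gain) while along-path error is untouched."""
    eps = contour_error(ex, ey, theta)
    ex2 = ex + gain * eps * math.sin(theta)
    ey2 = ey - gain * eps * math.cos(theta)
    return ex2, ey2
```

This is why CCC outperforms independent per-axis loops on contouring: each axis alone sees only its own error, but the coupled term directly targets the quantity that defines machining accuracy.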
Chang, Yu-Wen, and 張玉雯. "High-Speed VLSI Architecture Design for Error Correcting Codes." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/51505583478216919282.
Full textI-Shou University
Ph.D. Program, Department of Electrical Engineering
Academic year 94 (2005-2006)
In this dissertation, we study Reed-Solomon (RS) codes and Huffman codes and propose several novel improved decoding algorithms that simplify the software and hardware implementation of RS and Huffman codes. First, in Chapter 3, a modified Euclidean decoding algorithm that solves Berlekamp's key equation for correcting errors only is presented. It solves the error locator and error evaluator polynomials simultaneously without performing polynomial division or field element inversion. In this proposed algorithm, the number of iterations used to solve the equation is fixed, and the weights used to reduce the degree of the error evaluator polynomial at each iteration can be extracted from a fixed coefficient position. The proposed algorithm therefore saves much control circuitry and provides a regular, modular architecture. Next, in Chapter 4, a modified decoding algorithm that corrects both errors and erasures in an RS decoder is proposed, improving on the idea developed by Eastman: the errata locator and errata evaluator polynomials can be obtained simultaneously by initializing the Euclideanized BM algorithm with the erasure locator polynomial and the Forney syndromes. This modified algorithm is compared with the modified inverse-free BM algorithm proposed by Truong et al. through a software simulation in C++ and a hardware simulation in a hardware description language. An illustrative example with the (255, 239) RS code shows that the modified inverse-free BM algorithm is still faster in software simulation; however, the modified Euclideanized BM algorithm has lower delay in hardware implementation because of its pipeline structure. Moreover, the VLSI architectures of these proposed algorithms are also developed.
There are two main computation units in the VLSI architecture of these proposed algorithms, and each unit is constructed using only one simple type of processing element (PE). The architectures are therefore simple, modular, and regular, and can easily be configured for various applications. Finally, in Chapter 5, a direct mapping technique (DMT) based on the concepts of pattern partition and canonical codes is proposed for a JPEG Huffman decoder. It has two merits: first, the run/size symbols and codeword lengths can be obtained directly; second, no AC Huffman codewords need to be stored in memory at the decoding end. In other words, a single lookup obtains the corresponding symbol and codeword length, so decoding is faster. In addition, storage is used more effectively because the AC Huffman codewords are not stored in memory at the decoding end. Furthermore, a VLSI architecture of this proposed algorithm is also developed and included.
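The canonical-code property exploited by such direct-mapping decoders is that codewords of the same length are consecutive integers, so a symbol can be located from just the first code and first symbol index of each length, with no stored codeword table. A generic sketch of that property (not the thesis's specific DMT hardware or the JPEG run/size tables):

```python
def build_tables(lengths):
    """Canonical-code side tables from a symbol -> code-length map:
    per length, the first code value, the index of its first symbol in the
    canonically ordered symbol list, and the count of codes of that length."""
    syms = sorted(lengths, key=lambda s: (lengths[s], s))
    first_code, first_index, count = {}, {}, {}
    code = prev = idx = 0
    for s in syms:
        L = lengths[s]
        code <<= (L - prev)                 # canonical rule: extend and continue
        if L not in first_code:
            first_code[L], first_index[L] = code, idx
        count[L] = count.get(L, 0) + 1
        code += 1
        idx += 1
        prev = L
    return syms, first_code, first_index, count

def decode(bits, tables):
    """Decode by direct mapping: accumulate bits; once the value falls in the
    canonical range for the current length, index straight into the symbol
    list -- no per-codeword table lookup."""
    syms, first_code, first_index, count = tables
    out, val, L = [], 0, 0
    for b in bits:
        val = (val << 1) | b
        L += 1
        if L in first_code and first_code[L] <= val < first_code[L] + count[L]:
            out.append(syms[first_index[L] + val - first_code[L]])
            val = L = 0
    return out
```

With lengths {a:1, b:2, c:3, d:3} this assigns a=0, b=10, c=110, d=111, and each symbol is recovered with one range test per bit length rather than a search over stored codewords.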