Dissertations / Theses on the topic 'Compression test'
Consult the top 50 dissertations / theses for your research on the topic 'Compression test.'
Gattis, Sherri L. "Ruggedized Television Compression Equipment for Test Range Systems." International Foundation for Telemetering, 1988. http://hdl.handle.net/10150/615062.
The Wideband Data Protection Program arose from the need to develop digitized, compressed video to enable encryption.
Jas, Abhijit. "Test vector compression techniques for systems-on-chip /." Full text (PDF) from UMI/Dissertation Abstracts International, 2001. http://wwwlib.umi.com/cr/utexas/fullcit?p3008359.
Sjöstrand, Björn. "Evaluation of Compression Testing and Compression Failure Modes of Paperboard : Video analysis of paperboard during short-span compression and the suitability of short- and long-span compression testing of paperboard." Thesis, Karlstads universitet, Institutionen för ingenjörs- och kemivetenskaper (from 2013), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-27519.
Full textNavickas, T. A., and S. G. Jones. "PULSE CODE MODULATION DATA COMPRESSION FOR AUTOMATED TEST EQUIPMENT." International Foundation for Telemetering, 1991. http://hdl.handle.net/10150/612065.
Development of automated test equipment for an advanced telemetry system requires continuous monitoring of PCM data while exercising telemetry inputs. This requirement leads to a large amount of data that must be stored and later analyzed. For example, a data stream of 4 Mbits/s and a test time of thirty minutes would yield 900 Mbytes of raw data. In addition, information must be stored to correlate the raw data to the test stimulus, giving a total of 1.8 Gbytes of data to be stored and analyzed. There is no method to analyze this amount of data in a reasonable time, so a data compression method is needed to reduce the amount of data collected to a manageable size. The solution was data reduction, accomplished by real-time limit checking, time stamping, and smart software. Limit checking was implemented with an eight-state finite state machine and four compression algorithms. Time stamping was needed to correlate stimulus to the appropriate output for data reconstruction. The software was written in the C programming language with a DOS extender so that it could run in extended mode. A 94-98% reduction in the amount of data gathered was accomplished using this method.
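As a quick check of the data-volume figures quoted above, and as a hedged illustration of the limit-checking idea (the thesis implements an eight-state finite state machine in hardware; the toy filter below is only an assumption about what a limit check keeps), here is a short Python sketch:

```python
# Data-volume arithmetic from the abstract: a 4 Mbit/s PCM stream captured for
# 30 minutes, roughly doubled once stimulus-correlation information is added.
raw_bytes = 4e6 / 8 * 30 * 60            # = 900,000,000 bytes (900 Mbytes) of raw PCM
total_bytes = 2 * raw_bytes              # ~1.8 Gbytes once correlation data is stored
print(f"{raw_bytes / 1e6:.0f} Mbytes raw, {total_bytes / 1e9:.1f} Gbytes total")

def limit_check(samples, low, high):
    """Toy limit-checking reduction (illustrative, not the thesis FSM): keep only
    time-stamped samples that fall outside the [low, high] band."""
    return [(t, x) for t, x in enumerate(samples) if x < low or x > high]

print(limit_check([5, 7, 12, 6, -2, 8], low=0, high=10))   # [(2, 12), (4, -2)]
```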
Poirier, Régis. "Compression de données pour le test des circuits intégrés." Montpellier 2, 2004. http://www.theses.fr/2004MON20119.
Khayat Moghaddam, Elham. "On low power test and low power compression techniques." Diss., University of Iowa, 2011. https://ir.uiowa.edu/etd/997.
Full textZacharia, Nadime. "Compression and decompression of test data for scan-based designs." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape11/PQDD_0004/MQ44048.pdf.
Zacharia, Nadime. "Compression and decompression of test data for scan based designs." Thesis, McGill University, 1996. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=20218.
Full textThe design of the decompression unit is treated in depth and a design is proposed that minimizes the amount of extra hardware required. In fact, the design of the decompression unit uses flip-flops already on the chip: it is implemented without inserting any additional flip-flops.
The proposed scheme is applied in two different contexts: (1) in (external) deterministic-stored testing, to reduce the memory requirements imposed on the test equipment; and (2) in built-in self test, to design a test pattern generator capable of generating deterministic patterns with modest area and memory requirements.
Experimental results are provided for the largest ISCAS'89 benchmarks. All of these results show that the proposed technique greatly reduces the amount of test data while requiring little area overhead. Compression factors of more than 20 are reported for some circuits.
Pateras, Stephen. "Correlated and cube-contained random patterns : test set compression techniques." Thesis, McGill University, 1991. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=70300.
Full textThe concepts of correlated and cube-contained random patterns can be viewed as methods to compress a deterministic test set into a small amount of information which is then used to control the generation of a superset of the deterministic test set. The goal is to make this superset as small as possible while maintaining its containment of the original test set. The two concepts are meant to be used in either a Built-In Self-Test (BIST) environment or with an external tester when the storage requirements of a deterministic test are too large.
Experimental results show that both correlated and cube-contained random patterns can achieve 100% fault coverage of synthesized circuits using orders of magnitude fewer patterns than when equiprobable random patterns are used.
Dalmasso, Julien. "Compression de données de test pour architecture de systèmes intégrés basée sur bus ou réseaux et réduction des coûts de test." Thesis, Montpellier 2, 2010. http://www.theses.fr/2010MON20061/document.
As microelectronic systems become more and more complex, test costs have increased accordingly. Recent years have seen many works focused on test cost reduction through test data compression. However, these techniques target individual digital circuits whose structural implementation (netlist) is fully known to the designer, so they are not suitable for testing the cores of a complete system. The goal of this PhD work was to provide a new solution for test data compression of integrated circuits that takes into account the paradigm of systems-on-chip (SoC) built from pre-synthesized functions (IPs or cores). Two system-level testing methods using compression are proposed for two different system architectures: the first concerns SoCs with an IEEE 1500 test architecture (with a bus-based test access mechanism), the second concerns NoC-based systems. Both techniques combine test scheduling with test data compression for better exploration of the design space; the idea is to increase test parallelism at no extra hardware cost. Experimental results on system-on-chip benchmarks show that using test data compression leads to a test time reduction of about 50% at the system level.
Liu, Yingdi. "Design for test methods to reduce test set size." Diss., University of Iowa, 2018. https://ir.uiowa.edu/etd/6459.
Full textWillis, Stephen, and Bernd Langer. "A Duel Compression Ethernet Camera Solution for Airborne Applications." International Foundation for Telemetering, 2014. http://hdl.handle.net/10150/577522.
Camera technology is now ubiquitous, with smartphones, laptops, automotive and industrial applications frequently utilizing high-resolution image sensors. Increasingly there is a demand for high-definition cameras in the aerospace market; however, such cameras must satisfy several requirements that do not apply to average consumer use, including high reliability and ruggedization for harsh environments. A significant issue is managing the large volumes of data that one or more HD cameras produce. One method of addressing this issue is to use compression algorithms that reduce video bandwidth, either with dedicated compression units or with modules within data acquisition systems. For flight test applications it is important that data from cameras is available for telemetry and coherently synchronized while also being available for storage. Ideally the data in the telemetry stream should be highly compressed to preserve downlink bandwidth, while the recorded data is lightly compressed to provide maximum quality for onboard and post-flight analysis. This paper discusses the requirements for airborne applications and presents an innovative solution using Ethernet cameras with integrated compression that output two streams of data. This removes the need for dedicated video and compression units while offering all of their features, including switching camera sources and optimized video streams.
Limprasert, Tawan. "Behaviour of soil, soil-cement and soil-cement-fiber under multiaxial test." Ohio University / OhioLINK, 1995. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1179260769.
Full textWegener, John A., and Gordon A. Blase. "AN ONBOARD PROCESSOR FOR FLIGHT TEST DATA ACQUISITION SYSTEMS." International Foundation for Telemetering, 2003. http://hdl.handle.net/10150/605592.
Today's flight test programs are experiencing increasing demands for a greater number of high-rate digital parameters, competition for spectrum space, and a need for operational flexibility in flight test instrumentation. These demands must be met within schedule and budget constraints. To address these various needs, the Boeing Integrated Defense System (IDS) Flight Test Instrumentation group in St. Louis has developed an onboard processing capability for use with airborne instrumentation data collection systems. This includes a first-generation Onboard Processor (OBP) which has been successfully used on the F/A-18E/F Super Hornet flight test program for four years, and which provides a throughput of 5 Mbytes/s and a processing capability of 480 Mflops (floating-point operations per second). Boeing IDS Flight Test is also currently developing a second-generation OBP which features greatly enhanced input and output flexibility and algorithm programmability, and is targeted to provide a throughput of 160 Mbytes/s with a processing capability of 16 Gflops. This paper describes these onboard processing capabilities and their benefits.
Palmieri, Giulia. "Diagonal compression tests on masonry panels reinforced with composite materials FRCM." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017.
Li, Yun, Mårten Sjöström, Ulf Jennehag, Roger Olsson, and Sylvain Tourancheau. "Subjective Evaluation of an Edge-based Depth Image Compression Scheme." Mittuniversitetet, Avdelningen för informations- och kommunikationssystem, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-18539.
Full textJohnson, David Page. "A study of tension, compression, and shear test methods for advanced composites." Thesis, Virginia Tech, 1991. http://hdl.handle.net/10919/42129.
Full textA study of the literature pertaining to test methods for advanced composite materials has
been carried out. Several test methods were discussed and compared for each of three
areas of interest. These areas were uniaxial tension, uniaxial compression and in-plane
shear. Test methods were selected for tension, compression and shear and guidelines set
for the entry of material property data into a comprehensive mechanical property database
being undertaken by Virginia Tech's Center for Composite Materials and Structures
(CCMS). According to the findings, recommendations for future work were made.
Master of Science
Piao, Kun. "An Elevated-Temperature Tension-Compression Test and Its Application to Mg AZ31B." The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1316096630.
Full textAmin, Diyar. "Triaxial testing of lime/cement stabilized clay : A comparison with unconfined compression tests." Thesis, KTH, Jord- och bergmekanik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-160626.
This master thesis presents results from a laboratory study on a clay from Enköping stabilized with lime and cement. Isotropically consolidated undrained compression tests were performed on samples and compared to unconfined compression tests. The two methods showed no difference in the evaluated undrained shear strength; however, the modulus of elasticity was much higher for the triaxial tests. For the unconfined compression tests the relation between the undrained shear strength and the secant modulus was within the range 44-146, while the equivalent for the triaxial tests was in the interval 112-333. However, no pattern was distinguishable between the two tests, as this relation varied between 1.0 and 3.5. A lower and a higher back pressure were used during the triaxial testing; both succeeded in saturating the sample, and the results show that the back pressure has little effect on the results as long as the sample is fully saturated. In addition, extension tests were performed on samples as well. These tests were isotropically consolidated undrained, but two different shearing methods were used: the first test was strain-rate controlled while the second was stress-rate controlled. In the first test the vertical stress decreased while the radial stresses were kept constant, whereas in the other the radial stresses increased while the vertical stress was kept constant. The undrained shear strength was also compared to lime/cement column penetration tests in the field, which showed a much higher undrained shear strength than the laboratory testing.
Dogra, Jasween. "The development of a new compression test specimen design for thick laminate composites." Thesis, Imperial College London, 2011. http://hdl.handle.net/10044/1/7121.
Full textKumar, Amit. "Generation of compact test sets and a design for the generation of tests with low switching activity." Diss., University of Iowa, 2014. https://ir.uiowa.edu/etd/1476.
Full textRohrbach, Thomas Juhl. "Investigation of Design, Manufacture, Analysis, and Test of a Composite Connecting Rod Under Compression." DigitalCommons@CalPoly, 2019. https://digitalcommons.calpoly.edu/theses/1996.
Full textMoghaddassian, Shahidi Arash. "The development of test procedures for controlling the quality of the manufacture of engineered compression stockings." Thesis, University of Manchester, 2010. https://www.research.manchester.ac.uk/portal/en/theses/the-development-of-test-procedures-for-controlling-the-quality-of-the-manufacture-of-engineered-compression-stockings(64b8320b-fcfe-4c3f-95e7-ade019527703).html.
Full textMolina, Villegas Alejandro. "Compression automatique de phrases : une étude vers la génération de résumés." Phd thesis, Université d'Avignon, 2013. http://tel.archives-ouvertes.fr/tel-00998924.
Full textLagarde, Guillaume. "Contributions to arithmetic complexity and compression." Thesis, Sorbonne Paris Cité, 2018. http://www.theses.fr/2018USPCC192/document.
This thesis explores two territories of computer science: complexity and compression. More precisely, in the first part we investigate the power of non-commutative arithmetic circuits, which compute multivariate non-commutative polynomials. For that, we introduce various models of computation that are restricted in the way they are allowed to compute monomials. These models generalize previous ones that have been widely studied, such as algebraic branching programs. The results are of three different types. First, we give strong lower bounds on the number of arithmetic operations needed to compute some polynomials, such as the determinant or the permanent. Second, we design a deterministic polynomial-time algorithm to solve the white-box polynomial identity testing problem. Third, we exhibit a link between automata theory and non-commutative arithmetic circuits that allows us to derive some old and new tight lower bounds for certain classes of non-commutative circuits, using a measure based on the rank of a so-called Hankel matrix. The second part is concerned with the analysis of the data compression algorithm called Lempel-Ziv. Although this algorithm is widely used in practice, little is known about its stability. Our main result shows that an infinite word compressible by LZ'78 can become incompressible by adding a single bit in front of it, thus settling a question proposed by Jack Lutz in the late 90s under the name "one-bit catastrophe". We also give tight bounds on the maximal possible variation between the compression ratio of a finite word and that of its perturbation when one bit is added in front of it.
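For readers unfamiliar with LZ'78, here is a minimal Python sketch of its parsing (the textbook algorithm, not code from the thesis): the input is cut into phrases, each being a previously seen phrase extended by one symbol, and the number of phrases measures how compressible the word is. Prepending a single bit changes the parsing from the very first phrase, which is the root of the one-bit catastrophe question studied here.

```python
def lz78_parse(word):
    """LZ'78 parsing: return the phrase list as (prefix_index, next_symbol) pairs."""
    dictionary = {"": 0}          # phrase -> index; index 0 is the empty phrase
    phrases = []
    current = ""
    for symbol in word:
        if current + symbol in dictionary:
            current += symbol                         # keep extending the match
        else:
            phrases.append((dictionary[current], symbol))
            dictionary[current + symbol] = len(dictionary)
            current = ""
    if current:                                       # leftover phrase already known
        phrases.append((dictionary[current], None))
    return phrases

# Fewer phrases means better compression; compare a word with its one-bit perturbation.
w = "0" * 20
print(len(lz78_parse(w)), len(lz78_parse("1" + w)))   # 6 7
```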
Koray, Erge. "Numerical And Experimental Analysis Of Indentation." Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/2/12605953/index.pdf.
A numerical and experimental investigation of force-indentation measurements is presented. For indentation tests on anisotropic metals, a novel indenter which is not self-similar is used with three transducers to measure the displacements. To obtain high repeatability and accuracy in the tests, workpiece and indenter parameters are of crucial importance. These parameters are analyzed by finite element methods, and ideal dimensions of the workpiece are determined. It is shown that plane strain conditions can only be achieved by embedded indentations. The effects of surface quality and clamping on repeatability are investigated: surface treatments have a significant effect on the results, and clamping increases the repeatability drastically. Moreover, indentation tests are conducted to verify the results of the numerical simulations. The effect of anisotropy on the force-displacement curves is clearly observed.
Fincan, Mustafa. "Assessing Viscoelastic Properties of Polydimethylsiloxane (PDMS) Using Loading and Unloading of the Macroscopic Compression Test." Scholar Commons, 2015. https://scholarcommons.usf.edu/etd/5480.
Full textITO, Hideo, and Gang ZENG. "Low-Cost IP Core Test Using Tri-Template-Based Codes." Institute of Electronics, Information and Communication Engineers, 2007. http://hdl.handle.net/2237/15029.
Full textYahyaoui, Imen. "Contribution au suivi par émission acoustique de l'endommagement des structures multi-matériaux à base de bois." Thesis, Toulouse 3, 2017. http://www.theses.fr/2017TOU30266/document.
The application of wood-concrete-composite hybrid materials in mechanical structures is steadily increasing. These multi-material structures are both original and mechanically promising; nevertheless, their use is still recent, which results in a certain lack of knowledge about their behavior, in particular with regard to damage that may degrade their mechanical properties. In this context, acoustic emission (AE) may be an appropriate non-destructive method for the inspection and control of these structures. In order to characterize the evolution of damage in multi-material structures, it is essential to characterize the damage of each constituent material. This work presents the first part of the project for the assessment of hybrid structures: the characterization by acoustic emission of damage in wood. One of the difficulties associated with acoustic emission monitoring of wood is the variability and complexity of its response, because the AE response depends on the structure of the wood species and on the loading condition. In this study, under different mechanical loadings (standard tensile, compression and bending tests), the damage of three wood species (Douglas fir, silver fir and poplar) is characterized by the acoustic emission technique. The results show that acoustic emission is efficient for the early detection of damage in wood. It also makes it possible to refine the damage scenarios and to differentiate the acoustic signatures of different mechanisms by means of unsupervised pattern recognition algorithms. Moreover, the results confirm that the acoustic response depends not only on the wood species but also on the loading condition.
Bicer, Gokhan. "Experimental And Numerical Analysis Of Compression On A Forging Press." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612155/index.pdf.
Full textGidrão, Salmen Saleme. "Avaliação experimental do grau de confiabilidade dos ensaios à compressão do concreto efetivados em laboratórios." Universidade Federal de Uberlândia, 2014. https://repositorio.ufu.br/handle/123456789/14209.
Measurements of a physical quantity invariably involve errors and uncertainties, and the results of a concrete compressive strength test are no exception. Measuring is an act of comparison whose precision may depend on the instruments, the operators and the measurement process itself. This work analyzes the factors that affect the quality of concrete compression test results and evaluates the degree of reliability of the tests carried out by several laboratories, with a focus on measurement errors. It begins with a conceptual review of quality and its relation to concrete construction; a testing programme was then organized to verify the reliability of the results along two complementary paths: the first analyzes the dispersion of results obtained by different methods, chiefly through a reference test method established from a result fixed as the standard, and the second characterizes the types of errors produced. The results, sound with respect to the methodology adopted to produce the specimens and significant with respect to the data-collection strategy, revealed an undesirable state of the conditions that define the degree of reliability. Classified as not consistent, a considerable number of the laboratories evaluated in three distinct stages of the experimental verification reported measurement results that were inadequate for the concrete strength, failing to meet the precision expected of this important quality-control procedure.
Master in Civil Engineering
Assaf, Mansour Hanna. "Digital core output test data compression architecture based on switching theory concepts: Model implementation and analysis." Thesis, University of Ottawa (Canada), 2003. http://hdl.handle.net/10393/29008.
Full textLemmon, Heber. "Methods for reduced platen compression (RPC) test specimen cutting locations using micro-CT and planar radiographs." Texas A&M University, 2003. http://hdl.handle.net/1969/310.
Full textLai, Hung-Kuo, and 賴弘國. "Origami-Based Test Data Compression." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/w2eks3.
Full text元智大學
資訊工程學系
107
With the advance of VLSI technology, the increasing in chip density and circuit size have made not only the circuit testing more difficult but also the increasing in the test cost which includes the testing time as well as the test power consumption. Among which testing time is deeply related to the bandwidth of ATE (Automatic Test Equipment). The proposed approach is based on pattern run-length coding to reduce the test data volume as well as testing cost. We use a reverse bit to invers the bits in the reference pattern to increase number of compatible patterns. In addition, we use extend-bits to combine more compatible codewords and to further reduce more test data volume. Experimental results show the proposed approach can reduce test data volume significantly.
Liao, Yu-De, and 廖育德. "Reducing Test Power in Linear Test Data Compression Schemes." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/44315923360483562148.
Full textChang, Chih-Ming, and 張志銘. "Test Pattern Compression for Probabilistic Circuit." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/py9z26.
Full text國立臺灣大學
電子工程學研究所
105
Probabilistic circuits are very attractive for the next generation ultra-low power designs. It is important to test probabilistic circuits because a defect in probabilistic circuit may increase the erroneous probability. However, there is no suitable fault model and test generation/compression technique for probabilistic circuits yet. In this paper, a probabilistic fault model is proposed for probabilistic circuits. The number of faults is linear to the gate count. A statistical method is proposed to calculate the repetition needed for each test pattern. An integer linear programming (ILP) method is presented to minimize total test length, while keeping the same fault coverage. Experiments on ISCAS’89 benchmark circuits show the total test length of our proposed ILP method is 2.77 times shorter than a greedy method.
Chen, Chang-Wen, and 陳昶聞. "Test Compression with Single-Input Data Spreader and Multiple Test Sessions." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/ka6k68.
Full textCHANG, YING-HSING, and 張瑛興. "Axial Compression Test of Hollow Circular Column." Thesis, 1999. http://ndltd.ncl.edu.tw/handle/78002616800479541831.
Full textJhuang, Jin-Kun, and 莊進琨. "Segmented LFSR Reseeding For Test Data Compression." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/43666344224392480070.
Full text元智大學
資訊工程學系
104
Test data compression is a popular topic in VLSI testing. It is the key factor that will determine the quality for the final testing results. Built-in-self-test (BIST) is a technique which can test itself and verify the correctness of the circuit under test without any external device involved, resulting in the reduction of test data volume. In this thesis, based on single linear feedback shift register (LFSR) architecture, we proposed to achieve a better test data compression ratio by changing the polynomial and segmenting the test cube. Experimental results show that, we can get great compression ratio by using the method in six larger benchmark ISCAS’89 circuits .
Lee, Lung-Jen, and 李隆仁. "Test Data Compression for Scan-Based Designs." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/20236635742550782954.
Full text元智大學
資訊工程學系
98
A single-chip SOC design consists of a number of modules and intellectual property (IP) cores where a mass of transistors are used. Although increasing integration produces robust designs, many more faults are created accordingly. To detect them, a large amount of test data with longer scan chains is required. This tendency takes longer testing time due to longer stored pattern used in an SOC. This dissertation focuses on investigating a strategy for test data compression. For this purpose, we propose six compression techniques. These techniques can be classified into three categories: the code-based scheme, the linear-decompressor- based scheme, and the broadcast scheme. For the code-based scheme, we propose three compression methods. The first method encodes runs of variable-length patterns in views of multiple dimensions of pattern information. In the second method, compatible (or inversely compatible) patterns inside a single segment and across multiple segments are both considered in improving the compression effect. Experimental results for the large ISCAS’89 benchmark circuits show that this method can achieve up to 67.64% of average compression ratio. The third method is based on the observation that in a well-sorted test sequence, the Hamming distance between consecutive test vectors are very few. If the position of each difference bit is recorded, the successive test vector can be reconstructed from its precedent test vector. Experimental results show that good compression can be achieved. Besides, good adaptability is also demonstrated for industry-scale circuits. For the linear-decompressor-based scheme, we combine multiple LFSRs to reduce both test data volume and test power consumption simultaneously. Results show that an average reduction up to 87.12% of shifting power and 80.92% of average compression ratio can be achieved for the larger ISCAS’89 benchmarks. For the broadcast scheme, we propose two efficient methods. The first method compresses test data by merging compatible columns in the test set repeatedly, such that test data in the corresponding scan cells can be shared. Experimental results for the large ISCAS’89 benchmark circuits has shown 60.39% of test data volume and 59.75% of test application time can be reduced. The second method is titled as “Cascaded Broadcasting for Test Data Compression”. The basic idea is to repeatedly broadcast the compressed test data to compatible scan chains with a gradually-reduced scope according to the compatibility analysis among scan chains. Experimental results have demonstrated that this method is superior to the recently-proposed ones by other researchers.
Yang, Kai-Chieh, and 楊凱傑. "ATPG and Test Compression for Probabilistic Circuits." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/2vm53r.
Full text國立臺灣大學
電子工程學研究所
106
Probabilistic circuits are gaining importance in the next generation ultra low-power computing and quantum computing. Unlike testing deterministic circuits, where each test pattern is applied only once, testing probabilistic circuits requires multiple pattern repetitions for each test pattern. However, previous test pattern selection techniques require long test length so it is time consuming. In this thesis, we propose an ATPG algorithm for probabilistic circuits. We use specialized activation and propagation methods to reduce pattern repetitions. Also, we propose to accumulate contribution among different patterns to further reduce pattern repetitions. Experiments on ISCAS’89 benchmark circuits show the total test length of our proposed method is 34% shorter than a greedy method [Chang 17].
Haldigundi, Tapan. "Compression test of aluminium at high temperature." Thesis, 2012. http://ethesis.nitrkl.ac.in/3463/1/FinalThesis.TAPAN_.pdf.
Full text田子坤. "An efficient pseudo exhaustive test pattern generation and test response compression method." Thesis, 1991. http://ndltd.ncl.edu.tw/handle/70783714455495152789.
Full textLee, Jinkyu. "Low power scan testing and test data compression." Thesis, 2006. http://hdl.handle.net/2152/2568.
Full textBaltaji, Najad Borhan. "Scan test data compression using alternate Huffman coding." Thesis, 2012. http://hdl.handle.net/2152/ETD-UT-2012-05-5615.
Full texttext
陳徵君. "Stress Analysis of Non-uniform Cylindrical Compression Test." Thesis, 1996. http://ndltd.ncl.edu.tw/handle/44080142203732405348.
Full text國立臺灣大學
機械工程學研究所
84
The formation of methemoglobin with additions of SOD/CAT is significantly less than that of LEH without SOD/CAT at temperatures of 25°C and 37°C; for example, 8-10% less methemoglobin is formed at 37°C for LEH with SOD/CAT. Using 20% met-Hb formation as a criterion for LEH, Hb with SOD/CAT can be stored at 25°C for three days but only 3.5 hours at 37°C.
Lu, Chia-Che, and 呂佳哲. "Fibonacci Pattern Run-Length for Test Data Compression." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/99478416818766343777.
Full text元智大學
資訊工程學系
104
The density of integrated circuits increases lead to the VLSI technology grows up. Therefore, testing for integrated circuit is more and more complex. Many new techniques have been proposed to reduce test data volume so as to save memory cost and improve the transmission efficiency between Tester (ATE) and SOC. One solution to this problem is to use compression techniques to reduce the volume of test data. The proposed thesis use a pattern run-length based compression method considering Fibonacci sequence. Such as pattern length and number of pattern runs is encoded to denote the compression status. Improvements are experimentally demonstrated on larger ISCAS’89 benchmarks using MinTest. Experimental result show that the average compression rate is increased compared with attempts before.
Liaw, Yu-Te. "A Two-level Test Data Compression and Test Time Reduction Technique for SOC." 2005. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0001-2107200507013700.
Full textLiaw, Yu-Te, and 廖育德. "A Two-level Test Data Compression and Test Time Reduction Technique for SOC." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/87668965652681971204.
Full text國立臺灣大學
電子工程學研究所
93
In SOC era, long test time and large test data volume are two serious problems. In this thesis, a two-level test data compression technique is presented to reduce both the test data and the test time for System on a Chip (SOC). The level one compression is achieved by selective-Huffman coding for the entire SOC. The level two compression is achieved by broadcasting test patterns to multiple cores simultaneously. Experiments on the d695 benchmark SOC show that the test data and test time are reduced by 64% and 35%, respectively. This technique requires no change of cores and hence provides a good SOC test integration solution for the SOC assemblers.
Chakravadhanula, Krishna V.; Touba, Nur A. "New test vector compression techniques based on linear expansion." 2004. http://repositories.lib.utexas.edu/bitstream/handle/2152/1886/chakravadhanulakv042.pdf.