
Dissertations / Theses on the topic 'Compression test'



Consult the top 50 dissertations / theses for your research on the topic 'Compression test.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1. Gattis, Sherri L. "Ruggedized Television Compression Equipment for Test Range Systems." International Foundation for Telemetering, 1988. http://hdl.handle.net/10150/615062.

Abstract:
International Telemetering Conference Proceedings / October 17-20, 1988 / Riviera Hotel, Las Vegas, Nevada
The Wideband Data Protection Program arose from the need to develop digitized, compressed video to enable encryption.
2. Jas, Abhijit. "Test vector compression techniques for systems-on-chip." Full text (PDF) from UMI/Dissertation Abstracts International, 2001. http://wwwlib.umi.com/cr/utexas/fullcit?p3008359.

3. Sjöstrand, Björn. "Evaluation of Compression Testing and Compression Failure Modes of Paperboard: Video analysis of paperboard during short-span compression and the suitability of short- and long-span compression testing of paperboard." Thesis, Karlstads universitet, Institutionen för ingenjörs- och kemivetenskaper (from 2013), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-27519.

Abstract:
The objectives of the thesis were to find the mechanisms that govern compression failures in paperboard and to establish the link between the manufacturing process and paperboard properties. The thesis also investigates two different test methods and evaluates how suitable they are for different paperboard grades. The materials are several commercial board grades and a set of hand-formed dynamic sheets made to mimic the construction of commercial paperboard. The method consists of mounting a stereomicroscope on a short-span compression tester and recording the compression failure on video, long-span compression testing, and standard properties testing. The observed failure modes of paperboard under compression were classified into four categories depending on the appearance of the failures. Initiation of failure takes place where the structure is weakest, and fiber buckling happens after the initiation, which consists of breaking of fiber-fiber bonds or fiber wall delamination. Compression strength is correlated with density, and operations and raw materials that increase the density also increase the compression strength. Short-span and long-span compression are not suitable for testing all kinds of papers: the clamps in short-span testing give bulky specimens an initial geometrical shape that can affect the measured compression strength, and long-span compression is only suitable for a limited range of papers; one problem with very thin papers is low-wavelength buckling.
4. Navickas, T. A., and S. G. Jones. "Pulse Code Modulation Data Compression for Automated Test Equipment." International Foundation for Telemetering, 1991. http://hdl.handle.net/10150/612065.

Abstract:
International Telemetering Conference Proceedings / November 04-07, 1991 / Riviera Hotel and Convention Center, Las Vegas, Nevada
Development of automated test equipment for an advanced telemetry system requires continuous monitoring of PCM data while exercising telemetry inputs. This requirement leads to a large amount of data that needs to be stored and later analyzed. For example, a data stream of 4 Mbits/s and a test time of thirty minutes would yield 900 Mbytes of raw data. Alongside this raw data, information needs to be stored to correlate the raw data to the test stimulus, leading to a total of 1.8 Gbytes of data to be stored and analyzed. There is no method to analyze this amount of data in a reasonable time, so a data compression method is needed to reduce the amount of data collected to a manageable amount. The solution was data reduction, accomplished by real-time limit checking, time stamping, and smart software. Limit checking was accomplished by an eight-state finite state machine and four compression algorithms. Time stamping was needed to correlate stimulus to the appropriate output for data reconstruction. The software was written in the C programming language, with a DOS extender used to allow it to run in extended mode. A 94-98% reduction in the amount of data gathered was accomplished using this method.
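As a quick sanity check of the figures quoted in this abstract, the short Python sketch below reproduces the arithmetic; the assumption that stimulus-correlation records double the raw volume is inferred from the stated 1.8 Gbyte total, not from the paper.

```python
# Sanity check of the data volumes quoted in the abstract (illustrative only).
BITS_PER_BYTE = 8

rate_mbit_s = 4.0            # PCM stream of 4 Mbit/s
test_time_s = 30 * 60        # thirty-minute test
raw_mbytes = rate_mbit_s * test_time_s / BITS_PER_BYTE
print(f"raw data: {raw_mbytes:.0f} Mbytes")                 # -> 900 Mbytes

# Assumption: stimulus-correlation records roughly double the volume,
# which matches the stated 1.8 Gbyte total.
total_mbytes = 2 * raw_mbytes
print(f"with correlation data: {total_mbytes / 1000:.1f} Gbytes")  # -> 1.8

# A 94-98% reduction leaves only 2-6% of the data to store.
for reduction in (0.94, 0.98):
    kept = total_mbytes * (1 - reduction)
    print(f"{reduction:.0%} reduction -> {kept:.0f} Mbytes kept")
```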
5. Poirier, Régis. "Compression de données pour le test des circuits intégrés." Montpellier 2, 2004. http://www.theses.fr/2004MON20119.

6. Khayat Moghaddam, Elham. "On low power test and low power compression techniques." Diss., University of Iowa, 2011. https://ir.uiowa.edu/etd/997.

Abstract:
With the ever-increasing integration capability of semiconductor technology, today's large integrated circuits require an increasing amount of data to test them, which increases test time and the requirements on tester memory. At the same time, as VLSI design sizes and operating frequencies continue to increase, timing-related defects make up a high proportion of total chip defects, and at-speed test is crucial. DFT techniques are widely used to improve the testability of a design. While DFT techniques facilitate the generation and application of tests, they may cause the test vectors to contain non-functional states, which result in higher switching activity compared to the functional mode of operation. Excessive switching activity causes higher power dissipation as well as higher peak supply currents. Excessive power dissipation may cause hot spots that could damage the circuit. Excessive peak supply currents may cause higher IR drops, which increase signal propagation delays during test and cause yield loss. Several methods have been proposed to reduce the switching activity in the circuit under test during shift and capture cycles. While these methods reduce switching activity during test and eliminate the abnormal IR drop, circuits may now operate faster on the tester than they would in the actual system. For speed-related and high-resistance defect mechanisms, this kind of undertesting means that the device could be rejected by the systems integrator or by the end consumer, increasing the DPPM of the devices. It is therefore critical to ensure that the peak switching activity generated during the two functional clock cycles of an at-speed test is as close as possible to the functional switching activity levels specified for the device. The first part of this dissertation proposes a new method to generate test vectors that mimic functional operation from the switching-activity point of view. It uses states obtained by applying a number of functional clock cycles, starting from the scan-in state of a test vector, to fill unspecified scan cells in test cubes. Experimental results indicate that for industrial designs the proposed techniques can reduce the peak capture switching on average by 49% while keeping test quality very close to conventional ATPG. The second part of this dissertation addresses IR-drop and power minimization techniques in an embedded deterministic test environment. The proposed technique employs a controller that allows a given scan chain to be driven by either the decompressor or a pseudo-functional background. Experimental results indicate an average 36% reduction in peak switching activity during capture. In the last part of this dissertation, a new low-power test data compression scheme using clock-gater circuitry is proposed to simultaneously reduce test data volume and test power by enabling only a subset of the scan chains in each test phase. Since most of the total power during test is typically in the clock tree, disabling a significant portion of the clock tree in each test phase yields significant reductions in test power in both the combinational logic and the clock distribution network. Using this technique, transitions in the scan chains during both loading of test stimuli and unloading of test responses decrease, which permits an increased scan shift frequency and an increase in the number of cores that can be tested in parallel in multi-core designs. The proposed method can thus decrease test data volume in a power-aware fashion. Experimental results presented for industrial designs demonstrate that reduction factors of 2 in test data volume and 4 in test power are achievable on average.
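To make "switching activity during shift" concrete, here is an illustrative Python sketch (not from the dissertation) of the weighted transitions metric often used in the low-power test literature: a transition between adjacent scan bits is weighted by the number of shift cycles it travels through the chain.

```python
def weighted_transitions(scan_vector: str) -> int:
    """Weighted transitions metric (WTM) for loading one scan vector.

    A transition between adjacent bits j and j+1 (counted from the scan-in
    side) is clocked through the chain for (length - j) shift cycles, so it
    is weighted accordingly. Higher WTM ~ higher shift power.
    """
    length = len(scan_vector)
    return sum(
        (length - j) * (scan_vector[j - 1] != scan_vector[j])
        for j in range(1, length)
    )

# A pattern of alternating bits toggles the chain on almost every shift,
# while a run-length-heavy pattern (as low-power fills aim for) does not.
print(weighted_transitions("01010101"))  # 28: high shift activity
print(weighted_transitions("00001111"))  # 4: a single, late transition
```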
7. Zacharia, Nadime. "Compression and decompression of test data for scan-based designs." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape11/PQDD_0004/MQ44048.pdf.

8. Zacharia, Nadime. "Compression and decompression of test data for scan based designs." Thesis, McGill University, 1996. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=20218.

Abstract:
Traditional methods to test integrated circuits (ICs) require an enormous amount of memory, which makes them increasingly expensive and unattractive. This thesis addresses this issue for scan-based designs by proposing a method to compress and decompress input test patterns. By storing the test patterns in a compressed format, the amount of memory required to test ICs can be reduced to manageable levels. The thesis describes the compression and decompression scheme in detail. The proposed method relies on the insertion of a decompression unit on the chip. During test application, the patterns are decompressed by this unit as they are applied; decompression is thus done on the fly in hardware and does not slow down test application.
The design of the decompression unit is treated in depth and a design is proposed that minimizes the amount of extra hardware required. In fact, the design of the decompression unit uses flip-flops already on the chip: it is implemented without inserting any additional flip-flops.
The proposed scheme is applied in two different contexts: (1) in (external) deterministic-stored testing, to reduce the memory requirements imposed on the test equipment; and (2) in built-in self test, to design a test pattern generator capable of generating deterministic patterns with modest area and memory requirements.
Experimental results are provided for the largest ISCAS'89 benchmarks. All of these results show that the proposed technique greatly reduces the amount of test data while requiring little area overhead. Compression factors of more than 20 are reported for some circuits.
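The abstract does not detail the encoding itself, so the following Python sketch illustrates the general idea with a simple run-length code: patterns are stored compressed and expanded bit by bit, as an on-chip decompressor would feed a scan chain. This is a generic illustration, not the thesis's actual scheme.

```python
from itertools import groupby

def rle_compress(pattern: str) -> list[tuple[str, int]]:
    """Store a scan pattern as (bit, run-length) pairs.

    Test patterns tend to contain long runs of identical fill bits,
    which is what makes stored-pattern compression pay off.
    """
    return [(bit, len(list(run))) for bit, run in groupby(pattern)]

def rle_decompress(code: list[tuple[str, int]]):
    """Yield bits one at a time, as an on-chip decompressor would feed
    them into the scan chain during shifting."""
    for bit, count in code:
        for _ in range(count):
            yield bit

pattern = "0000000011110000000000001111111100000000"
code = rle_compress(pattern)
assert "".join(rle_decompress(code)) == pattern
print(f"{len(pattern)} bits stored as {len(code)} runs")
```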
9. Pateras, Stephen. "Correlated and cube-contained random patterns: test set compression techniques." Thesis, McGill University, 1991. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=70300.

Abstract:
Two novel methods to reduce the number of random test patterns required to fully test a circuit are proposed in this thesis. In the concept of correlated random patterns, reductions in a circuit's random pattern test length are achieved by taking advantage of correlations measured between values applied at different input positions in a complete deterministic test set. Instead of being generated independently, correlated inputs have their random values generated from a common source with each input's value then individually biased at a rate necessary to match the measured correlation. In the concept of cube-contained random patterns, reductions in random pattern test lengths are achieved by the successive assignment of temporarily fixed values to selected inputs during the random pattern generation process.
The concepts of correlated and cube-contained random patterns can be viewed as methods to compress a deterministic test set into a small amount of information which is then used to control the generation of a superset of the deterministic test set. The goal is to make this superset as small as possible while maintaining its containment of the original test set. The two concepts are meant to be used in either a Built-In Self-Test (BIST) environment or with an external tester when the storage requirements of a deterministic test are too large.
Experimental results show that both correlated and cube-contained random patterns can achieve 100% fault coverage of synthesized circuits using orders of magnitude fewer patterns than when equiprobable random patterns are used.
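A minimal sketch of the correlated-random idea, assuming a grouping of inputs and per-input flip rates (my illustration, not the thesis's implementation): inputs that were strongly correlated in the deterministic test set share one random source, and each input's copy is then flipped at an individually chosen bias rate to match the measured correlation.

```python
import random

def correlated_pattern(groups, flip_rates, rng=random.Random(1234)):
    """Generate one random test pattern.

    groups     : lists of input indices; inputs in the same group derive
                 their value from a common random source.
    flip_rates : per-input probability of flipping the shared value,
                 chosen to match correlations measured in a deterministic
                 test set (hypothetical values below).
    """
    n_inputs = sum(len(g) for g in groups)
    pattern = [0] * n_inputs
    for group in groups:
        shared = rng.randint(0, 1)          # common source for the group
        for i in group:
            flip = rng.random() < flip_rates[i]
            pattern[i] = shared ^ flip      # individually biased copy
    return pattern

# Inputs 0-3 are highly correlated (low flip rates); 4 and 5 are independent.
groups = [[0, 1, 2, 3], [4], [5]]
flip_rates = {0: 0.0, 1: 0.05, 2: 0.05, 3: 0.1, 4: 0.5, 5: 0.5}
print(correlated_pattern(groups, flip_rates))
```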
10. Dalmasso, Julien. "Compression de données de test pour architecture de systèmes intégrés basée sur bus ou réseaux et réduction des coûts de test." Thesis, Montpellier 2, 2010. http://www.theses.fr/2010MON20061/document.

Abstract:
While microelectronic systems become more and more complex, test costs have increased accordingly. Recent years have seen many works focused on test cost reduction through test data compression. However, these techniques only address individual digital circuits whose structural implementation (netlist) is fully known to the designer, so in practice they only address the testing of sub-blocks of a complete system. The goal of this PhD work was to provide a new test data compression solution for integrated circuits that takes into account the paradigm of systems-on-chip (SoC) built from pre-synthesized functions (IPs or cores). Two system-level testing methods using compression are then proposed for two different system architectures: the first concerns SoCs with the IEEE 1500 test architecture (a bus-based test access mechanism), the second concerns network-on-chip (NoC) based systems. Both techniques combine test scheduling of the system's cores with a horizontal compression technique to increase the parallelism of core testing at no extra hardware cost. Experimental results on benchmark systems-on-chip show gains of about 50% in the test time of the complete system.
11. Liu, Yingdi. "Design for test methods to reduce test set size." Diss., University of Iowa, 2018. https://ir.uiowa.edu/etd/6459.

Abstract:
With rapid development in semiconductor technology, today's large and complex integrated circuits require a large amount of test data to achieve the desired test coverage. Test cost, which is proportional to the size of the test set, can be reduced by generating a small number of highly effective test patterns. Automatic test pattern generators (ATPGs) generate effective deterministic test patterns for different fault models and can achieve high test coverage. To reduce ATPG-produced test set size, design-for-test (DFT) methods can be used to further improve the ATPG process and to apply the generated test patterns in more efficient ways. The first part of this dissertation introduces a test point insertion (TPI) technique that reduces the test pattern count and test data volume of a design by adding additional hardware called control points. These dedicated control points are inserted at internal nodes of the design to resolve large internal conflicts during ATPG, so that more faults can be detected by a single test pattern. To minimize the silicon area needed to implement these control points, we propose a method that reuses existing functional flip-flops as drivers of the control points, instead of inserting dedicated flip-flops. Experimental results on industrial designs indicate that the proposed technique can achieve significant test pattern reductions, similar to control points using dedicated flip-flops. The second part of this dissertation proposes a staggered ATPG scheme that produces deterministic, test-per-clock-based staggered test patterns by using dedicated compactor scan chains to capture additional test responses during the scan shift cycles used to load each test pattern into the regular scan cells. These compactor scan chains are formed by dedicated capture-per-cycle observation test points inserted at suitable locations in the design. By leveraging this new scan infrastructure, more compacted test patterns can be generated, and more faults can be systematically detected during the simulation process, thus reducing the overall test pattern count. To meet the stringent test requirements of in-system test (especially automotive test), a built-in self-test (BIST) approach, called Stellar BIST, is introduced in the last part of this dissertation. Stellar BIST employs a dedicated BIST infrastructure with additional on-system memory to store some parent test patterns (seeds). Derivative test patterns are obtained by complementing selected bits of the corresponding parent patterns through an on-chip Stellar BIST controller. A dedicated ATPG process is also proposed for generating a minimal set of stored test patterns and effective derivative patterns that require short test application time. Furthermore, the proposed scheme provides flexible trade-offs between stored test data volume and test application time.
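The parent/derivative mechanism attributed to Stellar BIST above lends itself to a compact illustration. The Python sketch below is an assumption-laden toy, not the dissertation's controller: child patterns are derived by complementing the bit positions selected by a few sparse masks, so only the parent seed and the masks need on-system storage.

```python
def derive_patterns(parent: int, masks: list[int], width: int) -> list[str]:
    """Derive test patterns from one stored parent seed.

    Each mask selects bit positions to complement (XOR), so the full
    pattern set never has to be stored explicitly.
    """
    children = [parent ^ m for m in masks]
    return [format(p, f"0{width}b") for p in [parent] + children]

# Hypothetical 16-bit parent seed and three sparse complement masks.
parent = 0b1010001110100101
masks = [0b0000000000000110, 0b0100000000010000, 0b0000101000000000]
for pat in derive_patterns(parent, masks, 16):
    print(pat)
```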
12. Willis, Stephen, and Bernd Langer. "A Dual Compression Ethernet Camera Solution for Airborne Applications." International Foundation for Telemetering, 2014. http://hdl.handle.net/10150/577522.

Abstract:
ITC/USA 2014 Conference Proceedings / The Fiftieth Annual International Telemetering Conference and Technical Exhibition / October 20-23, 2014 / Town and Country Resort & Convention Center, San Diego, CA
Camera technology is now ubiquitous, with smartphones, laptops, automotive and industrial applications frequently utilizing high-resolution imaging sensors. Increasingly there is a demand for high-definition cameras in the aerospace market; however, such cameras must meet several requirements that do not apply to average consumer use, including high reliability and ruggedization for harsh environments. A significant issue is managing the large volumes of data that one or more HD cameras produce. One method of addressing this issue is to use compression algorithms that reduce video bandwidth. This can be achieved with dedicated compression units or modules within data acquisition systems. For flight test applications it is important that data from cameras is available for telemetry and coherently synchronized while also being available for storage. Ideally, the data in the telemetry stream should be highly compressed to preserve downlink bandwidth, while the recorded data is lightly compressed to provide maximum quality for onboard/post-flight analysis. This paper discusses the requirements for airborne applications and presents an innovative solution using Ethernet cameras with integrated compression that output two streams of data. This removes the need for dedicated video and compression units while offering all the features of such units, including switching camera sources and optimized video streams.
13. Limprasert, Tawan. "Behaviour of soil, soil-cement and soil-cement-fiber under multiaxial test." Ohio University / OhioLINK, 1995. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1179260769.

14. Wegener, John A., and Gordon A. Blase. "An Onboard Processor for Flight Test Data Acquisition Systems." International Foundation for Telemetering, 2003. http://hdl.handle.net/10150/605592.

Abstract:
International Telemetering Conference Proceedings / October 20-23, 2003 / Riviera Hotel and Convention Center, Las Vegas, Nevada
Today's flight test programs are experiencing increasing demands for a greater number of high-rate digital parameters, competition for spectrum space, and a need for operational flexibility in flight test instrumentation, all of which must be met within schedule and budget constraints. To address these needs, the Boeing Integrated Defense Systems (IDS) Flight Test Instrumentation group in St. Louis has developed an onboard processing capability for use with airborne instrumentation data collection systems. This includes a first-generation Onboard Processor (OBP), which has been used successfully on the F/A-18E/F Super Hornet flight test program for four years and provides a throughput of 5 Mbytes/s and a processing capability of 480 Mflops (floating-point operations per second). Boeing IDS Flight Test is also currently developing a second-generation OBP, which features greatly enhanced input and output flexibility and algorithm programmability, and is targeted to provide a throughput of 160 Mbytes/s with a processing capability of 16 Gflops. This paper describes these onboard processing capabilities and their benefits.
15. Palmieri, Giulia. "Diagonal compression tests on masonry panels reinforced with composite materials FRCM." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017.

Abstract:
In recent years, due to many severe seismic events, it has become more and more important to understand the structural performance of masonry structures subjected to seismic actions, and structural reinforcement has become an important task in civil engineering. The development of innovative techniques for structural retrofitting represents a great opportunity to reduce the seismic vulnerability of masonry buildings. Besides traditional reinforcement techniques, new reinforcements have emerged, such as Fiber Reinforced Polymer (FRP) and Fiber Reinforced Cementitious Matrix (FRCM). The aim of this thesis is to study the shear strength of masonry panels, subjected to in-plane actions, reinforced with FRCM.
16. Li, Yun, Mårten Sjöström, Ulf Jennehag, Roger Olsson, and Sylvain Tourancheau. "Subjective Evaluation of an Edge-based Depth Image Compression Scheme." Mittuniversitetet, Avdelningen för informations- och kommunikationssystem, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-18539.

Abstract:
Multi-view three-dimensional television requires many views, which may be synthesized from two-dimensional images with accompanying pixel-wise depth information. This depth image, which typically consists of smooth areas and sharp transitions at object borders, must be consistent with the acquired scene in order for synthesized views to be of good quality. We have previously proposed a depth image coding scheme that preserves significant edges and encodes the smooth areas between them. An objective evaluation considering the structural similarity (SSIM) index for synthesized views demonstrated an advantage of the proposed scheme over the High Efficiency Video Coding (HEVC) intra mode in certain cases. However, there were some discrepancies between the outcomes of the objective evaluation and of our visual inspection, which motivated this study using subjective tests. The test was conducted according to the ITU-R BT.500-13 recommendation with stimulus-comparison methods. The results of the subjective test showed that the proposed scheme performs slightly better than HEVC, with statistical significance, at the majority of the tested bit rates for the given contents.
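For readers unfamiliar with the objective metric mentioned above, this hedged Python sketch shows how an SSIM comparison of a synthesized view against its reference could be computed with scikit-image; the file names are placeholders, not data from the paper.

```python
# Illustrative SSIM comparison of a synthesized view against its reference.
# File names below are placeholders for RGB images, not from the paper.
from skimage import io, img_as_float
from skimage.color import rgb2gray
from skimage.metrics import structural_similarity

reference = rgb2gray(img_as_float(io.imread("view_reference.png")))
synthesized = rgb2gray(img_as_float(io.imread("view_synthesized.png")))

# Images are floats in [0, 1] after conversion, hence data_range=1.0.
score = structural_similarity(reference, synthesized, data_range=1.0)
print(f"SSIM = {score:.4f}")  # 1.0 means the views are identical
```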
17. Johnson, David Page. "A study of tension, compression, and shear test methods for advanced composites." Thesis, Virginia Tech, 1991. http://hdl.handle.net/10919/42129.

Abstract:
A study of the literature pertaining to test methods for advanced composite materials has been carried out. Several test methods were discussed and compared for each of three areas of interest: uniaxial tension, uniaxial compression, and in-plane shear. Test methods were selected for tension, compression, and shear, and guidelines were set for the entry of material property data into a comprehensive mechanical property database being developed by Virginia Tech's Center for Composite Materials and Structures (CCMS). Based on the findings, recommendations for future work were made.
Master of Science

18. Piao, Kun. "An Elevated-Temperature Tension-Compression Test and Its Application to Mg AZ31B." The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1316096630.

19. Amin, Diyar. "Triaxial testing of lime/cement stabilized clay: A comparison with unconfined compression tests." Thesis, KTH, Jord- och bergmekanik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-160626.

Abstract:
This master's thesis presents results from a laboratory study on a clay from Enköping stabilized with lime and cement. Isotropically consolidated undrained active triaxial tests were performed on samples and compared with unconfined compression tests performed on samples from the same mixing batch. The two methods gave equivalent values of the evaluated undrained shear strength. However, the modulus of elasticity was much higher for the triaxial tests: the ratio between the secant modulus and the undrained shear strength was in the range 112-333 for the triaxial tests, compared with 44-146 for the unconfined compression tests. No pattern was distinguishable, as the ratio between the two test types varied between 1.0 and 3.5. A lower and a higher back pressure were used during the triaxial testing; unlike in earlier studies, both back pressures saturated the samples. The results show that the back pressure has little effect on the results, provided the sample has been fully saturated. In addition, passive (extension) triaxial tests were performed as isotropically consolidated undrained tests, using two different methods during the shearing phase: in the first, the axial stress was decreased while the radial stress was kept constant; in the second, the radial stress was increased while the axial stress was kept constant. The undrained shear strengths were compared with lime/cement column penetration tests in the field, which consistently showed higher shear strength in the field than in the laboratory. Shear strengths and moduli of elasticity were also measured after different curing times using unconfined compression tests.
20. Dogra, Jasween. "The development of a new compression test specimen design for thick laminate composites." Thesis, Imperial College London, 2011. http://hdl.handle.net/10044/1/7121.

Abstract:
A new specimen design for determining the compression strength of thick unidirectional laminate composites has been developed using finite element simulations and validated by experimental testing. The computational models included parts of the testing fixture. The materials used for the experiments were carbon fibre/epoxy T300/914 from Hexcel Composites and IM7/8552. An understanding has been developed of why, with the standard parallel-sided specimen design and the ICSTM fixture, specimens made from laminates thicker than 2 mm do not fail in an acceptable way. Initially, simulation and experimental parametric studies were carried out to investigate the effects of loading and design conditions on the fixture and specimen, in order to change the stress distribution in the 2 mm thick, parallel-sided, 10 mm x 10 mm gauge section specimen. In addition, in order to optimise the specimen itself, different adhesives for bonding end tabs to the laminate were investigated, as were the end tab design and the material used in their manufacture. Subsequent simulations showed that the use of an extended and waisted gauge length, of either circular or s-shaped profile, caused thick laminate specimens to fail close to the centre of the gauge length, with a predicted strength similar to that measured for a 2 mm thick, parallel-sided specimen of the optimised design. Experimental compression strength data from thick laminate specimens with the circular and s-shaped profiles machined into the gauge section validated the finite element results; the strengths achieved were almost identical to those for the 2 mm thick laminates. Results from the analysis of the standard design and some preliminary work on the waisted design were presented at a conference [52]. Results for further work on the waisted design and experimental details have been reported in [51] and [75].
21. Kumar, Amit. "Generation of compact test sets and a design for the generation of tests with low switching activity." Diss., University of Iowa, 2014. https://ir.uiowa.edu/etd/1476.

Abstract:
Test generation procedures for large VLSI designs are required to achieve close to 100% fault coverage using a small number of tests. They must also accommodate the on-chip test compression circuits that are widely used in modern designs. To obtain small test sets, one can use extra hardware, such as test points, or software techniques. An important aspect impacting test generation is the number of specified positions, which determines how easily test cubes can be encoded when using test compression logic. Fortuitous detection, i.e., generating tests so that they also facilitate detection of faults not yet targeted, is another important goal for test generation procedures. First, we consider the generation of compact test sets for designs using on-chip test compression logic. We introduce two new measures to guide automatic test pattern generation (ATPG) in balancing these two contradictory requirements of fortuitous detection and number of specifications. One of the new measures is meant to facilitate detection of yet-undetected faults, and its value is periodically updated. The second measure reduces the number of specified positions, which is crucial when using high compression. Additionally, we introduce a way to choose randomly between the two measures. We also propose an ATPG methodology tailored for BIST-ready designs with X-bounding logic and test points; X-bounding and test points have a significant impact on test data compression by reducing the number of specified positions. We propose a new ATPG guidance mechanism that balances reduced specifications in BIST-ready designs against facilitating detection of undetected faults. We also found that compact test generation for BIST-ready designs is influenced by the order in which faults are targeted, and we propose a new fault ordering technique based on fault location in a fanout-free region (FFR). Transition faults are difficult to test and often result in longer test lengths; we propose a new fault ordering technique based on test enumeration, and this ordering technique and a new guidance approach are also applied to transition faults. Test set sizes were reduced significantly for both stuck-at and transition fault models. In addition to reducing data volume, test time, and test pin counts, test compression schemes have been used successfully to limit test power dissipation. Indisputably, toggling of the scan cells in the scan chains that are universally used to facilitate testing of industrial designs can consume much more power than a circuit is rated for. Balancing test set size against power consumption in a given design is therefore a challenge. We propose a new design-for-test (DFT) scheme that deploys an on-chip power-aware test data decompressor, a corresponding test cube encoding method, and a compression-constrained ATPG that allows loading scan chains with patterns having low transition counts, while encoding a significant number of the specified bits produced by ATPG in a compression-friendly manner. Moreover, the new scheme avoids periods of elevated toggling in scan chains and reduces scan unload switching activity due to the unique test stimuli produced by the new technique, leading to a significantly reduced power envelope for the entire circuit under test.
22. Rohrbach, Thomas Juhl. "Investigation of Design, Manufacture, Analysis, and Test of a Composite Connecting Rod Under Compression." DigitalCommons@CalPoly, 2019. https://digitalcommons.calpoly.edu/theses/1996.

Abstract:
Composite materials hold great potential for the replacement of traditional materials in machines used on a daily basis. One such example is within an engine block assembly, where massive components inherently reduce the efficiency of the system they constitute. By replacing metal elements such as connecting rods, cylinder caps, or a crankshaft with composite alternatives, a significant increase in performance may be achieved with respect to mechanical strength, thermal stability, and durability, while also reducing mass. Exploration of this technology applied to a connecting rod geometry was investigated through a combination of process development, manufacturing, numerical analysis, and testing. Process development explored composite material options based on experimental characterization, fabrication, and machining methods. Finite element analysis provided insight into model and data accuracy, as well as a basis for study of a unidirectional composite I-beam geometry. Destructive testing of the composite connecting rods provided data for a strength-to-weight ratio comparison with the original steel component. The composite connecting rods exhibited weight savings of 15%-17% relative to the steel component. The rod made of woven composite material showed an elastic modulus of 68.1 Msi in its linear behavior before failure, a higher stiffness than the steel rod tested. Although the failure strengths were 25% below the required design load, the calculated strength-to-weight ratios favored the composite alternatives.
23. Moghaddassian Shahidi, Arash. "The development of test procedures for controlling the quality of the manufacture of engineered compression stockings." Thesis, University of Manchester, 2010. https://www.research.manchester.ac.uk/portal/en/theses/the-development-of-test-procedures-for-controlling-the-quality-of-the-manufacture-of-engineered-compression-stockings(64b8320b-fcfe-4c3f-95e7-ade019527703).html.

Abstract:
A new technology platform known as 'Scan2Knit' was invented in the William Lee Innovation Centre of the University of Manchester to engineer and manufacture compression stockings for the treatment of venous disease, in a Wellcome Trust funded research project. The intellectual property of this technology has been licensed for commercial exploitation by the University. The graduated pressure profile that is necessary for the treatment of venous ulcers is generated with the engineered compression stocking, and depends on the stitch length of the knitted fabric structure and an empirical pressure profile database. The 'Scan2Knit' technology was developed to produce an engineered compression stocking on an 18-gauge Stoll CMS computerised flat-bed knitting machine, utilising a microprocessor-controlled precision positive yarn delivery system to guarantee the delivery of a predetermined stitch length to the knitting needles. However, the licensee of the technology has decided to manufacture engineered compression stockings on 14-gauge Stoll CMS flat-bed knitting machines instead of 18-gauge machines, due to commercial advantages. Therefore, the main aim of this work is to investigate the transfer of the 'Scan2Knit' technology onto a coarser-gauge manufacturing platform. The investigation focuses on two vital requirements of the 'Scan2Knit' technology: the analysis of the performance of the precision positive yarn delivery system on the new production platform, and the evaluation of the functionality of the knitted structure produced with it. The objectives of the research are to develop test procedures for evaluating the three-dimensional pressure characteristic of compression stockings manufactured on the new production platform, and the performance of the precision yarn delivery system. To produce engineered compression stockings with the 'Scan2Knit' technology, it is essential to determine the interface pressure that the knitted structure would impart on a particular radius of curvature at a predetermined strain percentage, which is obtained from an empirical database. Hence, a key objective of this study is to develop an efficient and user-friendly methodology for generating the empirical pressure profile database required to engineer the interface pressure profile of a compression stocking. It is envisioned that manufacturers of engineered compression stockings will benefit from the knowledge generated within this research and develop their own quality assurance procedures to guarantee that the compression stockings deliver the graduated pressure profile prescribed by the clinician for the treatment of venous ulcers.
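The interface pressure discussed above is commonly estimated in the compression-garment literature from the fabric tension and the limb's radius of curvature via Laplace's law; the sketch below illustrates that relation with made-up numbers and is not part of the thesis's empirical database.

```python
import math

def interface_pressure_mmhg(tension_n_per_m: float, circumference_m: float) -> float:
    """Laplace's law estimate of the pressure a stretched fabric exerts on a limb.

    P = T / r, with fabric tension per unit width T (N/m) and limb radius
    r = circumference / (2*pi). Returned in mmHg (1 mmHg = 133.322 Pa).
    """
    radius_m = circumference_m / (2 * math.pi)
    return (tension_n_per_m / radius_m) / 133.322

# Made-up example: the same fabric tension yields a graduated profile,
# with higher pressure at the slimmer ankle than at the wider calf.
for site, circumference in [("ankle", 0.22), ("calf", 0.36)]:
    print(site, round(interface_pressure_mmhg(100.0, circumference), 1), "mmHg")
```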
24. Molina Villegas, Alejandro. "Compression automatique de phrases : une étude vers la génération de résumés." PhD thesis, Université d'Avignon, 2013. http://tel.archives-ouvertes.fr/tel-00998924.

Abstract:
This study presents a new approach for automatic summary generation, one of the main challenges in Natural Language Processing. Although the topic has been studied for half a century, it remains current, because no one has yet managed to automatically create summaries comparable in quality with those produced by humans. In this context, research on automatic summarization has split into two broad categories: extractive summarization and abstractive summarization. In the former, sentences are ranked so that the best ones make up the final summary. However, the sentences selected for the summary often carry secondary information, so a finer analysis is necessary. We propose a method for automatic sentence compression based on the elimination of fragments within sentences. From an annotated corpus, we created a linear model to predict the deletion of these fragments as a function of simple features. Our method takes three principles into account: relevance of content (informativeness); quality of content (grammaticality); and length (the compression ratio). To measure the informativeness of fragments, we use a technique inspired by statistical physics: textual energy. For grammaticality, we propose the use of probabilistic language models. The proposed method is capable of generating correct summaries in Spanish. The results of this study raise several interesting issues regarding text summarization by sentence compression. We observed that, in general, the task has a high degree of subjectivity: there is no single optimal compression but several possible correct compressions. We therefore consider that the results of this study open the discussion on the subjectivity of informativeness and its influence on automatic summarization.
25. Lagarde, Guillaume. "Contributions to arithmetic complexity and compression." Thesis, Sorbonne Paris Cité, 2018. http://www.theses.fr/2018USPCC192/document.

Abstract:
This thesis explores two territories of computer science: complexity and compression. More precisely, in a first part, we investigate the power of non-commutative arithmetic circuits, which compute multivariate non-commutative polynomials. For that, we introduce various models of computation that are restricted in the way they are allowed to compute monomials. These models generalize previous ones that have been widely studied, such as algebraic branching programs. The results are of three different types. First, we give strong lower bounds on the number of arithmetic operations needed to compute some polynomials, such as the determinant or the permanent. Second, we design a deterministic polynomial-time algorithm to solve the white-box polynomial identity testing problem. Third, we exhibit a link between automata theory and non-commutative arithmetic circuits that allows us to derive some old and new tight lower bounds for some classes of non-commutative circuits, using a measure based on the rank of a so-called Hankel matrix. A second part is concerned with the analysis of the data compression algorithm called Lempel-Ziv. Although this algorithm is widely used in practice, we know little about its stability. Our main result is to show that an infinite word compressible by LZ'78 can become incompressible by adding a single bit in front of it, thus closing a question proposed by Jack Lutz in the late 90s under the name "one-bit catastrophe". We also give tight bounds on the maximal possible variation between the compression ratio of a finite word and its perturbation, when one bit is added in front of it.
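To make the one-bit catastrophe question concrete, here is a toy LZ78 parser in Python (my illustration, not material from the thesis): the compressed size is governed by the number of dictionary phrases, and prepending a single symbol changes the whole parse.

```python
def lz78_phrases(word: str) -> list[str]:
    """Parse a word into LZ78 phrases: each phrase is the longest
    previously seen phrase extended by one symbol. Fewer phrases
    means better compression."""
    dictionary = {""}
    phrases, current = [], ""
    for symbol in word:
        if current + symbol in dictionary:
            current += symbol          # keep extending a known phrase
        else:
            phrases.append(current + symbol)
            dictionary.add(current + symbol)
            current = ""
    if current:
        phrases.append(current)        # leftover partial phrase
    return phrases

word = "0" * 32                         # a highly compressible word
print(len(lz78_phrases(word)))          # phrase count for the word itself
print(len(lz78_phrases("1" + word)))    # parse changes when one bit is prepended
```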
26. Koray, Erge. "Numerical and Experimental Analysis of Indentation." Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/2/12605953/index.pdf.

Abstract:
Indentation tests, with simultaneous measurement of indentation depth and force, are widely used for determining material properties. In this study, a numerical and experimental investigation of force-indentation measurements is presented. For indentation tests on anisotropic metals, a novel indenter which is not self-similar is used, with three transducers to measure the displacements. It is seen that, in order to achieve high repeatability and accuracy in the tests, the workpiece and indenter parameters are of crucial importance. These parameters are analyzed by finite element methods. Ideal dimensions of the workpiece are determined. It is shown that plane-strain conditions can only be achieved by embedded indentations. The effects of surface quality and clamping on repeatability are investigated. It is shown that surface treatments have significant effects on the results, and that clamping increases repeatability drastically. Moreover, indentation tests are conducted to verify the results of the numerical simulations. The effect of anisotropy on the force-displacement curves is clearly observed.
27. Fincan, Mustafa. "Assessing Viscoelastic Properties of Polydimethylsiloxane (PDMS) Using Loading and Unloading of the Macroscopic Compression Test." Scholar Commons, 2015. https://scholarcommons.usf.edu/etd/5480.

Abstract:
Polydimethylsiloxane (PDMS) mechanical properties were measured using a custom-built compression test device. The PDMS elastic modulus can be varied with the ratio of elastomer base to curing agent, i.e., by changing the cross-linking density. PDMS samples with different crosslink densities were characterized in terms of their elastic modulus; in this project, samples with base/curing agent ratios ranging from 5:1 to 20:1 were tested. The elastic modulus varied with the amount of crosslinker, ranging from 0.8 MPa to 4.44 MPa. The compression device was modified by adding digital displacement gauges to measure the lateral strain of the sample, which allowed true stress-strain data to be obtained. Since the unloading behavior of the viscoelastic PDMS differs from its loading behavior, the unloading response was used to assess the viscoelastic properties of the polymer. The thesis describes a simple method for measuring the mechanical properties of soft polymeric materials.
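As an illustration of why the lateral-strain measurement matters, the following sketch (with made-up readings, not the thesis's data pipeline) converts load and measured lateral expansion of a cylindrical sample into true stress by dividing by the actual, rather than the initial, cross-section.

```python
import math

def true_stress_mpa(force_n: float, initial_diameter_mm: float,
                    lateral_strain: float) -> float:
    """True compressive stress of a cylindrical sample.

    The measured lateral strain gives the current diameter
    d = d0 * (1 + lateral_strain), so the load is divided by the
    actual cross-section rather than the initial one.
    """
    d_mm = initial_diameter_mm * (1.0 + lateral_strain)
    area_mm2 = math.pi * d_mm**2 / 4.0
    return force_n / area_mm2          # N / mm^2 == MPa

# Made-up reading: 50 N on a 10 mm diameter sample that has bulged by 2%.
print(round(true_stress_mpa(50.0, 10.0, 0.02), 3), "MPa")
```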
28. ITO, Hideo, and Gang ZENG. "Low-Cost IP Core Test Using Tri-Template-Based Codes." Institute of Electronics, Information and Communication Engineers, 2007. http://hdl.handle.net/2237/15029.

29. Yahyaoui, Imen. "Contribution au suivi par émission acoustique de l'endommagement des structures multi-matériaux à base de bois." Thesis, Toulouse 3, 2017. http://www.theses.fr/2017TOU30266/document.

Abstract:
The application of wood-based multi-material hybrid structures is increasing day by day. These structures are both original and mechanically promising, but their use is still recent, which results in a certain lack of knowledge about their behavior, in particular regarding the presence of damage that may lead to the degradation of their mechanical properties. In this context, acoustic emission (AE) may be an appropriate non-destructive method for the inspection and monitoring of these structures. In order to characterize the evolution of damage in multi-material structures, it is essential to first characterize the damage of each constituent material. This work presents the first part of that project: the characterization by acoustic emission of damage in the main structural material, wood. One of the difficulties associated with acoustic emission monitoring of wood is the variability and complexity of its response, because the AE response depends on the structure of the wood species and on the loading condition. In this study, under different mechanical loadings (standard tensile, compression, and bending tests), damage in three wood species (Douglas fir, silver fir, and poplar) is characterized by the acoustic emission technique. The results show that acoustic emission is effective for the early detection of damage in wood. It also allows the damage scenarios to be refined and the acoustic signatures of different mechanisms to be differentiated by means of unsupervised pattern recognition algorithms. Moreover, the results confirm that the acoustic response depends on the wood species as well as on the loading condition.
30. Bicer, Gokhan. "Experimental and Numerical Analysis of Compression on a Forging Press." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612155/index.pdf.

Abstract:
Forging is a metal forming process which involves non-linear deformations. Finite element and finite volume software programs are commonly used to simulate the process, and these simulations require material properties. However, stress-strain relations of the materials at some elevated temperatures are not available in the material libraries of the relevant software programs. In this study, stress-strain curves have been obtained by applying the Cook and Larke Simple Compression Test to AISI 1045 steel at several temperatures on a forging press with a capacity of 1000 tons. The stress-strain curves have also been determined by simulating the process in commercial finite element software, and the experimental results are consistent with the numerical ones. A modular die set has been designed and manufactured to conduct the Cook and Larke Simple Compression Test. It has been shown that a forging press with a data acquisition system can be used as material testing equipment to obtain stress-strain curves.
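For context, a simple compression test recovers a flow curve from force-stroke data by assuming constant volume; the Python sketch below shows that standard conversion with made-up readings. It is illustrative only: the Cook and Larke procedure additionally repeats the test on several specimen geometries and extrapolates to remove friction effects.

```python
import math

def flow_curve(samples, d0_mm: float, h0_mm: float):
    """Convert (force [kN], height [mm]) pairs from a simple compression
    test into (true strain, true stress [MPa]) points.

    Assumes plastic incompressibility: A * h = A0 * h0, i.e. the
    cross-section grows as the specimen is squashed.
    """
    a0 = math.pi * d0_mm**2 / 4.0
    points = []
    for force_kn, h_mm in samples:
        area = a0 * h0_mm / h_mm            # volume constancy
        strain = math.log(h0_mm / h_mm)     # true (logarithmic) strain
        stress = force_kn * 1000.0 / area   # N / mm^2 == MPa
        points.append((strain, stress))
    return points

# Made-up readings for a 20 mm diameter, 30 mm tall billet.
for eps, sigma in flow_curve([(60, 28), (95, 24), (130, 20)], 20.0, 30.0):
    print(f"strain {eps:.3f} -> stress {sigma:.0f} MPa")
```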
31. Gidrão, Salmen Saleme. "Avaliação experimental do grau de confiabilidade dos ensaios à compressão do concreto efetivados em laboratórios." Universidade Federal de Uberlândia, 2014. https://repositorio.ufu.br/handle/123456789/14209.

Abstract:
The measurements of a physical quantity invariably involve errors and uncertainties. The results of the testing of compressive strength of concrete are not exempt from this rule. Measure is an act of comparison whose degree of accuracy can depend on instruments, operators and the measurement process itself. In this work were analyzed questions that involving the intervening factors of the quality of the concrete compressive results, and, been tested the trustworthiness with which these assays have been produced by several laboratories. The focus is on the measurement errors. Your organization has involved a conceptual review of \"quality\" and its relation to the constructions in concrete; sequentially, has organized an application of tests to verify the trustworthiness of their results through two complementary ways. The first, to analyze the dispersion of results by different methods; and the main form of the reference test, established from a result set as the default method; and the other in order to characterize the types of errors produced. Their results, irrefutable regarding the methodology used for the production of his bodies of evidence and significant for the strategy used for data searching allowed to identify an undesirable state for conditions that defined the level of its reliability. Classified as inconsistent, a considerable number of laboratories evaluated in three different stages of experimental verification, presented as the results of their measurement, inadequate numbers to the strength of concrete, not meeting the expectations of desirable accuracy for this important procedure of quality control production.
Master in Civil Engineering
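The reliability check described above can be illustrated with a small dispersion analysis: compare each laboratory's reported strength against a fixed reference value and an acceptance band. The reference value, tolerance and laboratory results below are assumptions for illustration only:

# Minimal sketch of a dispersion/reliability analysis across laboratories.
import statistics

reference_mpa = 30.0          # strength fixed as the reference standard
tolerance = 0.10              # +/-10% acceptance band (assumed)
lab_results = {"Lab A": 29.4, "Lab B": 31.1, "Lab C": 24.8, "Lab D": 33.9}

values = list(lab_results.values())
mean = statistics.mean(values)
cv = statistics.stdev(values) / mean          # coefficient of variation
print(f"mean = {mean:.1f} MPa, CV = {cv:.1%}")

for lab, fc in lab_results.items():
    ok = abs(fc - reference_mpa) / reference_mpa <= tolerance
    print(f"{lab}: {fc:.1f} MPa -> {'consistent' if ok else 'inconsistent'}")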
APA, Harvard, Vancouver, ISO, and other styles
32

Assaf, Mansour Hanna. "Digital core output test data compression architecture based on switching theory concepts: Model implementation and analysis." Thesis, University of Ottawa (Canada), 2003. http://hdl.handle.net/10393/29008.

Full text
Abstract:
The design of space-efficient support hardware for built-in self-test (BIST) is of critical importance in the design and manufacture of VLSI circuits. This dissertation reports new space compression techniques, of particular use in digital core-based systems, that facilitate designing compression networks with compact or pseudorandom test sets, with the objective of minimizing the storage requirements for the module under test (MUT) while maintaining the fault coverage information. The suggested techniques take advantage of well-known concepts of conventional switching theory, particularly the cover table and frequency ordering commonly utilized in the minimization of switching functions, together with Hamming distance, sequence weights, and derived sequences, in the selection of specific gates for merging an arbitrary number of output bit streams from the MUT. The outputs of the space compactor may eventually be fed into a time compressor (viz. a syndrome counter) to derive the MUT signatures. The approaches developed for designing zero-aliasing space compressors, which additionally utilize concepts of strong and weak compatibilities of response data outputs, are novel in the sense that zero-aliasing is achieved without modification of the MUT, while maximal compaction is achieved in most cases in reasonable time using simple heuristics. The techniques proposed in the dissertation guarantee a simple design with high or full fault coverage for single stuck-line faults, low CPU simulation time, and acceptable area overhead. Design algorithms are proposed, and the simplicity and ease of their implementation are demonstrated with numerous examples. Specifically, extensive simulation runs on ISCAS 85 combinational and ISCAS 89 full-scan sequential benchmark circuits with the FSIM, ATALANTA, HOPE, and COMPACTEST programs confirm the usefulness of the suggested approaches under conditions of both stochastic independence and dependence of single and multiple line errors. A performance comparison of the designed space compressors against conventional linear parity-tree space compactors, in which zero-aliasing is not realized, is also presented; it demonstrates an improved tradeoff between fault coverage and the MUT resources consumed compared with existing designs. For the zero-aliasing compactors, the advantages are clearly evident.
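As a point of reference for the linear parity-tree compactors used as the benchmark above, the sketch below XOR-merges several MUT output streams into one; the streams are assumed values, and the closing comment notes the aliasing effect that zero-aliasing designs are meant to avoid:

# Minimal sketch of a linear (XOR parity-tree) space compactor: m output
# streams from the module under test are folded into one stream.
from functools import reduce

def parity_compact(streams):
    """XOR-merge m equal-length output bit streams into one stream."""
    return [reduce(lambda a, b: a ^ b, bits) for bits in zip(*streams)]

good = [[0, 1, 1, 0], [1, 1, 0, 0], [0, 0, 1, 1]]   # fault-free outputs
bad  = [[0, 1, 1, 0], [1, 0, 0, 0], [0, 0, 1, 1]]   # one erroneous bit

print(parity_compact(good))   # reference signature stream
print(parity_compact(bad))    # single error flips the parity (detected)
# Aliasing: an even number of simultaneous errors in one time slot cancels
# in the XOR and escapes detection, which is what zero-aliasing designs avoid.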
APA, Harvard, Vancouver, ISO, and other styles
33

Lemmon, Heber. "Methods for reduced platen compression (RPC) test specimen cutting locations using micro-CT and planar radiographs." Texas A&M University, 2003. http://hdl.handle.net/1969/310.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Lai, Hung-Kuo, and 賴弘國. "Origami-Based Test Data Compression." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/w2eks3.

Full text
Abstract:
Master's thesis
Yuan Ze University
Department of Computer Science and Engineering
107
With the advance of VLSI technology, increasing chip density and circuit size have made circuit testing more difficult and have raised the test cost, which includes both testing time and test power consumption. Testing time, in particular, is closely tied to the bandwidth of the automatic test equipment (ATE). The proposed approach is based on pattern run-length coding to reduce test data volume as well as testing cost. A reverse bit is used to invert the bits of the reference pattern, increasing the number of compatible patterns, and extend-bits are used to combine more compatible codewords and further reduce test data volume. Experimental results show that the proposed approach reduces test data volume significantly.
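A minimal sketch of run-length coding with an inversion bit, in the spirit of the approach described (the thesis's exact codeword format, block size and extend-bit mechanism are not reproduced here):

# Consecutive fixed-size blocks that match the current reference pattern
# directly (flag 0) or bitwise inverted (flag 1) join the same run.
def invert(p):
    return "".join("10"[int(c)] for c in p)

def encode(bits, block=4):
    blocks = [bits[i:i + block] for i in range(0, len(bits), block)]
    runs = []                      # each run: [reference, inversion flags]
    for b in blocks:
        if runs and b == runs[-1][0]:
            runs[-1][1].append(0)
        elif runs and b == invert(runs[-1][0]):
            runs[-1][1].append(1)
        else:
            runs.append([b, [0]])  # new run; first block is the reference
    return runs

def decode(runs):
    return "".join(r if f == 0 else invert(r) for r, flags in runs for f in flags)

data = "1010" "1010" "0101" "1100"
assert decode(encode(data)) == data
print(encode(data))   # [['1010', [0, 0, 1]], ['1100', [0]]]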
APA, Harvard, Vancouver, ISO, and other styles
35

Liao, Yu-De, and 廖育德. "Reducing Test Power in Linear Test Data Compression Schemes." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/44315923360483562148.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Chang, Chih-Ming, and 張志銘. "Test Pattern Compression for Probabilistic Circuit." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/py9z26.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Electronics Engineering
105
Probabilistic circuits are very attractive for next-generation ultra-low-power designs. Testing probabilistic circuits is important because a defect in a probabilistic circuit may increase its erroneous probability. However, no suitable fault model or test generation/compression technique exists for probabilistic circuits yet. In this paper, a probabilistic fault model is proposed for probabilistic circuits; the number of faults is linear in the gate count. A statistical method is proposed to calculate the repetitions needed for each test pattern, and an integer linear programming (ILP) method is presented to minimize the total test length while keeping the same fault coverage. Experiments on ISCAS'89 benchmark circuits show that the total test length of the proposed ILP method is 2.77 times shorter than that of a greedy method.
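As a simplified stand-in for the statistical repetition calculation mentioned above (the thesis's actual method may differ), one can ask how many repetitions are needed to observe at least one erroneous response with a given confidence when a fault corrupts each application with probability p:

# Repetitions n such that 1 - (1 - p)**n >= C for confidence C.
import math

def repetitions(p, confidence=0.99):
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p))

for p in (0.5, 0.1, 0.01):
    print(f"error prob {p}: {repetitions(p)} repetitions for 99% confidence")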
APA, Harvard, Vancouver, ISO, and other styles
37

Chang-WenChen and 陳昶聞. "Test Compression with Single-Input Data Spreader and Multiple Test Sessions." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/ka6k68.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

CHANG, YING-HSING, and 張瑛興. "Axial Compression Test of Hollow Circular Column." Thesis, 1999. http://ndltd.ncl.edu.tw/handle/78002616800479541831.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Jhuang, Jin-Kun, and 莊進琨. "Segmented LFSR Reseeding For Test Data Compression." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/43666344224392480070.

Full text
Abstract:
Master's thesis
Yuan Ze University
Department of Computer Science and Engineering
104
Test data compression is a popular topic in VLSI testing and a key factor in determining the quality of the final test results. Built-in self-test (BIST) is a technique by which a circuit can test itself and verify its correctness without any external device, resulting in a reduction of test data volume. In this thesis, based on a single linear feedback shift register (LFSR) architecture, we propose to achieve a better test data compression ratio by changing the polynomial and segmenting the test cube. Experimental results show that the method achieves a high compression ratio on the six larger ISCAS'89 benchmark circuits.
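For context, the sketch below shows the decompression side of an LFSR-based scheme: a short stored seed expands on chip into a long pseudorandom scan sequence (in reseeding schemes, seeds are computed by solving linear equations for the care bits of each test cube). The register width and feedback taps here are illustrative assumptions:

# Minimal Fibonacci-style LFSR expanding a seed into scan data.
def lfsr_expand(seed, taps, length):
    """'seed' is the initial register contents, 'taps' are feedback bit
    positions, and the output is 'length' bits of expanded data."""
    state = list(seed)
    out = []
    for _ in range(length):
        out.append(state[-1])            # shift out one bit per cycle
        fb = 0
        for t in taps:
            fb ^= state[t]               # XOR of the tapped positions
        state = [fb] + state[:-1]        # shift in the feedback bit
    return out

seed = [1, 0, 0, 1, 0, 1, 1, 0]          # 8-bit seed stored on the tester
pattern = lfsr_expand(seed, taps=(7, 5, 4, 3), length=32)
print("".join(map(str, pattern)))        # expanded scan data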
APA, Harvard, Vancouver, ISO, and other styles
40

Lee, Lung-Jen, and 李隆仁. "Test Data Compression for Scan-Based Designs." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/20236635742550782954.

Full text
Abstract:
Doctoral dissertation
Yuan Ze University
Department of Computer Science and Engineering
98
A single-chip SOC design consists of a number of modules and intellectual property (IP) cores comprising a massive number of transistors. Although increasing integration produces robust designs, it also introduces many more faults; detecting them requires a large amount of test data and longer scan chains, which in turn lengthens testing time because longer patterns must be stored for an SOC. This dissertation investigates strategies for test data compression and proposes six compression techniques, which fall into three categories: code-based schemes, linear-decompressor-based schemes, and broadcast schemes. For the code-based scheme, we propose three compression methods. The first encodes runs of variable-length patterns using multiple dimensions of pattern information. The second considers compatible (or inversely compatible) patterns both inside a single segment and across multiple segments to improve compression; experimental results for the large ISCAS'89 benchmark circuits show that this method achieves up to 67.64% average compression ratio. The third is based on the observation that in a well-sorted test sequence the Hamming distance between consecutive test vectors is very small: if the position of each difference bit is recorded, each test vector can be reconstructed from its predecessor. Experimental results show that good compression is achieved, along with good adaptability to industry-scale circuits. For the linear-decompressor-based scheme, we combine multiple LFSRs to reduce test data volume and test power consumption simultaneously; results show an average reduction of up to 87.12% in shifting power and an 80.92% average compression ratio for the larger ISCAS'89 benchmarks. For the broadcast scheme, we propose two efficient methods. The first compresses test data by repeatedly merging compatible columns in the test set, so that test data in the corresponding scan cells can be shared; experimental results for the large ISCAS'89 benchmark circuits show reductions of 60.39% in test data volume and 59.75% in test application time. The second, titled "Cascaded Broadcasting for Test Data Compression", repeatedly broadcasts the compressed test data to compatible scan chains with a gradually reduced scope according to a compatibility analysis among the scan chains. Experimental results demonstrate that this method is superior to recently proposed alternatives.
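The difference-bit observation behind the third code-based method can be sketched as follows; the test vectors are assumed values, and only the positions at which consecutive vectors differ are stored:

# Encode each vector (after the first) as the bit positions where it
# differs from its predecessor; decoding flips those bits back in.
def encode(vectors):
    diffs, prev = [vectors[0]], vectors[0]
    for v in vectors[1:]:
        diffs.append([i for i in range(len(v)) if v[i] != prev[i]])
        prev = v
    return diffs

def decode(diffs):
    out = [diffs[0]]
    for positions in diffs[1:]:
        bits = list(out[-1])
        for i in positions:
            bits[i] = "0" if bits[i] == "1" else "1"
        out.append("".join(bits))
    return out

tests = ["11010010", "11010110", "11000110", "01000110"]
assert decode(encode(tests)) == tests
print(encode(tests))   # ['11010010', [5], [3], [0]]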
APA, Harvard, Vancouver, ISO, and other styles
41

Yang, Kai-Chieh, and 楊凱傑. "ATPG and Test Compression for Probabilistic Circuits." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/2vm53r.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Electronics Engineering
106
Probabilistic circuits are gaining importance in next-generation ultra-low-power computing and quantum computing. Unlike testing deterministic circuits, where each test pattern is applied only once, testing probabilistic circuits requires multiple repetitions of each test pattern. Previous test pattern selection techniques require long test lengths and are therefore time-consuming. In this thesis, we propose an ATPG algorithm for probabilistic circuits that uses specialized activation and propagation methods to reduce pattern repetitions, and accumulates detection contributions across different patterns to reduce repetitions further. Experiments on ISCAS'89 benchmark circuits show that the total test length of the proposed method is 34% shorter than that of a greedy method [Chang 17].
APA, Harvard, Vancouver, ISO, and other styles
42

Haldigundi, Tapan. "Compression test of aluminium at high temperature." Thesis, 2012. http://ethesis.nitrkl.ac.in/3463/1/FinalThesis.TAPAN_.pdf.

Full text
Abstract:
Compression tests of an aluminum alloy at high temperature were carried out on a universal testing machine at temperatures ranging from 35 °C (room temperature) to 225 °C, under a constant strain rate of 0.001/s, using powdered graphite mixed with machine oil as lubricant throughout. True stress and true strain values were calculated from the engineering quantities and used to plot true stress-strain curves at the different temperatures, which indicate the mechanical properties of the metal for industrial applications. A common characteristic equation relating true stress, true strain and temperature was found by regression analysis. Generalized characteristic equations for each temperature were also developed by regression analysis, indicating that the strain hardening exponent first increases and then decreases with increasing temperature, while the strength coefficient decreases with increasing temperature.
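The regression step described above can be sketched by fitting the Hollomon relation sigma = K * eps**n in log-log space; the data points below are assumptions, not measurements from the thesis:

# Fit log(sigma) = log(K) + n * log(eps) by linear regression.
import numpy as np

true_strain = np.array([0.05, 0.10, 0.20, 0.30, 0.40])
true_stress = np.array([150.0, 175.0, 205.0, 225.0, 240.0])   # MPa

n, log_K = np.polyfit(np.log(true_strain), np.log(true_stress), 1)
K = np.exp(log_K)
print(f"strength coefficient K = {K:.1f} MPa, hardening exponent n = {n:.3f}")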
APA, Harvard, Vancouver, ISO, and other styles
43

田子坤. "An efficient pseudo exhaustive test pattern generation and test response compression method." Thesis, 1991. http://ndltd.ncl.edu.tw/handle/70783714455495152789.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Lee, Jinkyu. "Low power scan testing and test data compression." Thesis, 2006. http://hdl.handle.net/2152/2568.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Baltaji, Najad Borhan. "Scan test data compression using alternate Huffman coding." Thesis, 2012. http://hdl.handle.net/2152/ETD-UT-2012-05-5615.

Full text
Abstract:
Huffman coding is a good method for statistically compressing test data with high compression rates. Unfortunately, the on-chip decoder needed to decompress the encoded test data after it is loaded onto the chip may be too complex; with limited die area, decoder complexity becomes a drawback, making plain Huffman coding unsuitable for scan data compression. Selectively encoding test data with Huffman coding can provide similarly high compression rates while reducing decoder complexity. A smaller and less complex decoder makes alternate Huffman coding a viable option for compressing and decompressing scan test data.
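For reference, a minimal Huffman coder over fixed-size scan-data blocks, the baseline whose decoder complexity motivates the selective/alternate scheme above; the block size and data are assumptions:

# Build a Huffman code over 4-bit scan blocks with a binary heap.
import heapq
from collections import Counter

def huffman_code(blocks):
    freq = Counter(blocks)
    heap = [[f, i, {b: ""}] for i, (b, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    nxt = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)     # two least frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {b: "0" + c for b, c in c1.items()}
        merged.update({b: "1" + c for b, c in c2.items()})
        heapq.heappush(heap, [f1 + f2, nxt, merged])
        nxt += 1
    return heap[0][2]

data = "0000 0000 1111 0000 0101 0000 1111".split()
code = huffman_code(data)
encoded = "".join(code[b] for b in data)
print(code, f"-> {len(encoded)} bits vs {4 * len(data)} uncoded")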
APA, Harvard, Vancouver, ISO, and other styles
46

陳徵君. "Stress Analysis of Non-uniform Cylindrical Compression Test." Thesis, 1996. http://ndltd.ncl.edu.tw/handle/44080142203732405348.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Mechanical Engineering
84
APA, Harvard, Vancouver, ISO, and other styles
47

Lu, Chia-Che, and 呂佳哲. "Fibonacci Pattern Run-Length for Test Data Compression." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/99478416818766343777.

Full text
Abstract:
Master's thesis
Yuan Ze University
Department of Computer Science and Engineering
104
As the density of integrated circuits increases with advancing VLSI technology, testing integrated circuits becomes more and more complex. Many techniques have been proposed to reduce test data volume, saving memory cost and improving transmission efficiency between the automatic test equipment (ATE) and the SOC. One solution is to compress the test data. This thesis uses a pattern run-length based compression method built on the Fibonacci sequence, in which the pattern length and the number of pattern runs are encoded to denote the compression status. Improvements are demonstrated experimentally on the larger ISCAS'89 benchmarks using MinTest test sets; the results show an increased average compression rate compared with previous attempts.
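One plausible reading of the Fibonacci connection is encoding run lengths with the self-delimiting Fibonacci code (the Zeckendorf representation followed by a terminating '1'); the sketch below shows that code, though the thesis's exact codeword layout may differ:

# Fibonacci (Zeckendorf-based) code of integer n >= 1; every codeword
# ends in '11', which makes the code self-delimiting.
def fib_encode(n):
    fibs = [1, 2]
    while fibs[-1] + fibs[-2] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    while fibs[-1] > n:
        fibs.pop()
    bits = []
    for f in reversed(fibs):          # greedy Zeckendorf, MSB first
        bits.append("1" if f <= n else "0")
        if f <= n:
            n -= f
    return "".join(reversed(bits)) + "1"

for run_length in (1, 2, 3, 4, 12):
    print(run_length, "->", fib_encode(run_length))
# 1 -> 11, 2 -> 011, 3 -> 0011, 4 -> 1011, 12 -> 101011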
APA, Harvard, Vancouver, ISO, and other styles
48

Liaw, Yu-Te. "A Two-level Test Data Compression and Test Time Reduction Technique for SOC." 2005. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0001-2107200507013700.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Liaw, Yu-Te, and 廖育德. "A Two-level Test Data Compression and Test Time Reduction Technique for SOC." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/87668965652681971204.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Electronics Engineering
93
In the SOC era, long test time and large test data volume are two serious problems. In this thesis, a two-level test data compression technique is presented to reduce both the test data and the test time for a System on a Chip (SOC). The first level of compression is achieved by selective Huffman coding for the entire SOC; the second level is achieved by broadcasting test patterns to multiple cores simultaneously. Experiments on the d695 benchmark SOC show that the test data and test time are reduced by 64% and 35%, respectively. This technique requires no changes to the cores and hence provides a good SOC test integration solution for SOC assemblers.
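The second-level idea, broadcasting one pattern to several cores at once, can be sketched as a compatibility check over the cores' test cubes ('x' marks a don't-care bit); the cubes below are assumptions for illustration:

# Merge per-core test cubes into one broadcast pattern, or report a conflict.
def broadcast_merge(cubes):
    merged = []
    for bits in zip(*cubes):
        cares = {b for b in bits if b != "x"}
        if len(cares) > 1:
            return None               # conflicting care bits: no broadcast
        merged.append(cares.pop() if cares else "x")
    return "".join(merged)

core_a = "1x0x"
core_b = "1x01"
core_c = "0x1x"
print(broadcast_merge([core_a, core_b]))   # '1x01': one pattern serves both
print(broadcast_merge([core_a, core_c]))   # None: conflicts at bits 0 and 2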
APA, Harvard, Vancouver, ISO, and other styles
50

Chakravadhanula, Krishna V. Touba Nur A. "New test vector compression techniques based on linear expansion." 2004. http://repositories.lib.utexas.edu/bitstream/handle/2152/1886/chakravadhanulakv042.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles