Academic literature on the topic 'Data compression (Computer science) – Testing'



Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Data compression (Computer science) – Testing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Data compression (Computer science) – Testing"

1. Permuter, Haim H., Young-Han Kim, and Tsachy Weissman. "Interpretations of Directed Information in Portfolio Theory, Data Compression, and Hypothesis Testing." IEEE Transactions on Information Theory 57, no. 6 (June 2011): 3248–59. http://dx.doi.org/10.1109/tit.2011.2136270.

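As background for this entry (the standard definition the article builds on, due to Massey): the directed information from a sequence X^n to a sequence Y^n is

I(X^n \to Y^n) = \sum_{i=1}^{n} I(X^i; Y_i \mid Y^{i-1})

and it replaces mutual information in the portfolio-growth, data-compression and hypothesis-testing settings with causality constraints that the article studies.
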
2. Jung, Jun-Mo, and Jong-Wha Chong. "Efficient Test Data Compression and Low Power Scan Testing in SoCs." ETRI Journal 25, no. 5 (October 14, 2003): 321–27. http://dx.doi.org/10.4218/etrij.03.0303.0017.

3. Ruan, Xiaoyu, and Rajendra S. Katti. "Data-Independent Pattern Run-Length Compression for Testing Embedded Cores in SoCs." IEEE Transactions on Computers 56, no. 4 (April 2007): 545–56. http://dx.doi.org/10.1109/tc.2007.1007.

4. Minkin, A. S., O. V. Nikolaeva, and A. A. Russkov. "Hyperspectral data compression based upon the principal component analysis." Computer Optics 45, no. 2 (April 2021): 235–44. http://dx.doi.org/10.18287/2412-6179-co-806.

Abstract:
The paper aims to develop a hyperspectral data compression algorithm that combines small losses with a high compression rate. The algorithm relies on principal component analysis and a method of exhaustion. The principal components are singular vectors of the initial signal matrix, found by the method of exhaustion. A retrieved signal matrix is formed in parallel, and the process continues until a required retrieval error is attained. The algorithm is described in detail, and its input and output parameters are specified. Testing is performed using AVIRIS (Airborne Visible-Infrared Imaging Spectrometer) data. Three images of different sky conditions (clear sky, partly clouded sky, and overcast sky) are analyzed. For each image, testing is performed for all spectral bands and for a set of bands from which high water-vapour absorption bands are excluded. Retrieval errors versus compression rates are presented. The error measures include the root mean square deviation, the noise-to-signal ratio, the mean structural similarity index, and the mean relative deviation. It is shown that the retrieval errors decrease by more than an order of magnitude if spectral bands with high gas absorption are disregarded. The reason is that weak signals in the absorption bands are measured with large errors, leading to a weak dependence between the spectra in different spatial pixels. The mean cosine distance between the spectra in different spatial pixels is suggested as a measure of image compressibility.
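A short Python sketch of the compression loop this abstract describes, assuming a hypothetical band-by-pixel signal matrix X; the power-iteration deflation stands in for the authors' 'method of exhaustion' and is illustrative rather than their code:

import numpy as np

def compress_pca(X, max_rel_error):
    """Extract singular triplets of the band-by-pixel matrix X one at a
    time until the relative reconstruction error drops below
    max_rel_error. Storing the triplets instead of X is the compression."""
    residual = X.astype(float).copy()
    rng = np.random.default_rng(0)
    components = []
    norm_x = np.linalg.norm(X)
    while np.linalg.norm(residual) / norm_x > max_rel_error:
        v = rng.standard_normal(residual.shape[1])
        for _ in range(50):                      # power iteration
            u = residual @ v
            u /= np.linalg.norm(u)
            v = residual.T @ u
            sigma = np.linalg.norm(v)
            v /= sigma
        components.append((sigma, u.copy(), v.copy()))
        residual -= sigma * np.outer(u, v)       # deflate and repeat
    return components

def reconstruct(components, shape):
    """Rebuild the retrieved signal matrix from the stored triplets."""
    x_hat = np.zeros(shape)
    for sigma, u, v in components:
        x_hat += sigma * np.outer(u, v)
    return x_hat
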
5. Li, L., K. Chakrabarty, S. Kajihara, and S. Swaminathan. "Three-stage compression approach to reduce test data volume and testing time for IP cores in SOCs." IEE Proceedings - Computers and Digital Techniques 152, no. 6 (2005): 704. http://dx.doi.org/10.1049/ip-cdt:20045150.

6. Zhang, He, Fatick Nath, Prathmesh Naik Parrikar, and Mehdi Mokhtari. "Analyzing the Validity of Brazilian Testing Using Digital Image Correlation and Numerical Simulation Techniques." Energies 13, no. 6 (March 19, 2020): 1441. http://dx.doi.org/10.3390/en13061441.

Abstract:
Characterizing the mechanical behavior of rocks plays a crucial role in optimizing the fracturing process in unconventional reservoirs. However, due to the intrinsic anisotropy and heterogeneity of unconventional resources, fracture process prediction remains the most significant challenge for sustainable and economic hydrocarbon production. When tracking deformation under compression, conventional methods (strain gauges, extensometers, etc.) are insufficient: physical attachment of the device is restricted by the sample size, only limited point-wise deformation is monitored, data retrieval is difficult, and tracking tends to be lost at failure points. Where conventional methods are limited, the application of digital image correlation (DIC) provides detailed, additional information on strain evolution and fracture patterns under loading. DIC is an image-based optical method that records an object with a camera and monitors a random-contrast speckle pattern painted on the facing surface of the specimen. To overcome the existing limitations, this paper presents numerical modeling of Brazilian disc tests under quasi-static conditions to understand full-field deformation behaviors, validated against DIC. As the direct tensile test has limitations in sample preparation and test execution, the Brazilian testing principle is commonly used to indirectly evaluate the tensile strength of rocks. A two-dimensional numerical model was built to predict the stress distribution and full-field deformation of a Brazilian disc under compression, based on the assumptions of a homogeneous, isotropic and linear elastic material. A uniaxial compression test was conducted using the DIC technique to determine the elastic properties of Spider Berea sandstone, which were used as inputs for the simulation model. The model was verified against the analytical solution and compared with digital image correlation. The numerical simulation results matched the analytical solutions reasonably well, with a maximum deviation of stress distribution of 14.59%. The strain evolution (normal and shear strains) and displacements along the central horizontal and vertical planes were investigated at three distinct percentages of peak load (20%, 40%, and 90%) to understand the deformation behavior of the rock. The simulation results demonstrated that the strain evolution contours consistently matched DIC-generated contours with reasonable agreement. The changes in displacement along the central horizontal and vertical planes showed that numerical simulation and DIC-generated experimental results were repeatable and matched closely. In terms of validation, Brazilian testing to measure the indirect tensile strength of rocks is still a matter of debate. The numerical model of fracture propagation supported by digital image correlation from this study can be used to explain the fracturing process in homogeneous material and can be extended to non-homogeneous cases by incorporating heterogeneity, which is essential for rock mechanics field applications.
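For context, the indirect tensile strength that the Brazilian test estimates is conventionally obtained from the failure load through the standard elasticity solution (a textbook relation, not a result of this paper):

\sigma_t = \frac{2P}{\pi D t}

where P is the load at failure and D and t are the diameter and thickness of the disc.
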
7. Meng, Qingbin, Yanlong Chen, Mingwei Zhang, Lijun Han, Hai Pu, and Jiangfeng Liu. "On the Kaiser Effect of Rock under Cyclic Loading and Unloading Conditions: Insights from Acoustic Emission Monitoring." Energies 12, no. 17 (August 23, 2019): 3255. http://dx.doi.org/10.3390/en12173255.

Abstract:
The Kaiser effect reflects the memory of loaded rock for irreversible damage and deformation. Stress level, loading rate and lithology are the main factors affecting the Kaiser effect of rock. To identify the accurate stress point of the Kaiser effect, the MTS 816 rock mechanics testing system and the DS5-A acoustic emission testing and analysis system were adopted, and uniaxial cyclic loading–unloading and acoustic emission tests of 90 specimens from three types of rock were carried out under different stress levels and loading rates. The evolution of acoustic emission under uniaxial compression corresponds to the compaction, elastic, yield and post-peak stress drop stages of the rock deformation and failure process, and is divided into the quiet, transition, active and decay periods of acoustic emission. The harder the rock, the earlier the stress point of the Kaiser effect appears. The loading stress level (σA) has an appreciable influence on the Kaiser effect of the rock: when σA ≥ 0.7σc, the Kaiser effect disappears. Usually, the dilatancy stress (crack initiation stress) does not exceed 70% of the uniaxial compressive strength (σc) of the rock, and this stress point can serve as the threshold to determine whether the Kaiser effect occurs. The influence of loading rate (lr) on the Felicity ratio (FR) is relatively large when lr < 0.01 mm/s, where FR grows rapidly as the loading rate increases; when lr ≥ 0.01 mm/s, the influence of the loading rate on FR is relatively small. The findings facilitate future application of the Kaiser effect and improve the accuracy of acoustic emission data interpretation.
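For reference, the Felicity ratio (FR) used above has a standard definition in acoustic emission practice (background, not a formula from this paper):

FR = \frac{\sigma_{AE}}{\sigma_{\max}}

where \sigma_{AE} is the stress at which significant acoustic emission resumes on reloading and \sigma_{\max} is the previous maximum stress; FR \geq 1 corresponds to a well-expressed Kaiser effect, FR < 1 to the Felicity effect.
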
8. Cican, Grigore, Marius Deaconu, and Daniel-Eugeniu Crunteanu. "Impact of Using Chevrons Nozzle on the Acoustics and Performances of a Micro Turbojet Engine." Applied Sciences 11, no. 11 (June 2, 2021): 5158. http://dx.doi.org/10.3390/app11115158.

Abstract:
This paper presents a study of noise reduction in a turbojet engine, in particular the jet noise of a micro turbojet engine. The results of the measurement campaign are presented, followed by a performance analysis based on the data measured on the test bench. Within the tests, besides the baseline nozzle, two other chevron nozzles were tested and evaluated. The first nozzle has eight triangular chevrons, with a chevron length L equal to 10% of the equivalent diameter and an immersion angle of I = 0 deg. For the second nozzle the length and the immersion angle were maintained, and only the number of chevrons was increased to 16. The micro turbojet engine was tested at four different speed regimes. The engine performance was monitored by measuring the fuel flow, the temperature in front of the turbine, the intake air flow, the compression ratio, the propulsion force and the temperature before the compressor. In addition, during the testing, vibrations were measured in the axial and radial directions, which indicated normal functioning of the engine during the chevron nozzle tests. Regarding the noise, it was concluded that at low regimes the chevron nozzles give no noise reduction, while at high regimes an overall noise reduction of 2–3 dB(A) was achieved. Regarding the engine performance, a decrease in the temperature in front of the turbine, the compression ratio and the intake air and fuel flow was observed, along with a drop of a few percent in the propulsion force.
9. Santana, Teresa, João Gonçalves, Fernando Pinho, and Rui Micaelo. "Effects of the Ratio of Porosity to Volumetric Cement Content on the Unconfined Compressive Strength of Cement Bound Fine Grained Soils." Infrastructures 6, no. 7 (June 26, 2021): 96. http://dx.doi.org/10.3390/infrastructures6070096.

Abstract:
This paper presents an experimental investigation into the effects of porosity, dry density and cement content on the unconfined compressive strength and modulus of elasticity of cement-bound soil mixtures. A clayey sand was used with two different proportions of type IV Portland cement, 10% and 14% of the dry mass of the soil. Specimens were moulded with the same water content but using four different compaction efforts, corresponding to four different dry densities. Unconfined compression testing was conducted at seven days of curing time on unsoaked samples. The results showed that the compressive strength increased with the increase in cement content and with the decrease in porosity. From the experimental data, a unique relationship was found between the unconfined compressive strength and the ratio of porosity to volumetric cement content for all the mixtures and compaction efforts tested. The equation developed demonstrates that it is possible to estimate the amount of cement and the dry density to achieve a certain level of unconfined compressive strength. A normalized general equation was also found to fit other authors’ results for similar soils mixed with cement. From this, a cement-bound soil model was proposed for the development of a mixing design procedure for different soils.
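Relationships of the kind reported in this abstract are usually written in the soil-cement literature as a power law; the generic form below is given for orientation only (the paper's fitted constants are not reproduced here):

q_u = A \left( \frac{\eta}{C_{iv}} \right)^{-B}

where q_u is the unconfined compressive strength, \eta the porosity, C_{iv} the volumetric cement content, and A, B constants fitted to the test data.
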
10. Galajda, Pavol, Alena Galajdova, Stanislav Slovak, Martin Pecovsky, Milos Drutarovsky, Marek Sukop, and Ihab BA Samaneh. "Robot vision ultra-wideband wireless sensor in non-cooperative industrial environments." International Journal of Advanced Robotic Systems 15, no. 4 (July 1, 2018): 172988141879576. http://dx.doi.org/10.1177/1729881418795767.

Abstract:
In this article, ultra-wideband technology for localization and tracking of a robot gripper (behind obstacles) in industrial environments is presented. We explore the possibilities of an ultra-wideband radar sensor network employing a centralized data fusion method that can significantly improve tracking capabilities in a complex environment. We present an ultra-wideband radar sensor network hardware demonstrator that uses a new wireless ultra-wideband sensor with an embedded controller to detect and track, online or off-line, the movement of the robot gripper. This sensor uses an M-sequence ultra-wideband radar front-end and low-cost, powerful processors on a system-on-chip with the Advanced RISC Machines (ARM) architecture as the main signal processing block. The ARM-based single-board computer ODROID-XU4 platform used in our ultra-wideband sensor provides processing power for the preprocessing of received raw radar signals, algorithms for detection and estimation of the target's coordinates, and finally, compression of the data sent to the data fusion center. Streams of compressed target coordinates are sent from each sensor node to the data fusion center in the central node using the standard wireless local area network (WLAN) interface of the ODROID-XU4 platform. The article contains experimental results from measurements where sensors and antennas are located behind a wall or opaque material. Experimental testing confirmed the real-time capability of the developed ultra-wideband radar sensor network hardware and the acceptable precision of the software. The introduced modular architecture of the ultra-wideband radar sensor network can be used for fast development and testing of new real-time localization and tracking applications in industrial environments.

Dissertations / Theses on the topic "Data compression (Computer science) – Testing"

1. Persson, Jon. "Deterministisk Komprimering/Dekomprimering av Testvektorer med Hjälp av en Inbyggd Processor och Faxkodning" [Deterministic compression/decompression of test vectors using an embedded processor and facsimile coding]. Thesis, Linköping University, Department of Computer and Information Science, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2855.

Abstract:

Modern semiconductor design methods make it possible to design increasingly complex systems-on-a-chip (SOCs). Testing such SOCs becomes highly expensive due to rapidly increasing test data volumes, with longer test times as a result. Several approaches exist in which the test stimuli are compressed and hardware is added for decompression. This master's thesis presents a test data compression method based on a modified facsimile code. An embedded processor on the SOC is used to decompress the data and apply it to the cores of the SOC. The use of already existing hardware reduces the need for additional hardware.

Test data may be rearranged in certain ways that affect the compression ratio. Several modifications are discussed and tested. To be realistic, a decompression algorithm has to be able to run on a system with limited resources. An assembler implementation shows that the proposed method can be effectively realized in such environments. Experimental results where the proposed method is applied to benchmark circuits show that the method compares well with similar methods.

A method of including the response vector is also presented. This approach makes it possible to abort a test as soon as an error is discovered, while still compressing the data used. To correctly compare the test response with the expected one, the data needs to include don't-care bits. The technique uses a mask vector to mark the don't-care bits. The test vector, response vector and mask vector are merged in four different ways to find the best arrangement.

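The masked response comparison described in the last paragraph can be sketched in a few lines of Python; the names and the bit convention (mask bit = 1 means don't-care) are illustrative assumptions, not Persson's notation:

def response_matches(actual: int, expected: int, mask: int, width: int) -> bool:
    """Compare a scanned-out response against the expected vector,
    ignoring bit positions flagged as don't-care in `mask`.
    All vectors are width-bit integers."""
    care = ~mask & ((1 << width) - 1)      # keep only the cared-about bits
    return (actual ^ expected) & care == 0

# Abort the test as soon as one response fails (hypothetical driver loop):
# for actual, expected, mask in decompress_stream(...):
#     if not response_matches(actual, expected, mask, WIDTH):
#         abort_test()
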
2. Steinruecken, Christian. "Lossless data compression." Thesis, University of Cambridge, 2015. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.709134.

3. Barr, Kenneth C. "Energy aware lossless data compression." Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/87316.

4. Deng, Mo. "On compression of encrypted data." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/106100.

Abstract:
In this thesis, I took advantage of a model-free compression architecture, in which the encoder only makes coding decisions and leaves it to the decoder to apply knowledge of the source, to attack the problem of compressing encrypted data. Results for compressing different sources encrypted by different classes of ciphers are shown and analyzed. Moreover, we generalize the problem from encryption schemes to operations, or data-processing techniques, and try to discover the key properties an operation should have in order to enable good post-operation compression performance.
5. Lee, Joshua Ka-Wing. "A model-adaptive universal data compression architecture with applications to image compression." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/111868.

Abstract:
In this thesis, I designed and implemented a model-adaptive data compression system for image data. The system is a realization and extension of the Model-Quantizer-Code-Separation architecture for universal data compression, which uses low-density parity-check codes for encoding and probabilistic graphical models with message-passing algorithms for decoding. We implement a lossless bi-level image compressor as well as a lossy greyscale image compressor and explain how these compressors can rapidly adapt to changes in source models. Using these implementations, we then show that Restricted Boltzmann Machines are an effective source model for compressing image data, by comparing compression performance against other methods on various image datasets.
6. Toufie, Moegamat Zahir. "Real-time loss-less data compression." Thesis, Cape Technikon, 2000. http://hdl.handle.net/20.500.11838/1367.

Abstract:
Data stored on disks generally contains significant redundancy. A mechanism or algorithm that recodes the data to lessen its size could possibly double or triple the effective data that can be stored on the media. One mechanism for doing this is data compression. Many compression algorithms currently exist, but each one has its own advantages as well as disadvantages. The objective of this study is to formulate a new compression algorithm that could be implemented in real-time mode in any file system. The new compression algorithm should also execute as fast as possible, so as not to cause a lag in the file system's performance. This study focuses on binary data of any type, whereas previous articles such as (Huffman, 1952:1098), (Ziv & Lempel, 1977:337; 1978:530), (Storer & Szymanski, 1982:928) and (Welch, 1984:8) have placed particular emphasis on text compression in their discussions of compression algorithms for computer data. The compression algorithm formulated by this study is Lempel-Ziv-Toufie (LZT). LZT is basically an LZ77 (Ziv & Lempel, 1977:337) encoder with a buffer equal in size to the data block of the file system in question. Unlike LZ77, however, LZT discards the sliding-buffer principle and uses each data block of the entire input stream as one big buffer on which compression is performed. LZT also handles the encoding of a match slightly differently from LZ77. An LZT match is encoded by two bit streams, the first specifying the position of the match and the other specifying the length of the match; this combination is commonly referred to as a (position, length) pair. To encode the position portion of the pair, a sliding-scale method is used, which works as follows. Let the position in the input buffer of the current character to be compressed be held by inpos, where inpos is initially set to 3. It is then only possible for a match to occur at position 1 or 2. Hence the position of a match will never be greater than 2, and the position portion can therefore be encoded using only 1 bit. As inpos is incremented with each character encoded, the match position range increases and more bits are required to encode the match position. The reason why a decimal 2 can be encoded using only 1 bit can be explained as follows. When decimal values are converted to binary values, we get 0₁₀ = 0₂, 1₁₀ = 1₂, 2₁₀ = 10₂, etc. As a position of 0 will never be used, it is possible to devise a coding scheme where a decimal value of 1 is represented by the binary value 0, and a decimal value of 2 is represented by the binary value 1. Only 1 bit is therefore needed to encode match position 1 and match position 2. In general, any decimal value n can be represented by the binary equivalent of (n - 1), and the number of bits needed to encode (n - 1) indicates the number of bits needed to encode the match position. The length portion of the pair is encoded using a variable-length coding (vlc) approach. The vlc method performs its encoding using binary blocks; the first binary block is 3 bits long, where binary values 000 through 110 represent decimal values 1 through 7.
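A short Python sketch of the sliding-scale position coding described above, following the (n - 1) convention from the abstract; the function names are illustrative:

def bits_for_position(inpos: int) -> int:
    """Number of bits needed to encode a match position when the current
    character sits at 1-based buffer position `inpos`. Positions range
    over 1..inpos-1 and are encoded as the binary of (pos - 1)."""
    max_code = inpos - 2              # largest encoded value: (inpos-1) - 1
    return max(1, max_code.bit_length())

def encode_position(pos: int, inpos: int) -> str:
    """Encode match position `pos` (1 <= pos < inpos) as a bit string."""
    assert 1 <= pos < inpos
    return format(pos - 1, "0{}b".format(bits_for_position(inpos)))

# With inpos = 3, positions 1 and 2 need a single bit:
assert encode_position(1, 3) == "0"
assert encode_position(2, 3) == "1"
# As inpos grows, the code widens: position 5 when inpos = 9 -> "100"
assert encode_position(5, 9) == "100"
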
7. Aggarwal, Viveka. "Lossless Data Compression for Security Purposes Using Huffman Encoding." University of Cincinnati / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1456848208.

8. Cabrera-Mercader, Carlos R. "Robust compression of multispectral remote sensing data." Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/9338.

Abstract:
This thesis develops efficient and robust non-reversible coding algorithms for multispectral remote sensing data. Although many efficient non-reversible coding algorithms have been proposed for such data, their application is often limited due to the risk of excessively degrading the data if, for example, changes in sensor characteristics and atmospheric/surface statistics occur. On the other hand, reversible coding algorithms are inherently robust to variable conditions but they provide only limited compression when applied to data from most modern remote sensors. The algorithms developed in this work achieve high data compression by preserving only data variations containing information about the ideal, noiseless spectrum, and by exploiting inter-channel correlations in the data. The algorithms operate on calibrated data modeled as the sum of the ideal spectrum, and an independent noise component due to sensor noise, calibration error, and, possibly, impulsive noise. Coding algorithms are developed for data with and without impulsive noise. In both cases an estimate of the ideal spectrum is computed first, and then that estimate is coded efficiently. This estimator coder structure is implemented mainly using data-dependent matrix operators and scalar quantization. Both coding algorithms are robust to slow instrument drift, addressed by appropriate calibration, and outlier channels. The outliers are preserved by separately coding the noise estimates in addition to the signal estimates so that they may be reconstructed at the original resolution. In addition, for data free of impulsive noise the coding algorithm adapts to changes in the second-order statistics of the data by estimating those statistics from each block of data to be coded. The coding algorithms were tested on data simulated for the NASA 2378-channel Atmospheric Infrared Sounder (AIRS). Near-lossless compression ratios of up to 32:1 (0.4 bits/pixel/channel) were obtained in the absence of impulsive noise, without preserving outliers, and assuming the nominal noise covariance. An average noise variance reduction of 12-14 dB was obtained simultaneously for data blocks of 2400-7200 spectra. Preserving outlier channels for which the noise estimates exceed three times the estimated noise rms value would require no more than 0.08 bits/pixel/channel provided the outliers arise from the assumed noise distribution. If contaminant outliers occurred, higher bit rates would be required. Similar performance was obtained for spectra corrupted by few impulses.
9. Lehman, Eric. "Approximation algorithms for grammar-based data compression." Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/87172.

Abstract:
This thesis considers the smallest grammar problem: find the smallest context-free grammar that generates exactly one given string. We show that this problem is intractable, and so our objective is to find approximation algorithms. This simple question is connected to many areas of research. Most importantly, there is a link to data compression; instead of storing a long string, one can store a small grammar that generates it. A small grammar for a string also naturally brings out underlying patterns, a fact that is useful, for example, in DNA analysis. Moreover, the size of the smallest context-free grammar generating a string can be regarded as a computable relaxation of Kolmogorov complexity. Finally, work on the smallest grammar problem qualitatively extends the study of approximation algorithms to hierarchically-structured objects. In this thesis, we establish hardness results, evaluate several previously proposed algorithms, and then present new procedures with much stronger approximation guarantees.
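As a toy illustration of the idea (not an example from the thesis): the string abcabcabc is generated exactly by a two-rule grammar, and storing those rules in place of the string is the compression step.

# Toy grammar for "abcabcabc"; symbols absent from the dict are terminals.
grammar = {"S": ["A", "A", "A"], "A": ["a", "b", "c"]}

def expand(symbol: str) -> str:
    """Recursively expand a symbol into the string it derives."""
    if symbol not in grammar:                 # terminal character
        return symbol
    return "".join(expand(s) for s in grammar[symbol])

assert expand("S") == "abcabcabc"
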
10. Koutsogiannis, Vassilis. "A study of color image data compression." Online version of thesis, 1992. http://hdl.handle.net/1850/11060.


Books on the topic "Data compression (Computer science) – Testing"

1. Motta, Giovanni, ed. Handbook of data compression. 5th ed. London: Springer, 2010.

2. Huang, Bormin. Satellite data compression. New York, NY: Springer Science+Business Media, LLC, 2011.

3. Drozdek, Adam. Elements of data compression. Pacific Grove, CA: Brooks/Cole Thomson Learning, 2002.

4. Gailly, Jean-Loup, ed. The data compression book. 2nd ed. New York: M&T Books, 1996.

5. Salomon, David. Data Compression: The Complete Reference. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000.

6. Nelson, Mark. The data compression book: Featuring fast, efficient data compression techniques in C. Redwood City, CA: M&T Books, 1991.

7. Williams, Ross N. Adaptive Data Compression. Boston: Kluwer Academic Publishers, 1991.

8. Bell, Timothy C. Text compression. Englewood Cliffs, N.J.: Prentice Hall, 1990.

9. Lynch, Thomas J. Data compression: Techniques and applications. New York: Van Nostrand Reinhold, 1985.

10. Data compression techniques and applications. Belmont, Calif.: Lifetime Learning Publications, 1985.


Book chapters on the topic "Data compression (Computer science) – Testing"

1. Weik, Martin H. "data compression." In Computer Science and Communications Dictionary, 344. Boston, MA: Springer US, 2000. http://dx.doi.org/10.1007/1-4020-0613-6_4231.

2. Weik, Martin H. "facsimile data compression." In Computer Science and Communications Dictionary, 565. Boston, MA: Springer US, 2000. http://dx.doi.org/10.1007/1-4020-0613-6_6732.

3. Crochemore, Maxime. "Data compression with substitution." In Lecture Notes in Computer Science, 1–16. Berlin, Heidelberg: Springer Berlin Heidelberg, 1989. http://dx.doi.org/10.1007/3-540-51465-1_1.

4. Adriaans, Pieter. "Learning as Data Compression." In Lecture Notes in Computer Science, 11–24. Berlin, Heidelberg: Springer Berlin Heidelberg, 2007. http://dx.doi.org/10.1007/978-3-540-73001-9_2.

5. Yan, Wei Qi. "Surveillance Data Capturing and Compression." In Texts in Computer Science, 23–44. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-10713-0_2.

6. Prílepok, Michal, Jan Platos, and Vaclav Snasel. "Similarity Based on Data Compression." In Lecture Notes in Computer Science, 267–78. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-45111-9_24.

7. Revankar, P. S., Vijay B. Patil, and W. Z. Gandhare. "Data Compression on Embedded System." In Communications in Computer and Information Science, 535–37. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-12214-9_95.

8. Zhang, Youtao, and Rajiv Gupta. "Data Compression Transformations for Dynamically Allocated Data Structures." In Lecture Notes in Computer Science, 14–28. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-45937-5_4.

9. Hübbe, Nathanael, Al Wegener, Julian Martin Kunkel, Yi Ling, and Thomas Ludwig. "Evaluating Lossy Compression on Climate Data." In Lecture Notes in Computer Science, 343–56. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-38750-0_26.

10. Reznik, Yuriy A., and Anatoly V. Anisimov. "Using Tries for Universal Data Compression." In Mathematics and Computer Science III, 199–200. Basel: Birkhäuser Basel, 2004. http://dx.doi.org/10.1007/978-3-0348-7915-6_20.


Conference papers on the topic "Data compression (Computer science) – Testing"

1. Zhang, Ling, and Ji-shun Kuang. "Test-data compression using hybrid prefix encoding for testing embedded cores." In 2010 3rd IEEE International Conference on Computer Science and Information Technology (ICCSIT 2010). IEEE, 2010. http://dx.doi.org/10.1109/iccsit.2010.5564956.

2. Rabuzin, K. "Deductive data warehouses: testing performances." In International Conference on Computer Science and Systems Engineering. Southampton, UK: WIT Press, 2015. http://dx.doi.org/10.2495/csse140241.

3. Li, Jia, Xiao Liu, Yubin Zhang, Yu Hu, Xiaowei Li, and Qiang Xu. "On capture power-aware test data compression for scan-based testing." In 2008 IEEE/ACM International Conference on Computer-Aided Design (ICCAD). IEEE, 2008. http://dx.doi.org/10.1109/iccad.2008.4681553.

4. Jiang, Derong, and Jianfeng Hu. "Data Flow-Based Software Testing." In 2008 International Conference on Computer Science and Software Engineering. IEEE, 2008. http://dx.doi.org/10.1109/csse.2008.161.

5. Kattan, Ahmed. "Universal intelligent data compression systems: A review." In 2010 2nd Computer Science and Electronic Engineering Conference (CEEC). IEEE, 2010. http://dx.doi.org/10.1109/ceec.2010.5606482.

6. Blachnik, Marcin, Mirosław Kordos, and Sławomir Golak. "Data Compression Measures for Meta-Learning Systems." In 2018 Federated Conference on Computer Science and Information Systems. IEEE, 2018. http://dx.doi.org/10.15439/2018f87.

7. Punn, Narinder Singh, Sonali Agarwal, M. Syafrullah, and Krisna Adiyarta. "Testing Big Data Application." In 2019 6th International Conference on Electrical Engineering, Computer Science and Informatics (EECSI). IEEE, 2019. http://dx.doi.org/10.23919/eecsi48112.2019.8976972.

8. Li, Yuzhen, Takashi Imaizumi, and Jihong Guan. "Spatial Data Compression Techniques for GML." In 2008 Japan-China Joint Workshop on Frontier of Computer Science and Technology (FCST). IEEE, 2008. http://dx.doi.org/10.1109/fcst.2008.8.

9. Babu, K. Ashok, and V. Satish Kumar. "Implementation of data compression using Huffman coding." In 2010 International Conference on Methods and Models in Computer Science (ICM2CS 2010). IEEE, 2010. http://dx.doi.org/10.1109/icm2cs.2010.5706721.

10. Guo, Fenghua. "Haptic data compression based on quadratic curve prediction." In 2012 IEEE International Conference on Computer Science and Automation Engineering (CSAE). IEEE, 2012. http://dx.doi.org/10.1109/csae.2012.6272682.


Reports on the topic "Data compression (Computer science) – Testing"

1. Henrick, Erin, Steven McGee, Lucia Dettori, Troy Williams, Andrew Rasmussen, Don Yanek, Ronald Greenberg, and Dale Reed. Research-Practice Partnership Strategies to Conduct and Use Research to Inform Practice. The Learning Partnership, April 2021. http://dx.doi.org/10.51420/conf.2021.3.

Abstract:
This study examines the collaborative processes the Chicago Alliance for Equity in Computer Science (CAFÉCS) uses to conduct and use research. The CAFÉCS RPP is a partnership between Chicago Public Schools (CPS), Loyola University Chicago, The Learning Partnership, DePaul University, and the University of Illinois at Chicago. The data used in this analysis come from three years of evaluation and include team documents, meeting observations, and interviews with 25 members of the CAFÉCS RPP team. The analysis examines how three problems are being investigated by the partnership: 1) the student failure rate in an introductory computer science course, 2) teachers' limited use of discussion techniques in an introductory computer science class, and 3) computer science teacher retention. Results indicate that the RPP engages in a formalized problem-solving cycle with the following steps: first, the Office of Computer Science (OCS) identifies a problem; next, the CAFÉCS team brainstorms and prioritizes hypotheses to test; then, data analysis clarifies the problem, and the research findings are shared and interpreted by the entire team; finally, the findings are used to inform OCS improvement strategies and next steps for the CAFÉCS research agenda. There are slight variations in the problem-solving cycle depending on the stage of understanding of the problem, which has implications for the mode of research (e.g., hypothesis testing, research and design, continuous improvement, or evaluation).
