
Dissertations / Theses on the topic 'Data compression (Computer science) – Testing'


Consult the top 50 dissertations / theses for your research on the topic 'Data compression (Computer science) – Testing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Persson, Jon. "Deterministisk Komprimering/Dekomprimering av Testvektorer med Hjälp av en Inbyggd Processor och Faxkodning." Thesis, Linköping University, Department of Computer and Information Science, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2855.

Full text
Abstract:

Modern semiconductor design methods make it possible to design increasingly complex systems-on-chip (SOCs). Testing such SOCs becomes highly expensive due to rapidly increasing test data volumes and, as a result, longer test times. Several approaches exist that compress the test stimuli and add hardware for decompression. This master's thesis presents a test data compression method based on a modified facsimile code. An embedded processor on the SOC is used to decompress the data and apply it to the cores of the SOC. The use of already existing hardware reduces the need for additional hardware.

Test data may be rearranged in ways that affect the compression ratio; several such modifications are discussed and tested. To be realistic, a decompression algorithm has to be able to run on a system with limited resources. An assembler implementation shows that the proposed method can be effectively realized in such environments. Experimental results, in which the proposed method is applied to benchmark circuits, show that the method compares well with similar methods.

A method of including the response vector is also presented. This approach makes it possible to abort a test as soon as an error is discovered, while still compressing the data used. To correctly compare the test response with the expected one, the data needs to include don't-care bits, which the technique marks with a mask vector. The test vector, response vector and mask vector are merged in four different ways to find the best arrangement.
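As a rough illustration (names and data are hypothetical, not from the thesis), masking don't-care bits during response comparison might look like this:

```python
def response_matches(expected, captured, mask):
    """Compare a captured test response against the expected one.

    expected, captured, mask are equal-length bit strings ('0'/'1').
    A '1' in the mask marks a don't-care position that is skipped,
    so only specified bits can cause a mismatch (and an early abort).
    """
    for exp, cap, dc in zip(expected, captured, mask):
        if dc == '0' and exp != cap:
            return False  # first real mismatch: the test can be aborted here
    return True

# Positions 1 and 3 are don't-cares, so the comparison still passes.
print(response_matches("1X0X", "1100", "0101"))  # -> True
```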

APA, Harvard, Vancouver, ISO, and other styles
2

Steinruecken, Christian. "Lossless data compression." Thesis, University of Cambridge, 2015. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.709134.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Barr, Kenneth C. (Kenneth Charles) 1978. "Energy aware lossless data compression." Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/87316.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Deng, Mo Ph D. Massachusetts Institute of Technology. "On compression of encrypted data." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/106100.

Full text
Abstract:
Thesis: S.M. in Electrical Engineering, Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 93-96).
In this thesis, I took advantage of a model-free compression architecture, where the encoder only makes decisions about coding and leaves it to the decoder to apply knowledge of the source for decoding, to attack the problem of compressing encrypted data. Results for compressing different sources encrypted by different classes of ciphers are shown and analyzed. Moreover, we generalize the problem from encryption schemes to operations, or data-processing techniques, and try to discover the key properties an operation should have in order to enable good post-operation compression performance.
by Mo Deng.
S.M. in Electrical Engineering
APA, Harvard, Vancouver, ISO, and other styles
5

Lee, Joshua Ka-Wing. "A model-adaptive universal data compression architecture with applications to image compression." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/111868.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 59-61).
In this thesis, I designed and implemented a model-adaptive data compression system for the compression of image data. The system is a realization and extension of the Model-Quantizer-Code-Separation Architecture for universal data compression, which uses Low-Density Parity-Check (LDPC) codes for encoding and probabilistic graphical models with message-passing algorithms for decoding. We implement a lossless bi-level image data compressor as well as a lossy greyscale image compressor and explain how these compressors can rapidly adapt to changes in source models. Using these implementations, we then show that Restricted Boltzmann Machines are an effective source model for compressing image data, by comparing compression performance against other compression methods on various image datasets.
by Joshua Ka-Wing Lee.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
6

Toufie, Moegamat Zahir. "Real-time loss-less data compression." Thesis, Cape Technikon, 2000. http://hdl.handle.net/20.500.11838/1367.

Full text
Abstract:
Thesis (MTech (Information Technology))--Cape Technikon, Cape Town, 2000
Data stored on disks generally contains significant redundancy. A mechanism or algorithm that recodes the data to lessen its size could possibly double or triple the effective data that could be stored on the media. One mechanism of doing this is data compression. Many compression algorithms currently exist, but each one has its own advantages as well as disadvantages. The objective of this study is to formulate a new compression algorithm that could be implemented in real-time mode in any file system. The new compression algorithm should also execute as fast as possible, so as not to cause a lag in the file system's performance. This study focuses on binary data of any type, whereas previous articles such as (Huffman, 1952:1098), (Ziv & Lempel, 1977:337; 1978:530), (Storer & Szymanski, 1982:928) and (Welch, 1984:8) have placed particular emphasis on text compression in their discussions of compression algorithms for computer data. The resulting compression algorithm formulated by this study is Lempel-Ziv-Toufie (LZT). LZT is basically an LZ77 (Ziv & Lempel, 1977:337) encoder with a buffer equal in size to the data block of the file system in question. Unlike LZ77, LZT discards the sliding-buffer principle and uses each data block of the input stream as one big buffer on which compression can be performed. LZT also handles the encoding of a match slightly differently from LZ77. An LZT match is encoded by two bit streams, the first specifying the position of the match and the other specifying the length of the match; this combination is commonly referred to as a pair. To encode the position portion of the pair, a sliding-scale method is used, which works as follows. Let the position in the input buffer of the current character to be compressed be held by inpos, where inpos is initially set to 3. It is then only possible for a match to occur at position 1 or 2. Hence the position of a match will never be greater than 2, and the position portion can be encoded using only 1 bit. As inpos is incremented while each character is encoded, the match position range increases and more bits are required to encode the match position. The reason why a decimal 2 can be encoded using only 1 bit is as follows. When decimal values are converted to binary we get 0 (decimal) = 0 (binary), 1 (decimal) = 1 (binary), 2 (decimal) = 10 (binary), etc. Since a position of 0 will never be used, a coding scheme can be devised in which a decimal value of 1 is represented by the binary value 0, and a decimal value of 2 by the binary value 1. Only 1 bit is therefore needed to encode match position 1 and match position 2. In general, any decimal value n can be represented by the binary equivalent of (n - 1), and the number of bits needed to encode (n - 1) is the number of bits needed to encode the match position. The length portion of the pair is encoded using a variable-length coding (VLC) approach, which encodes using binary blocks. The first binary block is 3 bits long, where binary values 000 through 110 represent decimal values 1 through 7.
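A minimal sketch of the sliding-scale position coding described above, assuming positions are counted from 1 and the stored value is (position − 1); this is an illustration, not the thesis's implementation:

```python
def bits_for_position(inpos):
    """Number of bits needed to encode a match position when the current
    input position is `inpos` (so positions 1 .. inpos-1 are possible)."""
    max_pos = inpos - 1                         # largest possible match position
    return max(1, (max_pos - 1).bit_length())   # the stored value is (pos - 1)

def encode_position(pos, inpos):
    """Encode match position `pos` as the binary value (pos - 1), using just
    enough bits for the current range (the 'sliding scale' of the abstract)."""
    width = bits_for_position(inpos)
    return format(pos - 1, f"0{width}b")

# With inpos = 3 only positions 1 and 2 exist, so one bit suffices:
print(encode_position(1, 3), encode_position(2, 3))   # '0' '1'
# Further into the buffer the range grows, e.g. at inpos = 10:
print(encode_position(7, 10))                          # '0110' (7 - 1 = 6 in four bits)
```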
APA, Harvard, Vancouver, ISO, and other styles
7

Aggarwal, Viveka. "Lossless Data Compression for Security Purposes Using Huffman Encoding." University of Cincinnati / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1456848208.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Cabrera-Mercader, Carlos R. (Carlos Rubén). "Robust compression of multispectral remote sensing data." Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/9338.

Full text
Abstract:
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1999.
Includes bibliographical references (p. 241-246).
This thesis develops efficient and robust non-reversible coding algorithms for multispectral remote sensing data. Although many efficient non-reversible coding algorithms have been proposed for such data, their application is often limited due to the risk of excessively degrading the data if, for example, changes in sensor characteristics and atmospheric/surface statistics occur. On the other hand, reversible coding algorithms are inherently robust to variable conditions but they provide only limited compression when applied to data from most modern remote sensors. The algorithms developed in this work achieve high data compression by preserving only data variations containing information about the ideal, noiseless spectrum, and by exploiting inter-channel correlations in the data. The algorithms operate on calibrated data modeled as the sum of the ideal spectrum, and an independent noise component due to sensor noise, calibration error, and, possibly, impulsive noise. Coding algorithms are developed for data with and without impulsive noise. In both cases an estimate of the ideal spectrum is computed first, and then that estimate is coded efficiently. This estimator coder structure is implemented mainly using data-dependent matrix operators and scalar quantization. Both coding algorithms are robust to slow instrument drift, addressed by appropriate calibration, and outlier channels. The outliers are preserved by separately coding the noise estimates in addition to the signal estimates so that they may be reconstructed at the original resolution. In addition, for data free of impulsive noise the coding algorithm adapts to changes in the second-order statistics of the data by estimating those statistics from each block of data to be coded. The coding algorithms were tested on data simulated for the NASA 2378-channel Atmospheric Infrared Sounder (AIRS). Near-lossless compression ratios of up to 32:1 (0.4 bits/pixel/channel) were obtained in the absence of impulsive noise, without preserving outliers, and assuming the nominal noise covariance. An average noise variance reduction of 12-14 dB was obtained simultaneously for data blocks of 2400-7200 spectra. Preserving outlier channels for which the noise estimates exceed three times the estimated noise rms value would require no more than 0.08 bits/pixel/channel provided the outliers arise from the assumed noise distribution. If contaminant outliers occurred, higher bit rates would be required. Similar performance was obtained for spectra corrupted by few impulses.
by Carlos R. Cabrera-Mercader.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
9

Lehman, Eric (Eric Allen) 1970. "Approximation algorithms for grammar-based data compression." Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/87172.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2002.
Includes bibliographical references (p. 109-113).
This thesis considers the smallest grammar problem: find the smallest context-free grammar that generates exactly one given string. We show that this problem is intractable, and so our objective is to find approximation algorithms. This simple question is connected to many areas of research. Most importantly, there is a link to data compression; instead of storing a long string, one can store a small grammar that generates it. A small grammar for a string also naturally brings out underlying patterns, a fact that is useful, for example, in DNA analysis. Moreover, the size of the smallest context-free grammar generating a string can be regarded as a computable relaxation of Kolmogorov complexity. Finally, work on the smallest grammar problem qualitatively extends the study of approximation algorithms to hierarchically-structured objects. In this thesis, we establish hardness results, evaluate several previously proposed algorithms, and then present new procedures with much stronger approximation guarantees.
by Eric Lehman.
Ph.D.
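As a toy illustration of the idea (not an example from the thesis), a straight-line grammar can stand in for the string it generates:

```python
def expand(grammar, symbol):
    """Expand a straight-line (non-recursive, context-free) grammar from `symbol`.
    Each nonterminal has exactly one production, so the grammar generates
    exactly one string, as in the smallest grammar problem."""
    return "".join(expand(grammar, s) if s in grammar else s
                   for s in grammar[symbol])

# A grammar whose productions hold 7 symbols in total, generating 12 characters:
g = {"S": ["A", "A"], "A": ["B", "B"], "B": ["a", "b", "c"]}
print(expand(g, "S"))   # 'abcabcabcabc' -- storing g is smaller than storing the string
```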
APA, Harvard, Vancouver, ISO, and other styles
10

Koutsogiannis, Vassilis. "A study of color image data compression /." Online version of thesis, 1992. http://hdl.handle.net/1850/11060.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Bunton, Suzanne. "On-line stochastic processes in data compression /." Thesis, Connect to this title online; UW restricted, 1996. http://hdl.handle.net/1773/6931.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Otten, Frederick John. "Using semantic knowledge to improve compression on log files." Thesis, Rhodes University, 2008. http://eprints.ru.ac.za/1660/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Zhao, Ying. "Turbo codes for data compression and joint source-channel coding." Access to citation, abstract and download form provided by ProQuest Information and Learning Company; downloadable PDF file, 112 p, 2007. http://proquest.umi.com/pqdlink?did=1251904871&Fmt=7&clientId=79356&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Sestok, Charles K. (Charles Kasimer). "Data selection in binary hypothesis testing." Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/16613.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 2004.
Includes bibliographical references (p. 119-123).
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Traditionally, statistical signal processing algorithms are developed from probabilistic models for data. The design of the algorithms and their ultimate performance depend upon these assumed models. In certain situations, collecting or processing all available measurements may be inefficient or prohibitively costly. A potential technique to cope with such situations is data selection, where a subset of the measurements that can be collected and processed in a cost-effective manner is used as input to the signal processing algorithm. Careful evaluation of the selection procedure is important, since the probabilistic description of distinct data subsets can vary significantly. An algorithm designed for the probabilistic description of a poorly chosen data subset can lose much of the potential performance available to a well-chosen subset. This thesis considers algorithms for data selection combined with binary hypothesis testing. We develop models for data selection in several cases, considering both random and deterministic approaches. Our considerations are divided into two classes depending upon the amount of information available about the competing hypotheses. In the first class, the target signal is precisely known, and data selection is done deterministically. In the second class, the target signal belongs to a large class of random signals, selection is performed randomly, and semi-parametric detectors are developed.
by Charles K. Sestok, IV.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
15

Weston, Bron O. Duren Russell Walker Thompson Michael Wayne. "Data compression application to the MIL-STD 1553 avionics data bus." Waco, Tex. : Baylor University, 2005. http://hdl.handle.net/2104/2882.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Todd, Martin Peter. "Image data compression based on a multiresolution signal model." Thesis, University of Warwick, 1989. http://wrap.warwick.ac.uk/100937/.

Full text
Abstract:
Image data compression is an important topic within the general field of image processing. It has practical applications varying from medical imagery to video telephones, and has significant implications for image modelling theory. In this thesis a new class of linear signal models, linear interpolative multiresolution models, is presented and applied to the data compression of a range of natural images. The key property of these models is that whilst they are non-causal in the two spatial dimensions, they are causal in a third dimension, the scale dimension. This leads to computationally efficient predictors which form the basis of the data compression algorithms. Models of varying complexity are presented, ranging from a simple stationary form to one which models visually important features such as lines and edges in terms of scale and orientation. In addition to theoretical results such as related rate distortion functions, the results of applying the compression algorithms to a variety of images are presented. These results compare favourably, particularly at high compression ratios, with many of the techniques described in the literature, both in terms of mean squared quantisation noise and, more meaningfully, in terms of perceived visual quality. In particular, the use of local orientation over various scales within the consistent spatial interpolative framework of the model significantly reduces perceptually important distortions such as the blocking artefacts often seen with high-compression coders. A new algorithm for fast computation of the orientation information required by the adaptive coder is presented, which results in an overall computational complexity for the coder broadly comparable to that of the simpler non-adaptive coder. The thesis concludes with a discussion of some of the important issues raised by the work.
APA, Harvard, Vancouver, ISO, and other styles
17

Griffin, Joseph C. "Exploring data compression for a distributed aerial relay application." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/113152.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (page 85).
Beamforming systems typically involve arrays of antenna elements with controllable spacing and little or no motion. However, a distributed beamforming system could leave array geometry and motion largely unconstrained. This work considers an airborne relay communication concept with multiple balloons in which the individual array elements act as relays to a receiver on the ground at a base station. The beamforming operation is performed at the receiver. The link between the relays and receiver suffers from a high bandwidth requirement. This thesis explores ways to reduce this bandwidth requirement by compressing the signals across the relays. A distributed compression algorithm is proposed and applied to both simulated and collected data. We conclude that a compression algorithm across the relays offers a substantial decrease in bit rate requirement, and that a preprocessing step can make the compression performance robust against differential delay and Doppler shifts across the array.
by Joseph C. Griffin.
M. Eng.
APA, Harvard, Vancouver, ISO, and other styles
18

Mueller, Jessie L. "Data compression, storage, and viewing in classroom learning partner." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/77029.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 59-60).
In this thesis, we present the design and implementation of a data storage and viewing system for students' classroom work. Our system, which extends the classroom interaction system called Classroom Learning Partner, collects answers sent by students for in-class exercises and allows the teacher to browse through these answers, annotate them, and display them to the class on a public projector. To improve data transmission, our system first intelligently compresses student work. These submissions can be manipulated by a teacher in real time and are also saved to a database for future viewing and study. This dual functionality allows for the analysis of student work from multiple lessons at the same time, as well as backup of student work in case of system failure. Teachers can compare the work from multiple students, as well as create portfolios of student work over time. The data storage and viewing system gives teachers and researchers a view of both students' learning and how students interact with the software system.
by Jessie L. Mueller.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
19

Nasiopoulos, Panagiotis. "Adaptive compression coding." Thesis, University of British Columbia, 1988. http://hdl.handle.net/2429/28508.

Full text
Abstract:
An adaptive image compression coding technique, ACC, is presented. This algorithm is shown to preserve edges and give better-quality decompressed pictures and better compression ratios than those of Absolute Moment Block Truncation Coding (AMBTC). Lookup tables are used to achieve better compression rates without affecting the visual quality of the reconstructed image. Regions with approximately uniform intensities are successfully detected using the range, and these regions are approximated by their average. This procedure leads to a further reduction in the compression data rates. A method for preserving edges is introduced. It is shown that as more details are preserved around edges, the pictorial results improve dramatically. The ragged appearance of the edges in AMBTC is reduced or eliminated, leading to images far superior to those of AMBTC. For most of the images ACC yields a Root Mean Square Error smaller than that obtained by AMBTC. Decompression time is shown to be comparable to that of AMBTC for low threshold values and becomes significantly lower as the compression rate becomes smaller. An adaptive filter is introduced which helps recover lost texture at very low compression rates (0.8 to 0.6 b/p, depending on the degree of texture in the image). This algorithm is easy to implement since no special hardware is needed.
Applied Science, Faculty of
Electrical and Computer Engineering, Department of
Graduate
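For context, a minimal sketch of the AMBTC baseline that ACC is compared against, using the standard mean and first-absolute-moment formulation (an illustration, not code from the thesis):

```python
import numpy as np

def ambtc_block(block):
    """Absolute Moment Block Truncation Coding of one image block.

    Returns a bit plane plus two reconstruction levels chosen so that the
    block mean and first absolute central moment are preserved."""
    x = block.astype(float).ravel()
    m = x.size
    mean = x.mean()
    alpha = np.abs(x - mean).mean()          # first absolute central moment
    bitplane = x >= mean
    q = int(bitplane.sum())
    if q in (0, m):                          # uniform block: one level suffices
        return bitplane.reshape(block.shape), mean, mean
    low = mean - m * alpha / (2 * (m - q))
    high = mean + m * alpha / (2 * q)
    return bitplane.reshape(block.shape), low, high

def ambtc_reconstruct(bitplane, low, high):
    return np.where(bitplane, high, low)

block = np.array([[12, 14, 200, 202], [11, 13, 198, 205],
                  [10, 15, 201, 199], [12, 16, 204, 203]])
bp, lo, hi = ambtc_block(block)
print(np.round(ambtc_reconstruct(bp, lo, hi)))   # two-level approximation of the block
```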
APA, Harvard, Vancouver, ISO, and other styles
20

Misra, Manish. "On-line multivariate chemical data compression and validation using wavelets /." Digital version accessible at:, 1999. http://wwwlib.umi.com/cr/utexas/main.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Lewis, Andrew Benedict. "Reconstructing compressed photo and video data." Thesis, University of Cambridge, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.610141.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Lai, Wai Lok M. Eng Massachusetts Institute of Technology. "A probabilistic graphical model based data compression architecture for Gaussian sources." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/117322.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 107-108).
Data is compressible because of inherent redundancies in the data, mathematically expressed as correlation structures. A data compression algorithm uses the knowledge of these structures to map the original data to a different encoding. The two aspects of data compression, source modeling (i.e., using knowledge about the source) and coding (i.e., assigning an output sequence of symbols to each output), are not inherently related, but most existing algorithms mix the two and treat them as one. This work builds on recent research on model-code separation compression architectures to extend this concept into the domain of lossy compression of continuous sources, in particular Gaussian sources. To our knowledge, this is the first attempt at using sparse linear coding and discrete-continuous hybrid graphical model decoding for compressing continuous sources. With the flexibility afforded by the modularity of the architecture, we show that the proposed system is free from many inadequacies of existing algorithms, while achieving competitive compression rates. Moreover, the modularity allows for many architectural extensions, with capabilities unimaginable for existing algorithms, including refining of the source model after compression, robustness to data corruption, seamless interface with source model parameter learning, and joint homomorphic encryption-compression. This work, meant to be an exploration in a new direction in data compression, lies at the intersection of Electrical Engineering and Computer Science, tying together the disciplines of information theory, digital communication, data compression, machine learning, and cryptography.
by Wai Lok Lai.
M. Eng.
APA, Harvard, Vancouver, ISO, and other styles
23

Bhupathiraju, Kalyan Varma. "Empirical analysis of BWT-based lossless image compression." Morgantown, W. Va. : [West Virginia University Libraries], 2010. http://hdl.handle.net/10450/10958.

Full text
Abstract:
Thesis (M.S.)--West Virginia University, 2010.
Title from document title page. Document formatted into pages; contains v, 61 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 54-56).
APA, Harvard, Vancouver, ISO, and other styles
24

Jalumuri, Nandakishore R. "A study of scanning paths for BWT based image compression." Morgantown, W. Va. : [West Virginia University Libraries], 2004. https://etd.wvu.edu/etd/controller.jsp?moduleName=documentdata&jsp%5FetdId=3633.

Full text
Abstract:
Thesis (M.S.)--West Virginia University, 2004.
Title from document title page. Document formatted into pages; contains vii, 56 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 52-54).
APA, Harvard, Vancouver, ISO, and other styles
25

Offutt, Andrew Jefferson VI. "Automatic test data generation." Diss., Georgia Institute of Technology, 1988. http://hdl.handle.net/1853/9167.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Maniccam, Suchindran S. "Image-video compression, encryption and information hiding /." Online version via UMI:, 2001.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
27

Lam, Wai-Yeung. "XCQ : a framework for XML compression and querying /." View abstract or full-text, 2003. http://library.ust.hk/cgi/db/thesis.pl?COMP%202003%20LAM.

Full text
Abstract:
Thesis (M.Phil.)--Hong Kong University of Science and Technology, 2003.
Includes bibliographical references (leaves 142-147). Also available in electronic version. Access restricted to campus users.
APA, Harvard, Vancouver, ISO, and other styles
28

Sugaya, Andrew (Andrew Kiminari). "iDiary : compression, analysis, and visualization of GPS data to predict user activities." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/77009.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 91-93).
"What did you do today?" When we hear this question, we try to think back to our day's activities and locations. When we end up drawing a blank on the details of our day, we reply with a simple, "not much." Remembering our daily activities is a difficult task. For some, a manual diary works. For the rest of us, however, we don't have the time to (or simply don't want to) manually enter diary entries. The goal of this thesis is to create a system that automatically generates answers to questions about a user's history of activities and locations. This system uses a user's GPS data to identify locations that have been visited. Activities and terms associated with these locations are found using latent semantic analysis and then presented as a searchable diary. One of the big challenges of working with GPS data is the large amount of data that comes with it, which becomes difficult to store and analyze. This thesis solves this challenge by using compression algorithms to first reduce the amount of data. It is important that this compression does not reduce the fidelity of the information in the data or significantly alter the results of any analyses that may be performed on this data. After this compression, the system analyzes the reduced dataset to answer queries about the user's history. This thesis describes in detail the different components that come together to form this system. These components include the server architecture, the algorithms, the phone application for tracking GPS locations, the flow of data in the system, and the user interfaces for visualizing the results of the system. This thesis also implements this system and performs several experiments. The results show that it is possible to develop a system that automatically generates answers to queries about a user's history.
by Andrew Sugaya.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
29

Hou, Brian Ta-Cheng. "A VLSI architecture for a data compression engine in a communications network." Thesis, Massachusetts Institute of Technology, 1989. http://hdl.handle.net/1721.1/29856.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Lam, Bernard O. Thompson Michael Wayne Duren Russell Walker. "Implementation of lossless compression algorithms for the MIL-STD-1553." Waco, Tex. : Baylor University, 2008. http://hdl.handle.net/2104/5294.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Guo, Liwei. "Restoration and modeling for multimedia compression /." View abstract or full-text, 2008. http://library.ust.hk/cgi/db/thesis.pl?ECED%202008%20GUOL.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Jones, Greg 1963-2017. "RADIX 95n: Binary-to-Text Data Conversion." Thesis, University of North Texas, 1991. https://digital.library.unt.edu/ark:/67531/metadc500582/.

Full text
Abstract:
This paper presents Radix 95n, a binary to text data conversion algorithm. Radix 95n (base 95) is a variable length encoding scheme that offers slightly better efficiency than is available with conventional fixed length encoding procedures. Radix 95n advances previous techniques by allowing a greater pool of 7-bit combinations to be made available for 8-bit data translation. Since 8-bit data (i.e. binary files) can prove to be difficult to transfer over 7-bit networks, the Radix 95n conversion technique provides a way to convert data such as compiled programs or graphic images to printable ASCII characters and allows for their transfer over 7-bit networks.
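A simple illustration of base-95 conversion to printable ASCII; this is a generic sketch and not the exact variable-length Radix 95n scheme of the thesis:

```python
PRINTABLE = [chr(c) for c in range(32, 127)]   # the 95 printable ASCII characters

def to_base95(data: bytes) -> str:
    """Encode bytes as a base-95 string of printable characters, suitable
    for transfer over 7-bit channels."""
    if not data:
        return ""
    n = int.from_bytes(data, "big")
    digits = []
    while True:
        n, r = divmod(n, 95)
        digits.append(PRINTABLE[r])
        if n == 0:
            break
    return "".join(reversed(digits))

def from_base95(text: str, length: int) -> bytes:
    """Decode a base-95 string back to `length` bytes (length restores leading zeros)."""
    n = 0
    for ch in text:
        n = n * 95 + PRINTABLE.index(ch)
    return n.to_bytes(length, "big")

raw = b"\x00\xffbinary\x01"
enc = to_base95(raw)
print(enc, from_base95(enc, len(raw)) == raw)   # printable text, True
```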
APA, Harvard, Vancouver, ISO, and other styles
33

Homayouni, Hajar. "An Approach for Testing the Extract-Transform-Load Process in Data Warehouse Systems." Thesis, Colorado State University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10689298.

Full text
Abstract:

Enterprises use data warehouses to accumulate data from multiple sources for data analysis and research. Since organizational decisions are often made based on the data stored in a data warehouse, all its components must be rigorously tested. In this thesis, we first present a comprehensive survey of data warehouse testing approaches, and then develop and evaluate an automated testing approach for validating the Extract-Transform-Load (ETL) process, which is a common activity in data warehousing.

In the survey we present a classification framework that categorizes the testing and evaluation activities applied to the different components of data warehouses. These approaches include both dynamic analysis as well as static evaluation and manual inspections. The classification framework uses information related to what is tested in terms of the data warehouse component that is validated, and how it is tested in terms of various types of testing and evaluation approaches. We discuss the specific challenges and open problems for each component and propose research directions.

The ETL process involves extracting data from source databases, transforming it into a form suitable for research and analysis, and loading it into a data warehouse. ETL processes can use complex one-to-one, many-to-one, and many-to-many transformations involving sources and targets that use different schemas, databases, and technologies. Since faulty implementations in any of the ETL steps can result in incorrect information in the target data warehouse, ETL processes must be thoroughly validated. In this thesis, we propose automated balancing tests that check for discrepancies between the data in the source databases and that in the target warehouse. Balancing tests ensure that the data obtained from the source databases is not lost or incorrectly modified by the ETL process. First, we categorize and define a set of properties to be checked in balancing tests. We identify various types of discrepancies that may exist between the source and the target data, and formalize three categories of properties, namely, completeness, consistency, and syntactic validity that must be checked during testing. Next, we automatically identify source-to-target mappings from ETL transformation rules provided in the specifications. We identify one-to-one, many-to-one, and many-to-many mappings for tables, records, and attributes involved in the ETL transformations. We automatically generate test assertions to verify the properties for balancing tests. We use the source-to-target mappings to automatically generate assertions corresponding to each property. The assertions compare the data in the target data warehouse with the corresponding data in the sources to verify the properties.

We evaluate our approach on a health data warehouse that uses data sources with different data models running on different platforms. We demonstrate that our approach can find previously undetected real faults in the ETL implementation. We also provide an automatic mutation testing approach to evaluate the fault finding ability of our balancing tests. Using mutation analysis, we demonstrated that our auto-generated assertions can detect faults in the data inside the target data warehouse when faulty ETL scripts execute on mock source data.
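As a rough sketch of what one auto-generated balancing assertion might look like (the table names and the count-based completeness property are illustrative, not taken from the thesis):

```python
import sqlite3

def assert_record_count_balanced(source_conn, target_conn,
                                 source_table, target_table):
    """A completeness-style balancing assertion: every source record should
    reach the target, so the row counts must agree. Real ETL mappings may be
    many-to-one or many-to-many and need correspondingly adjusted assertions."""
    src = source_conn.execute(f"SELECT COUNT(*) FROM {source_table}").fetchone()[0]
    tgt = target_conn.execute(f"SELECT COUNT(*) FROM {target_table}").fetchone()[0]
    assert src == tgt, f"completeness violated: {src} source rows vs {tgt} target rows"

# Toy in-memory databases standing in for a source system and the warehouse.
src_db, tgt_db = sqlite3.connect(":memory:"), sqlite3.connect(":memory:")
src_db.execute("CREATE TABLE patients (id INTEGER, name TEXT)")
tgt_db.execute("CREATE TABLE dim_patient (id INTEGER, name TEXT)")
src_db.executemany("INSERT INTO patients VALUES (?, ?)", [(1, "a"), (2, "b")])
tgt_db.executemany("INSERT INTO dim_patient VALUES (?, ?)", [(1, "a"), (2, "b")])
assert_record_count_balanced(src_db, tgt_db, "patients", "dim_patient")
```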

APA, Harvard, Vancouver, ISO, and other styles
34

Tobkin, Toby. "Implementation and testing of a blackbox and a whitebox fuzzer for file compression routines." Honors in the Major Thesis, University of Central Florida, 2013. http://digital.library.ucf.edu/cdm/ref/collection/ETH/id/923.

Full text
Abstract:
Fuzz testing is a software testing technique that has risen to prominence over the past two decades. The unifying feature of all fuzz testers (fuzzers) is their ability to somehow automatically produce random test cases for software. Fuzzers can generally be placed in one of two classes: black-box or white-box. Blackbox fuzzers do not derive information from a program's source or binary in order to restrict the domain of their generated input, while whitebox fuzzers do. A tradeoff involved in the choice between blackbox and whitebox fuzzing is the rate at which inputs can be produced; since blackbox fuzzers need not do any "thinking" about the software under test to generate inputs, blackbox fuzzers can generate more inputs per unit time if all other factors are equal. The question of how blackbox and whitebox fuzzing should be used together for ideal economy of software testing has been posed and even speculated about, however, to my knowledge, no publicly available study with the intent of characterizing an answer exists. The purpose of this thesis is to provide an initial exploration of the bug-finding characteristics of blackbox and whitebox fuzzers. A blackbox fuzzer is implemented and extended with a concolic execution program to make it whitebox. Both versions of the fuzzer are then used to run tests on some small programs and some parts of a file compression library.
B.S.
Bachelors
Engineering and Computer Science
Electrical Engineering and Computer Science
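A minimal blackbox mutation fuzzer in the spirit described above (zlib stands in for the file compression library; this is an illustration, not the thesis's implementation):

```python
import random
import zlib

def mutate(data: bytes, max_flips: int = 8) -> bytes:
    """Blackbox mutation: flip a few random bytes with no knowledge of the
    program under test (contrast with whitebox/concolic input generation)."""
    buf = bytearray(data)
    for _ in range(random.randint(1, max_flips)):
        buf[random.randrange(len(buf))] ^= random.randrange(1, 256)
    return bytes(buf)

def fuzz_decompressor(seed: bytes, iterations: int = 1000):
    """Feed mutated compressed blobs to a decompression routine and record
    anything other than a clean accept/reject."""
    crashes = []
    for i in range(iterations):
        case = mutate(seed)
        try:
            zlib.decompress(case)
        except zlib.error:
            pass                      # malformed input rejected cleanly: expected
        except Exception as exc:      # unexpected failure mode worth triaging
            crashes.append((i, case, exc))
    return crashes

print(len(fuzz_decompressor(zlib.compress(b"hello compression world" * 10))))
```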
APA, Harvard, Vancouver, ISO, and other styles
35

Baek, Seungcheol. "High-performance memory system architectures using data compression." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/51863.

Full text
Abstract:
The Chip Multi-Processor (CMP) paradigm has cemented itself as the archetypal philosophy of future microprocessor design. Rapidly diminishing technology feature sizes have enabled the integration of ever-increasing numbers of processing cores on a single chip die. This abundance of processing power has magnified the venerable processor-memory performance gap, which is known as the "memory wall". To bridge this performance gap, a high-performing memory structure is needed. An attractive solution to overcoming this processor-memory performance gap is using compression in the memory hierarchy. In this thesis, to use compression techniques more efficiently, compressed cacheline size information is studied, and size-aware cache management techniques and hot-cacheline prediction for dynamic early decompression technique are proposed. Also, the proposed works in this thesis attempt to mitigate the limitations of phase change memory (PCM) such as low write performance and limited long-term endurance. One promising solution is the deployment of hybridized memory architectures that fuse dynamic random access memory (DRAM) and PCM, to combine the best attributes of each technology by using the DRAM as an off-chip cache. A dual-phase compression technique is proposed for high-performing DRAM/PCM hybrid environments and a multi-faceted wear-leveling technique is proposed for the long-term endurance of compressed PCM. This thesis also includes a new compression-based hybrid multi-level cell (MLC)/single-level cell (SLC) PCM management technique that aims to combine the performance edge of SLCs with the higher capacity of MLCs in a hybrid environment.
APA, Harvard, Vancouver, ISO, and other styles
36

Levy, Ian Karl. "Self-similarity and wavelet forms for the compression of still image and video data." Thesis, University of Warwick, 1998. http://wrap.warwick.ac.uk/4241/.

Full text
Abstract:
This thesis is concerned with the methods used to reduce the data volume required to represent still images and video sequences. The number of disparate still image and video coding methods increases almost daily. Recently, two new strategies have emerged and have stimulated widespread research. These are the fractal method and the wavelet transform. In this thesis, it will be argued that the two methods share a common principle: that of self-similarity. The two will be related concretely via an image coding algorithm which combines the two, normally disparate, strategies. The wavelet transform is an orientation selective transform. It will be shown that the selectivity of the conventional transform is not sufficient to allow exploitation of self-similarity while keeping computational cost low. To address this, a new wavelet transform is presented which allows for greater orientation selectivity, while maintaining the orthogonality and data volume of the conventional wavelet transform. Many designs for vector quantizers have been published recently and another is added to the gamut by this work. The tree structured vector quantizer presented here is on-line and self structuring, requiring no distinct training phase. Combining these into a still image data compression system produces results which are among the best that have been published to date. An extension of the two dimensional wavelet transform to encompass the time dimension is straightforward and this work attempts to extrapolate some of its properties into three dimensions. The vector quantizer is then applied to three dimensional image data to produce a video coding system which, while not optimal, produces very encouraging results.
APA, Harvard, Vancouver, ISO, and other styles
37

Currie, Daniel L. Campbell Hannelore. "Implementation and efficiency of steganographic techniques in bitmapped images and embedded data survivability against lossy compression schemes." Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1996. http://handle.dtic.mil/100.2/ADA311535.

Full text
Abstract:
Thesis (M.S. in Computer Science) Naval Postgraduate School, March 1996.
Thesis advisor(s): Cynthia E. Irvine, Harold Fredricksen. "March 1996." Includes bibliography references (p. 37). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
38

Savadatti-Kamath, Sanmati S. "Video analysis and compression for surveillance applications." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/26602.

Full text
Abstract:
Thesis (Ph.D)--Electrical and Computer Engineering, Georgia Institute of Technology, 2009.
Committee Chair: Dr. J. R. Jackson; Committee Member: Dr. D. Scott; Committee Member: Dr. D. V. Anderson; Committee Member: Dr. P. Vela; Committee Member: Dr. R. Mersereau. Part of the SMARTech Electronic Thesis and Dissertation Collection.
APA, Harvard, Vancouver, ISO, and other styles
39

Gergel, Barry, and University of Lethbridge Faculty of Arts and Science. "Automatic compression for image sets using a graph theoretical framework." Thesis, Lethbridge, Alta. : University of Lethbridge, Faculty of Arts and Science, 2007, 2007. http://hdl.handle.net/10133/538.

Full text
Abstract:
A new automatic compression scheme that adapts to any image set is presented in this thesis. The proposed scheme requires no a priori knowledge on the properties of the image set. This scheme is obtained using a unified graph-theoretical framework that allows for compression strategies to be compared both theoretically and experimentally. This strategy achieves optimal lossless compression by computing a minimum spanning tree of a graph constructed from the image set. For lossy compression, this scheme is near-optimal and a performance guarantee relative to the optimal one is provided. Experimental results demonstrate that this compression strategy compares favorably to the previously proposed strategies, with improvements up to 7% in the case of lossless compression and 72% in the case of lossy compression. This thesis also shows that the choice of underlying compression algorithm is important for compressing image sets using the proposed scheme.
x, 77 leaves ; 29 cm.
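A rough sketch of the minimum-spanning-tree idea, with zlib-compressed differences standing in for the edge weights (illustrative only; the thesis's framework and cost model may differ):

```python
import zlib
import numpy as np

def diff_cost(a: np.ndarray, b: np.ndarray) -> int:
    """Cost of storing image `a` given image `b`: compressed size of their
    difference. zlib is a stand-in for the underlying coder, whose choice
    the abstract notes is important."""
    return len(zlib.compress((a.astype(np.int16) - b.astype(np.int16)).tobytes()))

def mst_parent_assignment(images):
    """Prim's algorithm on a complete graph whose nodes are the images plus a
    virtual blank root (node 0); the tree edges say which image each image
    should be differenced against."""
    nodes = [np.zeros_like(images[0])] + list(images)   # node 0 = blank root
    n = len(nodes)
    in_tree, parent = {0}, {0: None}
    best = {i: (diff_cost(nodes[i], nodes[0]), 0) for i in range(1, n)}
    while len(in_tree) < n:
        i = min((k for k in best if k not in in_tree), key=lambda k: best[k][0])
        in_tree.add(i)
        parent[i] = best[i][1]
        for j in range(1, n):
            if j not in in_tree:
                c = diff_cost(nodes[j], nodes[i])
                if c < best[j][0]:
                    best[j] = (c, i)
    return parent   # parent[i] == 0 means: code image i on its own

imgs = [np.random.randint(0, 256, (8, 8)) for _ in range(4)]
imgs.append(imgs[0] + 1)          # a near-duplicate that should hang off imgs[0]
print(mst_parent_assignment(imgs))
```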
APA, Harvard, Vancouver, ISO, and other styles
40

Wong, Hon Wah. "Image watermarking and data hiding techniques /." View Abstract or Full-Text, 2003. http://library.ust.hk/cgi/db/thesis.pl?ELEC%202003%20WONGH.

Full text
Abstract:
Thesis (Ph. D.)--Hong Kong University of Science and Technology, 2003.
Includes bibliographical references (leaves 163-178). Also available in electronic version. Access restricted to campus users.
APA, Harvard, Vancouver, ISO, and other styles
41

Chapin, Brenton. "Higher Compression from the Burrows-Wheeler Transform with New Algorithms for the List Update Problem." Thesis, University of North Texas, 2001. https://digital.library.unt.edu/ark:/67531/metadc2909/.

Full text
Abstract:
Burrows-Wheeler compression is a three stage process in which the data is transformed with the Burrows-Wheeler Transform, then transformed with Move-To-Front, and finally encoded with an entropy coder. Move-To-Front, Transpose, and Frequency Count are some of the many algorithms used on the List Update problem. In 1985, Competitive Analysis first showed the superiority of Move-To-Front over Transpose and Frequency Count for the List Update problem with arbitrary data. Earlier studies due to Bitner assumed independent identically distributed data, and showed that while Move-To-Front adapts to a distribution faster, incurring less overwork, the asymptotic costs of Frequency Count and Transpose are less. The improvements to Burrows-Wheeler compression this work covers are increases in the amount, not speed, of compression. Best x of 2x-1 is a new family of algorithms created to improve on Move-To-Front's processing of the output of the Burrows-Wheeler Transform which is like piecewise independent identically distributed data. Other algorithms for both the middle stage of Burrows-Wheeler compression and the List Update problem for which overwork, asymptotic cost, and competitive ratios are also analyzed are several variations of Move One From Front and part of the randomized algorithm Timestamp. The Best x of 2x - 1 family includes Move-To-Front, the part of Timestamp of interest, and Frequency Count. Lastly, a greedy choosing scheme, Snake, switches back and forth as the amount of compression that two List Update algorithms achieves fluctuates, to increase overall compression. The Burrows-Wheeler Transform is based on sorting of contexts. The other improvements are better sorting orders, such as “aeioubcdf...” instead of standard alphabetical “abcdefghi...” on English text data, and an algorithm for computing orders for any data, and Gray code sorting instead of standard sorting. Both techniques lessen the overwork incurred by whatever List Update algorithms are used by reducing the difference between adjacent sorted contexts.
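For reference, a minimal sketch of the Move-To-Front stage that the Best x of 2x-1 family generalizes:

```python
def move_to_front_encode(data: bytes):
    """Move-To-Front stage of Burrows-Wheeler compression: each symbol is
    replaced by its current index in a list, and the symbol then moves to
    the front, so recently seen symbols get small indices."""
    table = list(range(256))
    out = []
    for byte in data:
        idx = table.index(byte)
        out.append(idx)
        table.pop(idx)
        table.insert(0, byte)
    return out

# Runs produced by the BWT turn into runs of zeros, which the final
# entropy-coding stage then compresses well.
print(move_to_front_encode(b"aaabbbaaa"))   # [97, 0, 0, 98, 0, 0, 1, 0, 0]
```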
APA, Harvard, Vancouver, ISO, and other styles
42

Enriquez, Jesus A. "Lossless compression of Bayer array images using mixed-lattice lifting transforms." To access this resource online via ProQuest Dissertations and Theses @ UTEP, 2008. http://0-proquest.umi.com.lib.utep.edu/login?COPT=REJTPTU0YmImSU5UPTAmVkVSPTI=&clientId=2515.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Chaulklin, Douglas Gary. "Evaluation of ANSI compression in a bulk data file transfer system." Master's thesis, This resource online, 1991. http://scholar.lib.vt.edu/theses/available/etd-01202010-020213/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Haley, Brent Kreh. "A Pipeline for the Creation, Compression, and Display of Streamable 3D Motion Capture Based Skeletal Animation Data." The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1300989069.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Chen, Howard. "AZIP, audio compression system: Research on audio compression, comparison of psychoacoustic principles and genetic algorithms." CSUSB ScholarWorks, 2005. https://scholarworks.lib.csusb.edu/etd-project/2617.

Full text
Abstract:
The purpose of this project is to investigate the differences between psychoacoustic principles and genetic algorithms (GAs). These will be discussed separately. The review will also compare the compression ratio and the quality of the decompressed files decoded by these two methods.
APA, Harvard, Vancouver, ISO, and other styles
46

Sullivan, Kevin Michael. "An image delta compression tool: IDelta." CSUSB ScholarWorks, 2004. https://scholarworks.lib.csusb.edu/etd-project/2543.

Full text
Abstract:
The purpose of this thesis is to present a modified version of the algorithm used in the open source differencing tool zdelta, entitled "iDelta". This algorithm will manage file data and will be built specifically to difference images in the Photoshop file format.
APA, Harvard, Vancouver, ISO, and other styles
47

Merkl, Frank J. "Binary image compression using run length encoding and multiple scanning techniques /." Online version of thesis, 1988. http://hdl.handle.net/1850/8309.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Joshi, Amit Krishna. "Exploiting Alignments in Linked Data for Compression and Query Answering." Wright State University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=wright1496142816700187.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Allan, Todd Stuart 1964. "Adaptive digital image data compression using RIDPCM and a neural network for subimage classification." Thesis, The University of Arizona, 1992. http://hdl.handle.net/10150/278109.

Full text
Abstract:
Recursive Interpolated Differential Pulse Code Modulation (RIDPCM) is a fast and efficient method of digital image data compression. It is a simple algorithm which produces a high quality reconstructed image at a low bit rate. However, RIDPCM compresses the entire image the same regardless of image detail. This paper introduces a variation on RIDPCM which adapts the bit rate according to the detail of the image. Adaptive RIDPCM (ARIDPCM) is accomplished by dividing the original image into smaller subimages and extracting features from them. These subimage features are passed through a trained neural network classifier. The output of the network is a class label which denotes the estimated subimage activity level or subimage type. Each class is assigned a specific bit rate and the subimage information is quantized accordingly. ARIDPCM produces a reconstructed image of higher quality than RIDPCM with the benefit of a further reduced bit rate.
APA, Harvard, Vancouver, ISO, and other styles
50

Radley, Johannes Jurgens. "Pseudo-random access compressed archive for security log data." Thesis, Rhodes University, 2015. http://hdl.handle.net/10962/d1020019.

Full text
Abstract:
We are surrounded by an increasing number of devices and applications that produce a huge quantity of machine-generated data. Almost all machine data contains some element of security information that can be used to discover, monitor and investigate security events. This work proposes a pseudo-random access compressed storage method for log data, to be used with an information retrieval system that in turn provides the ability to search and correlate log data and the corresponding events. We explain the method for converting log files into distinct events and storing the events in a compressed file. This yields an entry identifier for each log entry that provides a pointer that can be used by indexing methods. The research also evaluates the compression performance penalties incurred by using this storage system, including a decreased compression ratio as well as increased compression and decompression times.
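A simplified sketch of block-wise compression with per-entry identifiers, illustrating the pseudo-random access trade-off (not the thesis's exact storage format):

```python
import zlib

ENTRIES_PER_BLOCK = 4   # small for illustration; a real archive would use more

def build_archive(entries):
    """Compress log entries in fixed-size blocks. Each entry's identifier is
    (block_index, offset_in_block), so a lookup only decompresses one block."""
    blocks, ids = [], []
    for i in range(0, len(entries), ENTRIES_PER_BLOCK):
        chunk = entries[i:i + ENTRIES_PER_BLOCK]
        blocks.append(zlib.compress("\n".join(chunk).encode()))
        ids += [(len(blocks) - 1, j) for j in range(len(chunk))]
    return blocks, ids

def fetch(blocks, entry_id):
    block_index, offset = entry_id
    return zlib.decompress(blocks[block_index]).decode().split("\n")[offset]

logs = [f"2015-01-01T00:00:{s:02d} sshd[42]: Failed password for root" for s in range(10)]
blocks, ids = build_archive(logs)
print(fetch(blocks, ids[7]))   # decompresses only one block, not the whole archive
```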
APA, Harvard, Vancouver, ISO, and other styles