Dissertations / Theses on the topic 'Data compression (Computer science) – Testing'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Data compression (Computer science) – Testing.'
You can also download the full text of each publication as a PDF and read its abstract online whenever these are available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Persson, Jon. "Deterministisk Komprimering/Dekomprimering av Testvektorer med Hjälp av en Inbyggd Processor och Faxkodning" [Deterministic Compression/Decompression of Test Vectors Using an Embedded Processor and Facsimile Coding]. Thesis, Linköping University, Department of Computer and Information Science, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2855.
Modern semiconductor design methods make it possible to design increasingly complex systems-on-a-chip (SOCs). Testing such SOCs becomes highly expensive as test data volumes, and with them test times, grow rapidly. Several approaches exist that compress the test stimuli and add hardware for decompression. This master's thesis presents a test data compression method based on a modified facsimile code. An embedded processor on the SOC is used to decompress the data and apply it to the cores of the SOC. Using already existing hardware reduces the need for additional hardware.
Test data may be rearranged in various ways that affect the compression ratio; several such modifications are discussed and tested. To be practical, a decompression algorithm has to be able to run on a system with limited resources. An assembler implementation shows that the proposed method can be realized effectively in such environments. Experimental results on benchmark circuits show that the method compares well with similar methods.
A method of including the response vector is also presented. This approach makes it possible to abort a test as soon as an error is discovered while still compressing the data used. To compare the test response correctly with the expected one, the data needs to include don't-care bits; the technique uses a mask vector to mark them. The test vector, response vector and mask vector are merged in four different ways to find the best arrangement.
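To make the mask-vector technique concrete, here is a minimal sketch in Python (the bit-string encoding and vector values are illustrative assumptions, not the thesis's format): the mask marks the don't-care positions, and a test can be aborted at the first unmasked mismatch.

```python
def response_matches(actual, expected, mask):
    """Compare a test response against the expected one, ignoring
    positions the mask marks as don't-care (mask bit = '1').

    All arguments are equal-length strings of '0'/'1' characters;
    this encoding is hypothetical, chosen for readability.
    """
    assert len(actual) == len(expected) == len(mask)
    return all(m == '1' or a == e
               for a, e, m in zip(actual, expected, mask))

# A test can be aborted at the first failing comparison:
responses = [("1011", "1001", "0010"),   # mismatch at bit 2 is masked: pass
             ("1111", "1001", "0010")]   # unmasked mismatch at bit 1: fail
for actual, expected, mask in responses:
    if not response_matches(actual, expected, mask):
        print("error detected - aborting test")
        break
```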
Steinruecken, Christian. "Lossless data compression." Thesis, University of Cambridge, 2015. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.709134.
Barr, Kenneth C. (Kenneth Charles) 1978. "Energy aware lossless data compression." Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/87316.
Deng, Mo Ph D. Massachusetts Institute of Technology. "On compression of encrypted data." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/106100.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 93-96).
In this thesis, I took advantage of a model-free compression architecture, in which the encoder makes decisions only about coding and leaves it to the decoder to apply knowledge of the source, to attack the problem of compressing encrypted data. Results for compressing different sources encrypted by different classes of ciphers are shown and analyzed. Moreover, we generalize the problem from encryption schemes to operations, or data-processing techniques, and try to discover the key properties an operation should have in order to enable good post-operation compression performance.
by Mo Deng.
S.M. in Electrical Engineering
Lee, Joshua Ka-Wing. "A model-adaptive universal data compression architecture with applications to image compression." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/111868.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 59-61).
In this thesis, I designed and implemented a model-adaptive data compression system for the compression of image data. The system is a realization and extension of the Model-Quantizer-Code-Separation Architecture for universal data compression, which uses Low-Density Parity-Check codes for encoding and probabilistic graphical models with message-passing algorithms for decoding. We implement a lossless bi-level image compressor as well as a lossy greyscale image compressor and explain how these compressors can rapidly adapt to changes in source models. Using these implementations, we then show that Restricted Boltzmann Machines are an effective source model for compressing image data, comparing compression performance with these source models against other methods on various image datasets.
by Joshua Ka-Wing Lee.
S.M.
Toufie, Moegamat Zahir. "Real-time loss-less data compression." Thesis, Cape Technikon, 2000. http://hdl.handle.net/20.500.11838/1367.
Data stored on disks generally contains significant redundancy. A mechanism or algorithm that recodes the data to lessen its size could possibly double or triple the effective amount of data that can be stored on the media. One way of doing this is data compression. Many compression algorithms currently exist, but each one has its own advantages as well as disadvantages. The objective of this study is to formulate a new compression algorithm that could be implemented in real-time mode in any file system. The new compression algorithm should also execute as fast as possible, so as not to cause a lag in the file system's performance. This study focuses on binary data of any type, whereas previous articles such as (Huffman, 1952:1098), (Ziv & Lempel, 1977:337; 1978:530), (Storer & Szymanski, 1982:928) and (Welch, 1984:8) have placed particular emphasis on text compression in their discussions of compression algorithms for computer data. The compression algorithm formulated by this study is Lempel-Ziv-Toufie (LZT). LZT is basically an LZ77 (Ziv & Lempel, 1977:337) encoder with a buffer equal in size to the data block of the file system in question. Unlike LZ77, however, LZT discards the sliding-buffer principle and uses each data block of the input stream as one big buffer on which compression is performed. LZT also handles the encoding of a match slightly differently from LZ77: an LZT match is encoded by two bit streams, the first specifying the position of the match and the other specifying its length. This combination is commonly referred to as a …
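A minimal sketch of the block-buffer match encoding this abstract describes, under simplifying assumptions (the real LZT emits two separate bit streams for match position and length, and its block sizes follow the file system; the token layout below is illustrative only):

```python
def lzt_like_compress(block: bytes, min_match: int = 3):
    """Toy LZ77-style encoder that, as in the LZT description above,
    treats a whole data block as its buffer rather than a sliding
    window. Emits ("match", pos, length) tokens and ("lit", byte)
    tokens; this token format is an assumption, not LZT's."""
    tokens, i = [], 0
    while i < len(block):
        best_pos, best_len = 0, 0
        for j in range(i):  # search everything seen so far in the block
            k = 0
            while i + k < len(block) and block[j + k] == block[i + k]:
                k += 1  # overlapping matches are fine: decode copies bytewise
            if k > best_len:
                best_pos, best_len = j, k
        if best_len >= min_match:
            tokens.append(("match", best_pos, best_len))
            i += best_len
        else:
            tokens.append(("lit", block[i]))
            i += 1
    return tokens

def lzt_like_decompress(tokens) -> bytes:
    out = bytearray()
    for t in tokens:
        if t[0] == "lit":
            out.append(t[1])
        else:
            _, pos, length = t
            for k in range(length):      # bytewise copy handles overlap
                out.append(out[pos + k])
    return bytes(out)

data = b"abcabcabcabd"
assert lzt_like_decompress(lzt_like_compress(data)) == data
```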
Aggarwal, Viveka. "Lossless Data Compression for Security Purposes Using Huffman Encoding." University of Cincinnati / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1456848208.
Cabrera-Mercader, Carlos R. (Carlos Rubén). "Robust compression of multispectral remote sensing data." Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/9338.
Includes bibliographical references (p. 241-246).
This thesis develops efficient and robust non-reversible coding algorithms for multispectral remote sensing data. Although many efficient non-reversible coding algorithms have been proposed for such data, their application is often limited due to the risk of excessively degrading the data if, for example, changes in sensor characteristics and atmospheric/surface statistics occur. On the other hand, reversible coding algorithms are inherently robust to variable conditions but they provide only limited compression when applied to data from most modern remote sensors. The algorithms developed in this work achieve high data compression by preserving only data variations containing information about the ideal, noiseless spectrum, and by exploiting inter-channel correlations in the data. The algorithms operate on calibrated data modeled as the sum of the ideal spectrum and an independent noise component due to sensor noise, calibration error, and, possibly, impulsive noise. Coding algorithms are developed for data with and without impulsive noise. In both cases an estimate of the ideal spectrum is computed first, and then that estimate is coded efficiently. This estimator-coder structure is implemented mainly using data-dependent matrix operators and scalar quantization. Both coding algorithms are robust to slow instrument drift, which is addressed by appropriate calibration, and to outlier channels. The outliers are preserved by separately coding the noise estimates in addition to the signal estimates so that they may be reconstructed at the original resolution. In addition, for data free of impulsive noise the coding algorithm adapts to changes in the second-order statistics of the data by estimating those statistics from each block of data to be coded. The coding algorithms were tested on data simulated for the NASA 2378-channel Atmospheric Infrared Sounder (AIRS). Near-lossless compression ratios of up to 32:1 (0.4 bits/pixel/channel) were obtained in the absence of impulsive noise, without preserving outliers, and assuming the nominal noise covariance. An average noise variance reduction of 12-14 dB was obtained simultaneously for data blocks of 2400-7200 spectra. Preserving outlier channels for which the noise estimates exceed three times the estimated noise rms value would require no more than 0.08 bits/pixel/channel provided the outliers arise from the assumed noise distribution. If contaminant outliers occurred, higher bit rates would be required. Similar performance was obtained for spectra corrupted by few impulses.
by Carlos R. Cabrera-Mercader.
Ph.D.
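The estimator-coder structure above is built from data-dependent matrix operators and scalar quantization. The sketch below illustrates that general style of pipeline (a per-block principal-component projection followed by uniform scalar quantization); it is a generic illustration, not the AIRS algorithm itself, and the component count, step size, and synthetic data are arbitrary assumptions.

```python
import numpy as np

def encode_block(spectra, n_components=20, step=0.05):
    """Sketch: exploit inter-channel correlation in a block of spectra
    (rows = spectra, columns = channels) by projecting onto the block's
    own principal components, then scalar-quantizing the coefficients.
    n_components and step are illustrative, not values from the thesis."""
    mean = spectra.mean(axis=0)
    centered = spectra - mean
    # data-dependent matrix operator: eigenvectors of the block covariance
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    basis = eigvecs[:, -n_components:]                    # top components
    coeffs = centered @ basis
    quantized = np.round(coeffs / step).astype(np.int32)  # scalar quantizer
    return quantized, basis, mean, step

def decode_block(quantized, basis, mean, step):
    return (quantized * step) @ basis.T + mean

rng = np.random.default_rng(0)
# synthetic correlated "spectra": low-rank signal plus sensor noise
signal = rng.normal(size=(500, 5)) @ rng.normal(size=(5, 100))
block = signal + 0.01 * rng.normal(size=signal.shape)
q, B, m, s = encode_block(block)
rmse = np.sqrt(((decode_block(q, B, m, s) - block) ** 2).mean())
print(f"reconstruction RMSE: {rmse:.4f}")
```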
Lehman, Eric (Eric Allen) 1970. "Approximation algorithms for grammar-based data compression." Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/87172.
Includes bibliographical references (p. 109-113).
This thesis considers the smallest grammar problem: find the smallest context-free grammar that generates exactly one given string. We show that this problem is intractable, and so our objective is to find approximation algorithms. This simple question is connected to many areas of research. Most importantly, there is a link to data compression; instead of storing a long string, one can store a small grammar that generates it. A small grammar for a string also naturally brings out underlying patterns, a fact that is useful, for example, in DNA analysis. Moreover, the size of the smallest context-free grammar generating a string can be regarded as a computable relaxation of Kolmogorov complexity. Finally, work on the smallest grammar problem qualitatively extends the study of approximation algorithms to hierarchically-structured objects. In this thesis, we establish hardness results, evaluate several previously proposed algorithms, and then present new procedures with much stronger approximation guarantees.
by Eric Lehman.
Ph.D.
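To illustrate grammar-based compression concretely: one well-known approximation strategy for the smallest grammar problem, RePair-style pair replacement, is sketched below. It is offered as a representative example of the idea, not as one of the specific algorithms evaluated in the thesis.

```python
from collections import Counter

def repair_grammar(s: str):
    """RePair-style sketch: repeatedly replace the most frequent
    adjacent symbol pair with a fresh nonterminal. Returns the final
    sequence plus the grammar rules; nonterminals are ints, terminals
    are single characters. Illustrative, quadratic-time implementation."""
    seq = list(s)
    rules = {}
    next_nt = 0
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        pair, count = pairs.most_common(1)[0]
        if count < 2:
            break  # no pair repeats: the grammar cannot shrink further
        rules[next_nt] = pair
        new_seq, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                new_seq.append(next_nt)
                i += 2
            else:
                new_seq.append(seq[i])
                i += 1
        seq, next_nt = new_seq, next_nt + 1
    return seq, rules

def expand(sym, rules):
    if isinstance(sym, str):
        return sym
    a, b = rules[sym]
    return expand(a, rules) + expand(b, rules)

s = "abababab"
seq, rules = repair_grammar(s)
assert "".join(expand(x, rules) for x in seq) == s
print(seq, rules)   # [1, 1] with rules 0 -> ('a','b'), 1 -> (0, 0)
```

The grammar generates exactly the original string, so storing the rules plus the final sequence is a lossless encoding; on repetitive inputs it can be much smaller than the input.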
Koutsogiannis, Vassilis. "A study of color image data compression /." Online version of thesis, 1992. http://hdl.handle.net/1850/11060.
Bunton, Suzanne. "On-line stochastic processes in data compression /." Thesis, Connect to this title online; UW restricted, 1996. http://hdl.handle.net/1773/6931.
Otten, Frederick John. "Using semantic knowledge to improve compression on log files." Thesis, Rhodes University, 2008. http://eprints.ru.ac.za/1660/.
Zhao, Ying. "Turbo codes for data compression and joint source-channel coding." Access to citation, abstract and download form provided by ProQuest Information and Learning Company; downloadable PDF file, 112 p, 2007. http://proquest.umi.com/pqdlink?did=1251904871&Fmt=7&clientId=79356&RQT=309&VName=PQD.
Sestok, Charles K. (Charles Kasimer). "Data selection in binary hypothesis testing." Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/16613.
Includes bibliographical references (p. 119-123).
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Traditionally, statistical signal processing algorithms are developed from probabilistic models for data. The design of the algorithms and their ultimate performance depend upon these assumed models. In certain situations, collecting or processing all available measurements may be inefficient or prohibitively costly. A potential technique to cope with such situations is data selection, where a subset of the measurements that can be collected and processed in a cost-effective manner is used as input to the signal processing algorithm. Careful evaluation of the selection procedure is important, since the probabilistic description of distinct data subsets can vary significantly. An algorithm designed for the probabilistic description of a poorly chosen data subset can lose much of the potential performance available to a well-chosen subset. This thesis considers algorithms for data selection combined with binary hypothesis testing. We develop models for data selection in several cases, considering both random and deterministic approaches. Our considerations are divided into two classes depending upon the amount of information available about the competing hypotheses. In the first class, the target signal is precisely known, and data selection is done deterministically. In the second class, the target signal belongs to a large class of random signals, selection is performed randomly, and semi-parametric detectors are developed.
by Charles K. Sestok, IV.
Ph.D.
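For the first class described above (a precisely known target signal with deterministic selection), one natural rule keeps the measurements where the signal is strongest and applies a matched filter restricted to that subset. The sketch below assumes i.i.d. Gaussian noise and illustrative sizes; it is an illustration of the data-selection idea, not the detectors developed in the thesis.

```python
import numpy as np

def select_and_detect(x, s, k):
    """Sketch: deterministic data selection for detecting a known signal
    s in i.i.d. Gaussian noise (an assumption made for this example).
    Keep the k measurements where |s| is largest, then apply a matched
    filter restricted to that subset."""
    idx = np.argsort(np.abs(s))[-k:]        # selected measurement subset
    stat = x[idx] @ s[idx]                  # restricted matched filter
    threshold = 0.5 * (s[idx] @ s[idx])     # midpoint of the two hypothesis means
    return stat > threshold, stat

rng = np.random.default_rng(1)
s = rng.normal(size=100)                    # known target signal
for name, x in (("signal+noise", s + rng.normal(size=100)),
                ("noise only  ", rng.normal(size=100))):
    decide, stat = select_and_detect(x, s, k=25)
    print(name, f"stat={stat:6.1f}", "detected" if decide else "not detected")
```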
Weston, Bron O. Duren Russell Walker Thompson Michael Wayne. "Data compression application to the MIL-STD 1553 avionics data bus." Waco, Tex. : Baylor University, 2005. http://hdl.handle.net/2104/2882.
Todd, Martin Peter. "Image data compression based on a multiresolution signal model." Thesis, University of Warwick, 1989. http://wrap.warwick.ac.uk/100937/.
Griffin, Joseph C. "Exploring data compression for a distributed aerial relay application." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/113152.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (page 85).
Beamforming systems typically involve arrays of antenna elements with controllable spacing and little or no motion. However, a distributed beamforming system could leave array geometry and motion largely unconstrained. This work considers an airborne relay communication concept with multiple balloons in which the individual array elements act as relays to a receiver on the ground at a base station. The beamforming operation is performed at the receiver. The link between the relays and receiver suffers from a high bandwidth requirement. This thesis explores ways to reduce this bandwidth requirement by compressing the signals across the relays. A distributed compression algorithm is proposed and applied to both simulated and collected data. We conclude that a compression algorithm across the relays offers a substantial decrease in bit rate requirement, and that a preprocessing step can make the compression performance robust against differential delay and Doppler shifts across the array.
by Joseph C. Griffin.
M. Eng.
Mueller, Jessie L. "Data compression, storage, and viewing in classroom learning partner." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/77029.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 59-60).
In this thesis, we present the design and implementation of a data storage and viewing system for students' classroom work. Our system, which extends the classroom interaction system called Classroom Learning Partner, collects answers sent by students for in-class exercises and allows the teacher to browse through these answers, annotate them, and display them to the class on a public projector. To increase and improve data transmission, our system first intelligently compresses student work. These submissions can be manipulated by a teacher in real time and are also saved to a database for future viewing and study. This dual functionality allows for the analysis of student work from multiple lessons at the same time, as well as backup of student work in case of system failure. Teachers can compare the work from multiple students, as well as create portfolios of student work over time. The data storage and viewing system gives both teachers and researchers a view of students' learning and of how students interact with the software system.
by Jessie L. Mueller.
M.Eng.
Nasiopoulos, Panagiotis. "Adaptive compression coding." Thesis, University of British Columbia, 1988. http://hdl.handle.net/2429/28508.
Applied Science, Faculty of
Electrical and Computer Engineering, Department of
Graduate
Misra, Manish. "On-line multivariate chemical data compression and validation using wavelets /." Digital version accessible at:, 1999. http://wwwlib.umi.com/cr/utexas/main.
Lewis, Andrew Benedict. "Reconstructing compressed photo and video data." Thesis, University of Cambridge, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.610141.
Full textLai, Wai Lok M. Eng Massachusetts Institute of Technology. "A probabilistic graphical model based data compression architecture for Gaussian sources." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/117322.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 107-108).
Data is compressible because of inherent redundancies in the data, mathematically expressed as correlation structures. A data compression algorithm uses the knowledge of these structures to map the original data to a different encoding. The two aspects of data compression, source modeling (i.e., using knowledge about the source) and coding (i.e., assigning an output sequence of symbols to each source output), are not inherently related, but most existing algorithms mix the two and treat them as one. This work builds on recent research on model-code separation compression architectures to extend this concept into the domain of lossy compression of continuous sources, in particular Gaussian sources. To our knowledge, this is the first attempt to use sparse linear coding with discrete-continuous hybrid graphical-model decoding for compressing continuous sources. With the flexibility afforded by the modularity of the architecture, we show that the proposed system is free from many inadequacies of existing algorithms while achieving competitive compression rates. Moreover, the modularity allows for many architectural extensions, with capabilities unimaginable for existing algorithms, including refining the source model after compression, robustness to data corruption, a seamless interface with source model parameter learning, and joint homomorphic encryption-compression. This work, meant to be an exploration in a new direction in data compression, is at the intersection of Electrical Engineering and Computer Science, tying together the disciplines of information theory, digital communication, data compression, machine learning, and cryptography.
by Wai Lok Lai.
M. Eng.
Bhupathiraju, Kalyan Varma. "Empirical analysis of BWT-based lossless image compression." Morgantown, W. Va. : [West Virginia University Libraries], 2010. http://hdl.handle.net/10450/10958.
Title from document title page. Document formatted into pages; contains v, 61 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 54-56).
Jalumuri, Nandakishore R. "A study of scanning paths for BWT based image compression." Morgantown, W. Va. : [West Virginia University Libraries], 2004. https://etd.wvu.edu/etd/controller.jsp?moduleName=documentdata&jsp%5FetdId=3633.
Title from document title page. Document formatted into pages; contains vii, 56 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 52-54).
Offutt, Andrew Jefferson VI. "Automatic test data generation." Diss., Georgia Institute of Technology, 1988. http://hdl.handle.net/1853/9167.
Maniccam, Suchindran S. "Image-video compression, encryption and information hiding /." Online version via UMI:, 2001.
Lam, Wai-Yeung. "XCQ : a framework for XML compression and querying /." View abstract or full-text, 2003. http://library.ust.hk/cgi/db/thesis.pl?COMP%202003%20LAM.
Includes bibliographical references (leaves 142-147). Also available in electronic version. Access restricted to campus users.
Sugaya, Andrew (Andrew Kiminari). "iDiary : compression, analysis, and visualization of GPS data to predict user activities." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/77009.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 91-93).
"What did you do today?" When we hear this question, we try to think back to our day's activities and locations. When we end up drawing a blank on the details of our day, we reply with a simple, "not much." Remembering our daily activities is a difficult task. For some, a manual diary works. For the rest of us, however, we don't have the time to (or simply don't want to) manually enter diary entries. The goal of this thesis is to create a system that automatically generates answers to questions about a user's history of activities and locations. This system uses a user's GPS data to identify locations that have been visited. Activities and terms associated with these locations are found using latent semantic analysis and then presented as a searchable diary. One of the big challenges of working with GPS data is the large amount of data that comes with it, which becomes difficult to store and analyze. This thesis solves this challenge by using compression algorithms to first reduce the amount of data. It is important that this compression does not reduce the fidelity of the information in the data or significantly alter the results of any analyses that may be performed on this data. After this compression, the system analyzes the reduced dataset to answer queries about the user's history. This thesis describes in detail the different components that come together to form this system. These components include the server architecture, the algorithms, the phone application for tracking GPS locations, the flow of data in the system, and the user interfaces for visualizing the results of the system. This thesis also implements this system and performs several experiments. The results show that it is possible to develop a system that automatically generates answers to queries about a user's history.
by Andrew Sugaya.
M.Eng.
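The compression step described in this abstract must shrink a GPS trace without significantly altering later analyses. One standard family of techniques for this is trajectory simplification; the Douglas-Peucker sketch below shows the flavor of such lossy GPS compression. The thesis's own algorithm is not specified in the abstract, so this stands in as a generic example, and the tolerance value is an arbitrary assumption.

```python
import math

def perpendicular_distance(p, a, b):
    """Distance from point p to the line through segment endpoints a, b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    if (ax, ay) == (bx, by):
        return math.hypot(px - ax, py - ay)
    num = abs((bx - ax) * (ay - py) - (ax - px) * (by - ay))
    return num / math.hypot(bx - ax, by - ay)

def douglas_peucker(points, tolerance):
    """Keep only points that deviate from a straight-line fit by more
    than `tolerance`; a common lossy simplification for GPS traces."""
    if len(points) < 3:
        return points
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        d = perpendicular_distance(points[i], points[0], points[-1])
        if d > dmax:
            dmax, index = d, i
    if dmax <= tolerance:
        return [points[0], points[-1]]          # the whole span is "straight"
    left = douglas_peucker(points[: index + 1], tolerance)
    right = douglas_peucker(points[index:], tolerance)
    return left[:-1] + right

trace = [(0, 0), (1, 0.05), (2, -0.02), (3, 0.1), (4, 5), (5, 5.1), (6, 5)]
print(douglas_peucker(trace, tolerance=0.5))    # the corner at (4, 5) survives
```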
Hou, Brian Ta-Cheng. "A VLSI architecture for a data compression engine in a communications network." Thesis, Massachusetts Institute of Technology, 1989. http://hdl.handle.net/1721.1/29856.
Lam, Bernard O. Thompson Michael Wayne Duren Russell Walker. "Implementation of lossless compression algorithms for the MIL-STD-1553." Waco, Tex. : Baylor University, 2008. http://hdl.handle.net/2104/5294.
Guo, Liwei. "Restoration and modeling for multimedia compression /." View abstract or full-text, 2008. http://library.ust.hk/cgi/db/thesis.pl?ECED%202008%20GUOL.
Jones, Greg 1963-2017. "RADIX 95n: Binary-to-Text Data Conversion." Thesis, University of North Texas, 1991. https://digital.library.unt.edu/ark:/67531/metadc500582/.
Homayouni, Hajar. "An Approach for Testing the Extract-Transform-Load Process in Data Warehouse Systems." Thesis, Colorado State University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10689298.
Enterprises use data warehouses to accumulate data from multiple sources for data analysis and research. Since organizational decisions are often made based on the data stored in a data warehouse, all its components must be rigorously tested. In this thesis, we first present a comprehensive survey of data warehouse testing approaches, and then develop and evaluate an automated testing approach for validating the Extract-Transform-Load (ETL) process, which is a common activity in data warehousing.
In the survey we present a classification framework that categorizes the testing and evaluation activities applied to the different components of data warehouses. These approaches include both dynamic analysis as well as static evaluation and manual inspections. The classification framework uses information related to what is tested in terms of the data warehouse component that is validated, and how it is tested in terms of various types of testing and evaluation approaches. We discuss the specific challenges and open problems for each component and propose research directions.
The ETL process involves extracting data from source databases, transforming it into a form suitable for research and analysis, and loading it into a data warehouse. ETL processes can use complex one-to-one, many-to-one, and many-to-many transformations involving sources and targets that use different schemas, databases, and technologies. Since faulty implementations in any of the ETL steps can result in incorrect information in the target data warehouse, ETL processes must be thoroughly validated. In this thesis, we propose automated balancing tests that check for discrepancies between the data in the source databases and that in the target warehouse. Balancing tests ensure that the data obtained from the source databases is not lost or incorrectly modified by the ETL process. First, we categorize and define a set of properties to be checked in balancing tests. We identify various types of discrepancies that may exist between the source and the target data, and formalize three categories of properties, namely, completeness, consistency, and syntactic validity that must be checked during testing. Next, we automatically identify source-to-target mappings from ETL transformation rules provided in the specifications. We identify one-to-one, many-to-one, and many-to-many mappings for tables, records, and attributes involved in the ETL transformations. We automatically generate test assertions to verify the properties for balancing tests. We use the source-to-target mappings to automatically generate assertions corresponding to each property. The assertions compare the data in the target data warehouse with the corresponding data in the sources to verify the properties.
We evaluate our approach on a health data warehouse that uses data sources with different data models running on different platforms. We demonstrate that our approach can find previously undetected real faults in the ETL implementation. We also provide an automatic mutation testing approach to evaluate the fault-finding ability of our balancing tests. Using mutation analysis, we demonstrate that our auto-generated assertions can detect faults in the data inside the target data warehouse when faulty ETL scripts execute on mock source data.
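A sketch of what auto-generated balancing assertions can look like for a one-to-one table mapping (the schema, table names, and mapping are hypothetical; the three checks are simplified versions of the completeness, consistency, and syntactic validity properties described above):

```python
import sqlite3

# Minimal sketch of balancing tests: compare source data against the
# target warehouse after an ETL run. Schemas and the one-to-one mapping
# below are hypothetical, for illustration only.
src = sqlite3.connect(":memory:")
tgt = sqlite3.connect(":memory:")
src.execute("CREATE TABLE patients (id INTEGER, age INTEGER)")
tgt.execute("CREATE TABLE dim_patient (patient_id INTEGER, age INTEGER)")
src.executemany("INSERT INTO patients VALUES (?, ?)", [(1, 34), (2, 57)])
tgt.executemany("INSERT INTO dim_patient VALUES (?, ?)", [(1, 34), (2, 57)])

def scalar(conn, sql):
    return conn.execute(sql).fetchone()[0]

# Completeness: no records lost by the ETL process (record counts agree).
assert scalar(src, "SELECT COUNT(*) FROM patients") == \
       scalar(tgt, "SELECT COUNT(*) FROM dim_patient")

# Consistency: an aggregate over a mapped numeric attribute is preserved.
assert scalar(src, "SELECT SUM(age) FROM patients") == \
       scalar(tgt, "SELECT SUM(age) FROM dim_patient")

# Syntactic validity: the mapped attribute stays within its domain.
assert scalar(tgt, "SELECT COUNT(*) FROM dim_patient "
                   "WHERE age IS NULL OR age < 0") == 0
print("balancing assertions passed")
```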
Tobkin, Toby. "Implementation and testing of a blackbox and a whitebox fuzzer for file compression routines." Honors in the Major Thesis, University of Central Florida, 2013. http://digital.library.ucf.edu/cdm/ref/collection/ETH/id/923.
B.S.
Bachelors
Engineering and Computer Science
Electrical Engineering and Computer Science
Baek, Seungcheol. "High-performance memory system architectures using data compression." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/51863.
Levy, Ian Karl. "Self-similarity and wavelet forms for the compression of still image and video data." Thesis, University of Warwick, 1998. http://wrap.warwick.ac.uk/4241/.
Full textCurrie, Daniel L. Campbell Hannelore. "Implementation and efficiency of steganographic techniques in bitmapped images and embedded data survivability against lossy compression schemes." Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1996. http://handle.dtic.mil/100.2/ADA311535.
Thesis advisor(s): Cynthia E. Irvine, Harold Fredricksen. "March 1996." Includes bibliographical references (p. 37). Also available online.
Savadatti-Kamath, Sanmati S. "Video analysis and compression for surveillance applications." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/26602.
Committee Chair: Dr. J. R. Jackson; Committee Member: Dr. D. Scott; Committee Member: Dr. D. V. Anderson; Committee Member: Dr. P. Vela; Committee Member: Dr. R. Mersereau. Part of the SMARTech Electronic Thesis and Dissertation Collection.
Gergel, Barry, and University of Lethbridge Faculty of Arts and Science. "Automatic compression for image sets using a graph theoretical framework." Thesis, Lethbridge, Alta. : University of Lethbridge, Faculty of Arts and Science, 2007. http://hdl.handle.net/10133/538.
x, 77 leaves ; 29 cm.
Wong, Hon Wah. "Image watermarking and data hiding techniques /." View Abstract or Full-Text, 2003. http://library.ust.hk/cgi/db/thesis.pl?ELEC%202003%20WONGH.
Includes bibliographical references (leaves 163-178). Also available in electronic version. Access restricted to campus users.
Chapin, Brenton. "Higher Compression from the Burrows-Wheeler Transform with New Algorithms for the List Update Problem." Thesis, University of North Texas, 2001. https://digital.library.unt.edu/ark:/67531/metadc2909/.
Enriquez, Jesus A. "Lossless compression of Bayer array images using mixed-lattice lifting transforms." To access this resource online via ProQuest Dissertations and Theses @ UTEP, 2008. http://0-proquest.umi.com.lib.utep.edu/login?COPT=REJTPTU0YmImSU5UPTAmVkVSPTI=&clientId=2515.
Chaulklin, Douglas Gary. "Evaluation of ANSI compression in a bulk data file transfer system." Master's thesis, This resource online, 1991. http://scholar.lib.vt.edu/theses/available/etd-01202010-020213/.
Haley, Brent Kreh. "A Pipeline for the Creation, Compression, and Display of Streamable 3D Motion Capture Based Skeletal Animation Data." The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1300989069.
Chen, Howard. "AZIP, audio compression system: Research on audio compression, comparison of psychoacoustic principles and genetic algorithms." CSUSB ScholarWorks, 2005. https://scholarworks.lib.csusb.edu/etd-project/2617.
Sullivan, Kevin Michael. "An image delta compression tool: IDelta." CSUSB ScholarWorks, 2004. https://scholarworks.lib.csusb.edu/etd-project/2543.
Merkl, Frank J. "Binary image compression using run length encoding and multiple scanning techniques /." Online version of thesis, 1988. http://hdl.handle.net/1850/8309.
Joshi, Amit Krishna. "Exploiting Alignments in Linked Data for Compression and Query Answering." Wright State University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=wright1496142816700187.
Allan, Todd Stuart 1964. "Adaptive digital image data compression using RIDPCM and a neural network for subimage classification." Thesis, The University of Arizona, 1992. http://hdl.handle.net/10150/278109.
Radley, Johannes Jurgens. "Pseudo-random access compressed archive for security log data." Thesis, Rhodes University, 2015. http://hdl.handle.net/10962/d1020019.