
Dissertations / Theses on the topic 'Digital image compression'


Consult the top 50 dissertations / theses for your research on the topic 'Digital image compression.'


1

Abdul-Amir, Said. "Digital image compression." Thesis, De Montfort University, 1985. http://hdl.handle.net/2086/10681.

Abstract:
Due to the rapid growth in information handling and transmission, there is a serious demand for more efficient data compression schemes. Compression schemes address themselves to speech, visual and alphanumeric coded data. This thesis is concerned with the compression of visual data given in the form of still or moving pictures; such data is highly correlated spatially and in the context domain. A detailed study of some existing data compression systems is presented; in particular, the performance of DPCM was analysed by computer simulation, and the results examined both subjectively and objectively. The adaptive form of the prediction encoder is discussed and two new algorithms proposed, which increase the definition of the compressed image and reduce the overall mean square error. Two novel systems are proposed for image compression. The first is a bit-plane image coding system based on a hierarchic quadtree structure in a transform domain, using the Hadamard transform as a kernel. Good compression has been achieved with this scheme, particularly for images with low detail. The second scheme uses a learning automaton to predict the probability distribution of the grey levels of an image related to its spatial context and position. An optimal reward/punishment function is proposed such that the automaton converges to its steady state within 4000 iterations. Such a high speed of convergence, together with Huffman coding, results in efficient compression for images and is shown to be applicable to other types of data. The performance of all the proposed systems has been evaluated by computer simulation and the results presented both quantitatively and qualitatively. The advantages and disadvantages of each system are discussed and suggestions for improvement given.
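
As a rough sketch of the prediction loop inside a DPCM encoder of the kind analysed above (illustrative only, not the author's encoder; the predictor coefficient and quantizer step are assumed values):

```python
import numpy as np

def dpcm_encode(row, predictor=0.95, step=8):
    """First-order DPCM along one image row: predict each pixel from the
    previous reconstructed pixel and quantize the prediction error."""
    recon_prev = 0.0
    codes, recon = [], []
    for x in row.astype(float):
        pred = predictor * recon_prev
        err = x - pred
        q = int(round(err / step))      # uniform quantizer (assumed step)
        recon_prev = pred + q * step    # decoder-side reconstruction
        codes.append(q)
        recon.append(recon_prev)
    return np.array(codes), np.array(recon)

row = np.array([100, 102, 104, 110, 130, 131, 129, 128])
codes, recon = dpcm_encode(row)
mse = np.mean((row - recon) ** 2)       # overall mean square error
print(codes, round(mse, 2))
```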
2

Tokdemir, Serpil. "Digital compression on GPU." unrestricted, 2006. http://etd.gsu.edu/theses/available/etd-12012006-154433/.

Abstract:
Thesis (M.S.)--Georgia State University, 2006.
Title from dissertation title page. Saeid Belkasim, committee chair; Ying Zhu, A.P. Preethy, committee members. Electronic text (90 p. : ill. (some col.)). Description based on contents viewed May 2, 2007. Includes bibliographical references (p. 78-81).
3

Wyllie, Michael. "A comparative quantitative approach to digital image compression." Huntington, WV : [Marshall University Libraries], 2006. http://www.marshall.edu/etd/descript.asp?ref=719.

4

Truong, Huy S. "Signal compression for digital television." Curtin University of Technology, School of Electrical and Computer Engineering, 1999. http://espace.library.curtin.edu.au:80/R/?func=dbin-jump-full&object_id=12068.

Abstract:
Still image and image sequence compression plays an important role in the development of digital television. Although various still image and image sequence compression algorithms have already been developed, it is very difficult for them to achieve both compression performance and coding efficiency simultaneously, due to the complexity of the compression process itself. As a result, improvements in the form of hybrid coding, coding procedure refinements, new algorithms and even new coding concepts have been constantly tried, some offering very encouraging results. In this thesis, Block Adaptive Classified Vector Quantisation (BACVQ) is developed as an alternative algorithm for still image compression. BACVQ achieves good compression performance and coding efficiency by combining variable block-size coding and classified VQ. Its performance is further enhanced by adopting both spatial and transform domain criteria for the image block segmentation and classification process. Alternative algorithms have also been developed to accelerate the normal codebook searching operation and to determine the optimal sizes of classified VQ sub-codebooks. For image sequence compression, an adaptive spatial/temporal compression algorithm has been developed which divides an image sequence into smaller groups of pictures (GOP) using adaptive scene segmentation, before BACVQ and variable block-size motion-compensated predictive coding are applied to the intraframe and interframe coding processes. The application of the proposed adaptive scene segmentation algorithm, an alternative motion estimation strategy and a new progressive motion estimation algorithm enables the performance and efficiency of the compression process to be improved even further. Apart from improving still image and image sequence compression algorithms, the application of parallel processing to image sequence compression is also investigated. Parallel image compression offers a more effective approach than its sequential counterparts to accelerate the compression process and bring it closer to real-time operation. In this study, a small-scale parallel digital signal processing platform has been constructed to support parallel image sequence compression, consisting of a 486DX33 IBM/PC serving as master processor and two DSP (PC-32) cards as parallel processors. Because most image processing operations are independent in their processing and spatial arrangement, an effective parallel image sequence compression algorithm has been developed on this platform, significantly reducing the processing time of the proposed compression algorithms.
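
The codebook search that BACVQ accelerates can be illustrated with a plain full-search vector quantiser (a minimal sketch; the toy codebook and blocks are invented, and a real classified VQ would search a per-class sub-codebook):

```python
import numpy as np

def vq_encode(blocks, codebook):
    """Map each image block to the index of its nearest codeword
    (full-search vector quantisation, squared-error distortion)."""
    idx = []
    for b in blocks:
        d = np.sum((codebook - b) ** 2, axis=1)
        idx.append(int(np.argmin(d)))
    return idx

# toy 2x2 blocks flattened to length-4 vectors; the codebook is an assumption
codebook = np.array([[0, 0, 0, 0],
                     [128, 128, 128, 128],
                     [255, 255, 255, 255]], dtype=float)
blocks = np.array([[10, 12, 9, 11], [200, 190, 210, 205]], dtype=float)
print(vq_encode(blocks, codebook))   # -> [0, 2]
```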
5

Fu, Deng Yuan. "ADAPTIVE DIGITAL IMAGE DATA COMPRESSION BY RECURSIVE IDPCM." Thesis, The University of Arizona, 1985. http://hdl.handle.net/10150/275350.

6

Ng, King-to, and 吳景濤. "Compression techniques for image-based representations." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2003. http://hub.hku.hk/bib/B31244646.

7

Khire, Sourabh Mohan. "Time-sensitive communication of digital images, with applications in telepathology." Thesis, Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/29761.

Abstract:
Thesis (M. S.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2010.
Committee Chair: Jayant, Nikil; Committee Member: Anderson, David; Committee Member: Lee, Chin-Hui. Part of the SMARTech Electronic Thesis and Dissertation Collection.
8

Addlesee, Michael Dennis. "Aspects of image compression using the subband technique." Thesis, University of Cambridge, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.385880.

9

Sanikomm, Vikas Kumar Reddy. "Hardware Implementation of a Novel Image Compression Algorithm." ScholarWorks@UNO, 2006. http://scholarworks.uno.edu/td/1032.

Abstract:
Image-related communications are forming an increasingly large part of modern communications, bringing the need for efficient and effective compression. Image compression is important for effective storage and transmission of images. Many techniques have been developed in the past, including transform coding, vector quantization and neural networks. In this thesis, a novel adaptive compression technique is introduced, based on adaptive rather than fixed transforms for image compression. The proposed technique is similar to neural network (NN)-based image compression, and its superiority over other techniques is presented. It is shown that the proposed algorithm results in higher image quality for a given compression ratio than existing neural network algorithms, and that its training is significantly faster than that of the NN-based algorithms. The technique is also compared to JPEG in terms of Peak Signal to Noise Ratio (PSNR) for a given compression ratio and computational complexity. Advantages of this idea over JPEG are also presented in this thesis.
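
The PSNR figure of merit used in this comparison is computed as follows (a minimal sketch with made-up pixel values):

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak Signal to Noise Ratio in dB between an original image
    and its reconstruction after lossy compression."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

a = np.array([[52, 55], [61, 59]], dtype=np.uint8)
b = np.array([[50, 54], [60, 60]], dtype=np.uint8)
print(round(psnr(a, b), 2))   # ~45.7 dB for this tiny example
```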
10

Akhlaghian, Tab Fardin. "Multiresolution scalable image and video segmentation." Access electronically, 2005. http://www.library.uow.edu.au/adt-NWU/public/adt-NWU20060227.100704/index.html.

11

Williams, Saunya Michelle. "Effects of image compression on data interpretation for telepathology." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/42762.

Abstract:
When geographical distance poses a barrier, telepathology offers pathologists the opportunity to replicate their normal activities through an alternative means of practice. Rapid progress in technology has greatly increased the appeal of telepathology and its use in multiple domains. Telepathology systems help provide teleconsultation services for remote locations, improve workload distribution in clinical environments, support quality assurance, and enhance educational programs. While telepathology is attractive to many potential users, the resource requirements for digitizing microscopic specimens have hindered widespread adoption, and the use of image compression is critical to advancing the pervasiveness of digital images in pathology. This research characterizes two different methods for assessing compression of pathology images. In the first, image quality assessment is human-based and completely subjective in its interpretation; in the second, image analysis uses machine-based interpretation to provide objective results, which may also help confirm tumor classification. With these two methods, the purpose of this dissertation is to quantify the effects of image compression on data interpretation as seen by human experts and by a computerized algorithm for use in telepathology.
12

Nolte, Ernst Hendrik. "Image compression quality measurement : a comparison of the performance of JPEG and fractal compression on satellite images." Thesis, Stellenbosch : Stellenbosch University, 2000. http://hdl.handle.net/10019.1/51796.

Abstract:
Thesis (MEng)--Stellenbosch University, 2000.
The purpose of this thesis is to investigate the nature of digital image compression and the calculation of the quality of the compressed images. The work is focused on greyscale images in the domain of satellite images and aerial photographs. Two compression techniques are studied in detail, namely the JPEG and fractal compression methods. Implementations of both techniques are then applied to a set of test images. The rest of the thesis is dedicated to investigating the measurement of the loss of quality introduced by the compression. A general method for quality measurement (Signal to Noise Ratio) is discussed, as well as a technique presented in the literature quite recently (Grey Block Distance). Hereafter, a new measure is presented, followed by a means of comparing the performance of these measures. It was found that the new measure for image quality estimation performed marginally better than the SNR algorithm. Lastly, some possible improvements to this technique are mentioned and the validity of the method used for comparing the quality measures is discussed.
13

Nasiopoulos, Panagiotis. "Adaptive compression coding." Thesis, University of British Columbia, 1988. http://hdl.handle.net/2429/28508.

Abstract:
An adaptive image compression coding technique, ACC, is presented. This algorithm is shown to preserve edges and give better-quality decompressed pictures and better compression ratios than those of Absolute Moment Block Truncation Coding (AMBTC). Lookup tables are used to achieve better compression rates without affecting the visual quality of the reconstructed image. Regions with approximately uniform intensities are successfully detected using the range, and these regions are approximated by their average, leading to a further reduction in the compressed data rates. A method for preserving edges is introduced; it is shown that as more detail is preserved around edges, the pictorial results improve dramatically. The ragged appearance of edges in AMBTC is reduced or eliminated, yielding images far superior to those of AMBTC. For most images, ACC yields a smaller Root Mean Square Error than AMBTC. Decompression time is shown to be comparable to that of AMBTC for low threshold values and becomes significantly lower as the compression rate decreases. An adaptive filter is introduced which helps recover lost texture at very low compression rates (0.8 to 0.6 b/p, depending on the degree of texture in the image). The algorithm is easy to implement since no special hardware is needed.
Faculty of Applied Science, Department of Electrical and Computer Engineering.
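
For reference, baseline AMBTC, the coder ACC is measured against, represents each block by a bitmap plus two levels chosen to preserve the block mean and first absolute central moment (a sketch of standard AMBTC, not of ACC itself):

```python
import numpy as np

def ambtc_block(block):
    """Absolute Moment Block Truncation Coding of one block: keep the
    mean, the first absolute central moment and a binary bitmap."""
    x = block.astype(float).ravel()
    n = x.size
    mean = x.mean()
    alpha = np.abs(x - mean).mean()      # first absolute central moment
    bitmap = x >= mean
    q = int(bitmap.sum())                # pixels quantized to the high level
    if q in (0, n):                      # uniform block: just the mean
        return bitmap.reshape(block.shape), mean, mean
    low = mean - n * alpha / (2 * (n - q))
    high = mean + n * alpha / (2 * q)
    return bitmap.reshape(block.shape), low, high

block = np.array([[100, 104], [180, 176]])
bitmap, low, high = ambtc_block(block)
recon = np.where(bitmap, high, low)      # decoder reconstruction
print(recon)                             # [[102. 102.] [178. 178.]]
```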
14

Brower, Bernard V. "Evaluation of digital image compression algorithms for use on laptop computers /." Online version of thesis, 1992. http://hdl.handle.net/1850/11893.

15

El-Sakka, Mahmoud R. "Adaptive digital image compression based on segmentation and block classification." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape11/PQDD_0001/NQ44784.pdf.

16

El-Sakka, Mahmoud R. "Adaptive digital image compression based on segmentation and block classification." Ottawa : National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.nlc-bnc.ca/obj/s4/f2/dsk1/tape11/PQDD%5F0001/NQ44784.pdf.

17

Wong, Chi-wah Alec, and 王梓樺. "Exploiting wireless link adaptation and region-of-interest processing to improve real-time scalable video transmission." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hub.hku.hk/bib/B29804152.

18

Gao, Wenfeng. "Real-time video postprocessing algorithms and metrics /." Thesis, Connect to this title online; UW restricted, 2003. http://hdl.handle.net/1773/5913.

19

Keisarian, Farhad. "A pyramid image coder using Block Template Matching (BTM) algorithm." Thesis, University of Nottingham, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.263412.

20

Dorensky, Oleksandr. "Comparative research of the color brightness distortion of the image compressed by DHT." Thesis, Lviv Polytechnic Publishing House, 2013. http://dspace.kntu.kr.ua/jspui/handle/123456789/2961.

Abstract:
This work examines the use of the DHT for digital image compression. Results of a comparative study of the impact of DHT compression on the brightness of color digital images are provided.
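
Assuming DHT here denotes the two-dimensional discrete Hadamard transform (our assumption; the abstract does not expand the acronym), a minimal transform of an image block looks like this:

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of the n x n Hadamard matrix (n a power of 2)."""
    h = np.array([[1.0]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h

def hadamard_2d(block):
    """Two-dimensional Hadamard transform of a square image block."""
    n = block.shape[0]
    h = hadamard(n)
    return h @ block.astype(float) @ h.T / n   # simple scaling choice

block = np.array([[52, 55, 61, 59],
                  [79, 61, 55, 52],
                  [62, 59, 55, 61],
                  [63, 65, 66, 63]])
print(np.round(hadamard_2d(block), 1))  # energy concentrates in low-order terms
```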
21

Liu, Sam J. "Low bit-rate image and video compression using adaptive segmentation and quantization." Diss., Georgia Institute of Technology, 1993. http://hdl.handle.net/1853/14850.

22

Sullivan, Kevin Michael. "An image delta compression tool: IDelta." CSUSB ScholarWorks, 2004. https://scholarworks.lib.csusb.edu/etd-project/2543.

Abstract:
The purpose of this thesis is to present a modified version of the algorithm used in the open source differencing tool zdelta, entitled "iDelta". This algorithm will manage file data and will be built specifically to difference images in the Photoshop file format.
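
To convey the flavour of delta compression (illustrative only; this is not the zdelta/iDelta algorithm), one can difference the new file against the old so that unchanged regions become zero bytes, then compress the result:

```python
import zlib

def delta_compress(old: bytes, new: bytes) -> bytes:
    """Crude delta step: XOR the new file against the old one so unchanged
    regions become zero bytes, then let zlib squeeze the zeros out."""
    n = max(len(old), len(new))
    old = old.ljust(n, b"\x00")
    new = new.ljust(n, b"\x00")
    diff = bytes(a ^ b for a, b in zip(old, new))
    return zlib.compress(diff, 9)

# hypothetical near-identical files standing in for two Photoshop revisions
old = b"\x89PSD" + b"\x00" * 1000 + b"layer-A"
new = b"\x89PSD" + b"\x00" * 1000 + b"layer-B"
print(len(delta_compress(old, new)))  # far smaller than compressing `new` alone
```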
23

鄧世健 and Sai-kin Owen Tang. "Implementation of Low bit-rate image codec." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1994. http://hub.hku.hk/bib/B31212670.

24

Collaer, Marcia Lee. "IMAGE DATA COMPRESSION: DIFFERENTIAL PULSE CODE MODULATION OF TOMOGRAPHIC PROJECTIONS." Thesis, The University of Arizona, 1985. http://hdl.handle.net/10150/291412.

25

Aburas, Abdul Razag Ali. "Data compression schemes for pattern recognition in digital images using fractals." Thesis, De Montfort University, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.391231.

26

Man, Hong. "On efficiency and robustness of adaptive quantization for subband coding of images and video sequences." Diss., Georgia Institute of Technology, 1999. http://hdl.handle.net/1853/15003.

27

Wong, Hon Wah. "Image watermarking and data hiding techniques /." View Abstract or Full-Text, 2003. http://library.ust.hk/cgi/db/thesis.pl?ELEC%202003%20WONGH.

Abstract:
Thesis (Ph. D.)--Hong Kong University of Science and Technology, 2003.
Includes bibliographical references (leaves 163-178). Also available in electronic version. Access restricted to campus users.
28

Arrowood, Joseph Louis Jr. "Theory and application of adaptive filter banks." Diss., Georgia Institute of Technology, 1999. http://hdl.handle.net/1853/15369.

29

Liu, Hain-Ching. "Automatic scene detection in MPEG digital video for random access indexing and MPEG compression optimization /." Thesis, Connect to this title online; UW restricted, 1995. http://hdl.handle.net/1773/6001.

30

Tang, Sai-kin Owen. "Implementation of Low bit-rate image codec /." [Hong Kong] : University of Hong Kong, 1994. http://sunzi.lib.hku.hk/hkuto/record.jsp?B14804402.

31

Sinha, Anurag R. "Optimization of a new digital image compression algorithm based on nonlinear dynamical systems /." Online version of thesis, 2008. http://hdl.handle.net/1850/5544.

32

Abdelkarim, Ahmad Ali. "Effect of JPEG2000 compression on landmark identification of lateral cephalometric digital radiographs a thesis /." San Antonio : UTHSC, 2008. http://learningobjects.library.uthscsa.edu/cdm4/item_viewer.php?CISOROOT=/theses&CISOPTR=57&CISOBOX=1&REC=16.

33

Papadopoulos, Constantinos A. "The use of geometric transformations for motion compensation in video data compression." Thesis, King's College London (University of London), 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.264923.

34

Wei, Ming. "A Study of Perceptually Tuned, Wavelet Based, Rate Scalable, Image and Video Compression." Thesis, University of North Texas, 2002. https://digital.library.unt.edu/ark:/67531/metadc3074/.

Abstract:
In this dissertation, we first propose and implement a new perceptually tuned, wavelet-based, rate-scalable color image encoding/decoding system based on a human perceptual model. It builds on state-of-the-art embedded wavelet image compression, uses the Contrast Sensitivity Function (CSF) of the Human Visual System (HVS), and extends the scheme to handle optimal bit allocation among multiple bands, such as Y, Cb, and Cr. Our experimental image codec shows very promising compression performance and visual quality compared to the new wavelet-based international still image compression standard, JPEG 2000. It also shows significantly better speed performance and comparable visual quality against the best available rate-scalable color image codec, CSPIHT, which is based on Set Partitioning in Hierarchical Trees (SPIHT) and the Karhunen-Loeve Transform (KLT). Secondly, a novel wavelet-based interframe compression scheme has been developed and put into practice, based on the Flexible Block Wavelet Transform (FBWT) that we have developed. FBWT-based interframe compression is efficient in both compression and speed. The compression performance of our video codec is compared with H.263+: at the same bit rate, our encoder, with a slightly lower Peak Signal to Noise Ratio (PSNR) value, produces a more visually pleasing result, while preserving the scalability of wavelet embedded coding. Thirdly, the scheme for optimal bit allocation among color bands for still imagery has been modified and extended to accommodate the spatial-temporal sensitivity of the HVS model. The bit allocation among color bands, based on Kelly's spatio-temporal CSF model, is designed to achieve the perceptual optimum for human eyes, and a perceptually tuned, wavelet-based, rate-scalable video encoding/decoding system has been designed and implemented on top of this new allocation scheme. Finally, to demonstrate potential applications of our rate-scalable video codec, a prototype system for rate-scalable video streaming over the Internet has been designed and implemented to deal with the bandwidth unpredictability of the Internet.
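
The wavelet decomposition underlying such embedded coders can be sketched with a single Haar analysis level (illustrative; the dissertation's FBWT is a different transform):

```python
import numpy as np

def haar_2d(img):
    """One level of a 2-D Haar wavelet transform: split the image into
    approximation (LL) and detail (LH, HL, HH) subbands."""
    x = img.astype(float)
    # transform rows into lowpass and highpass halves
    lo = (x[:, 0::2] + x[:, 1::2]) / 2
    hi = (x[:, 0::2] - x[:, 1::2]) / 2
    # transform columns of each half
    ll = (lo[0::2, :] + lo[1::2, :]) / 2
    lh = (lo[0::2, :] - lo[1::2, :]) / 2
    hl = (hi[0::2, :] + hi[1::2, :]) / 2
    hh = (hi[0::2, :] - hi[1::2, :]) / 2
    return ll, lh, hl, hh

img = np.arange(16).reshape(4, 4)
ll, lh, hl, hh = haar_2d(img)
print(ll)   # most of the energy stays in the LL band
```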
35

Tsujiguchi, Vitor Hitoshi. "Identificação da correlação entre as características das imagens de documentos e os impactos na fidelidade visual em função da taxa de compressão." Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-19032012-112737/.

Abstract:
Document images are digitized documents with textual content, composed of characters and their layout, with common characteristics such as the presence of borders and boundaries in the shape of each character. The relationship between the characteristics of document images and the impact of the compression process on visual fidelity is analyzed herein. Objective metrics are employed to analyze the characteristics of document images, such as the Image Activity Measure (IAM) in the spatial domain and the Spectral Activity Measure (SAM) in the spectral domain. The performance of image compression techniques based on the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT) is evaluated by applying different compression levels to document images with each technique. The experiments are performed on digital images of printed documents and manuscripts from books and magazines, covering texts written from the 16th to the 19th century; this material was collected from the Brasiliana Digital Library (www.brasiliana.usp.br) in Brazil. Experimental results show that the activity measures in the spatial and spectral domains directly influence the visual fidelity of compressed images for both DCT- and DWT-based techniques. For a fixed compression ratio with either technique, higher IAM values and lower SAM levels in the reference image result in less visual fidelity after compression.
36

Kang, Jung Won. "Effective temporal video segmentation and content-based audio-visual video clustering." Diss., Georgia Institute of Technology, 2003. http://hdl.handle.net/1853/13731.

37

Allan, Todd Stuart 1964. "Adaptive digital image data compression using RIDPCM and a neural network for subimage classification." Thesis, The University of Arizona, 1992. http://hdl.handle.net/10150/278109.

Abstract:
Recursive Interpolated Differential Pulse Code Modulation (RIDPCM) is a fast and efficient method of digital image data compression: a simple algorithm that produces a high-quality reconstructed image at a low bit rate. However, RIDPCM compresses the entire image uniformly, regardless of image detail. This thesis introduces a variation on RIDPCM which adapts the bit rate according to the detail of the image. Adaptive RIDPCM (ARIDPCM) divides the original image into smaller subimages and extracts features from them. These subimage features are passed through a trained neural network classifier whose output is a class label denoting the estimated subimage activity level or subimage type. Each class is assigned a specific bit rate and the subimage information is quantized accordingly. ARIDPCM produces a reconstructed image of higher quality than RIDPCM, with the benefit of a further reduced bit rate.
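
The classification-driven bit allocation can be sketched as below; note that the thesis trains a neural network on extracted features, whereas this toy version substitutes simple variance thresholds (all thresholds and bit assignments are invented):

```python
import numpy as np

def classify_subimages(img, size=8):
    """Assign each subimage an activity class from its variance; higher
    activity earns more quantizer bits (thresholds are assumptions)."""
    bits = {0: 1, 1: 2, 2: 4}              # class -> bits per sample
    plan = []
    for r in range(0, img.shape[0], size):
        for c in range(0, img.shape[1], size):
            v = img[r:r+size, c:c+size].astype(float).var()
            cls = 0 if v < 50 else 1 if v < 500 else 2
            plan.append(((r, c), cls, bits[cls]))
    return plan

img = np.random.default_rng(0).integers(0, 256, (16, 16))
for pos, cls, b in classify_subimages(img):
    print(pos, "class", cls, "->", b, "bits/sample")
```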
38

Testoni, Vanessa. "Contribuições em codificação de imagens e vídeo = Contributions in image and video coding." [s.n.], 2011. http://repositorio.unicamp.br/jspui/handle/REPOSIP/261024.

Abstract:
Advisor: Max Henrique Machado Costa. Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação.
The image and video coding community has often been working on new advances that go beyond traditional image and video architectures. This work is a set of contributions to various topics that have received increasing attention from researchers in the community, namely scalable coding, low-complexity coding for portable devices, multiview video coding and run-time adaptive coding. The first contribution studies the performance of three fast block-based 3-D transforms in a low-complexity video codec, named the Fast Embedded Video Codec (FEVC). New implementation methods and scanning orders are proposed for the transforms. The 3-D coefficients are encoded bit-plane by bit-plane by entropy coders, producing a fully embedded output bitstream. All implementation is performed using 16-bit integer arithmetic; only additions and bit shifts are necessary, thus lowering computational complexity. Even with these constraints, reasonable rate-distortion performance can be achieved, and the encoding time is significantly smaller (around 160 times) when compared to the H.264/AVC standard. The second contribution is the optimization of a recent approach proposed for multiview video coding in videoconferencing and other similar unicast-like applications. The target scenario is providing realistic 3-D video with free-viewpoint video at good compression rates. To achieve this, weights are computed for each view and mapped into quantization parameters. In this work, the previously proposed ad-hoc mapping between weights and quantization parameters is shown to be quasi-optimum for a Gaussian source, and an optimum mapping is derived for a typical video source. The third contribution exploits several strategies for adaptive scanning of transform coefficients in the JPEG XR standard. The original global adaptive scanning order applied in JPEG XR is compared with the localized and hybrid scanning methods proposed in this work; these new orders require no changes in the other coding and decoding stages or in the bitstream definition. The fourth and last contribution proposes a hierarchical signal-dependent block-based transform. Hierarchical transforms usually exploit the residual cross-level information at the entropy coding step, but not at the transform step. The transform proposed in this work is an energy compaction technique that can also exploit these cross-resolution-level structural similarities; the core idea is to include in the hierarchical transform a number of adaptive basis functions derived from the lower resolution of the signal. A full image codec is developed in order to measure the performance of the new transform, and the obtained results are discussed in this work.
39

Klausutis, Timothy J. "Adaptive lapped transforms with applications to image coding." Diss., Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/15925.

40

Nair, Prashant. "Designing low power SRAM system using energy compression." Thesis, Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/47663.

Abstract:
Power consumption in commercial processors and application-specific integrated circuits increases as technology nodes shrink, and power-saving techniques have become a first-class design concern for current and future VLSI systems. These systems employ large on-chip SRAM memories, so reducing memory leakage power while maintaining data integrity is a key criterion for modern designs. Unfortunately, state-of-the-art techniques like power gating can only be applied to logic, as they would destroy the contents of an SRAM if applied to the memory itself. Fortunately, previous works have noted large temporal and spatial locality in the data patterns of commercial processors as well as application-specific ICs that work on image, audio and video data. This thesis presents a novel column-based Energy Compression technique that saves SRAM power by selectively turning off cells based on the data pattern. The technique is applied to study power savings in application-specific integrated circuit SRAM memories and can also be applied to commercial processors. The thesis also evaluates the effects of processing images before storage, and of data cluster patterns, on the achievable power savings.
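
The data-pattern bias that a column-based scheme like this exploits is easy to measure in software (a sketch of the analysis only, not of the circuit technique):

```python
import numpy as np

def column_zero_fraction(words, width=32):
    """For an array of stored words, measure how often each bit column
    holds a zero; heavily biased columns are candidates for gating."""
    bits = ((words[:, None] >> np.arange(width)) & 1).astype(float)
    return 1.0 - bits.mean(axis=0)           # fraction of zeros per column

# low-magnitude image samples leave the high-order columns almost all zero
words = np.random.default_rng(1).integers(0, 256, 10_000, dtype=np.uint32)
frac = column_zero_fraction(words)
print(np.round(frac[:8], 2))    # low-order columns: roughly half zeros
print(np.round(frac[8:], 2))    # high-order columns: all zeros
```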
41

Sorwar, Golam 1969. "A novel distance-dependent thresholding strategy for block-based performance scalability and true object motion estimation." Monash University, Gippsland School of Computing and Information Technology, 2003. http://arrow.monash.edu.au/hdl/1959.1/5510.

42

Choi, Kai-san. "Automatic source camera identification by lens aberration and JPEG compression statistics." Click to view the E-thesis via HKUTO, 2006. http://sunzi.lib.hku.hk/hkuto/record/B38902345.

43

Choi, Kai-san, and 蔡啟新. "Automatic source camera identification by lens aberration and JPEG compression statistics." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2006. http://hub.hku.hk/bib/B38902345.

44

Silva, Fernando Silvestre da. "Procedimentos para tratamento e compressão de imagens e video utilizando tecnologia fractal e transformadas wavelet." [s.n.], 2005. http://repositorio.unicamp.br/jspui/handle/REPOSIP/260581.

Abstract:
Advisor: Yuzo Iano. Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação.
The excellent visual quality and compression rate of fractal image coding have limited applications due to exhaustive inherent encoding time. This research presents a new fast and efficient image coder that applies the speed of the wavelet transform to the image quality of fractal compression. In this scheme, fast fractal encoding using Fisher's domain classification is applied to the lowpass subband of the wavelet-transformed image, and a modified SPIHT coding (Set Partitioning in Hierarchical Trees) to the remaining coefficients. The image details and wavelet progressive transmission characteristics are maintained; no blocking effects from fractal techniques are introduced; and the encoding fidelity problem common in fractal-wavelet hybrid coders is solved. The proposed scheme provides an average 94% reduction in encoding-decoding time compared to pure accelerated fractal coding, and a 0-2.4 dB gain in PSNR over pure SPIHT coding. In both cases, the new scheme improves the subjective quality of pictures at high, medium and low bit rates.
45

Cziraki, Suzanne Elizabeth. "The reproducibility and accuracy of cephalometric analysis using different digital imaging modalities and image compression." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp05/MQ63001.pdf.

46

Zhao, Xin. "High efficiency coarse-grained customised dynamically reconfigurable architecture for digital image processing and compression technologies." Thesis, University of Edinburgh, 2012. http://hdl.handle.net/1842/6187.

Abstract:
Digital image processing and compression technologies have significant market potential, especially the JPEG2000 standard, which offers outstanding codestream flexibility and high compression ratios. Strong demand for high-performance digital image processing and compression systems is forcing designers to seek architectures that offer competitive advantages in all performance metrics, such as speed and power. Traditional architectures such as ASICs, FPGAs and DSPs are limited by either low flexibility or high power consumption. On the other hand, by providing a degree of flexibility similar to that of a DSP with performance and power consumption approaching those of an ASIC, coarse-grained dynamically reconfigurable architectures are proving to be strong candidates for future high-performance digital image processing and compression systems. This thesis investigates dynamically reconfigurable architectures, especially the newly emerging RICA paradigm. Case studies such as a Reed-Solomon decoder and a WiMAX OFDM timing synchronisation engine are implemented in order to explore the potential of RICA-based architectures and possible optimisation approaches such as eliminating conditional branches, reducing memory accesses and constructing kernels. Based on these investigations, a novel customised dynamically reconfigurable architecture targeting digital image processing and compression applications is devised, which can be tailored to different applications. A demosaicing engine based on the Freeman algorithm is designed and implemented on the proposed architecture as the pre-processing module of a digital imaging system, with an efficient data-buffer rotating scheme designed to reduce memory accesses; an investigation into mapping the demosaicing engine onto a dual-core RICA platform is also performed. After optimisation, the performance of the proposed engine is evaluated and compared in terms of throughput and consumed computational resources. Targeting the JPEG2000 standard, the core tasks, the 2-D Discrete Wavelet Transform (DWT) and Embedded Block Coding with Optimal Truncation (EBCOT), are implemented and optimised on the proposed architecture. A novel 2-D DWT architecture based on vector operations associated with the RICA paradigm is developed, and the complete DWT application is highly optimised for both throughput and area. For the EBCOT implementation, a novel Partial Parallel Architecture (PPA) is devised for the most computationally intensive module, termed Context Modeling (CM). Based on algorithm evaluation, an ARM core is integrated into the proposed architecture for performance enhancement, using a ping-pong memory switching mode with a carefully designed communication scheme between the RICA-based architecture and the ARM. Simulation results demonstrate that the proposed architecture offers a significant throughput advantage for JPEG2000.
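
The 2-D DWT at the heart of JPEG2000 is built from the reversible LeGall 5/3 lifting steps, sketched here in one dimension (simplified boundary handling; even-length input assumed):

```python
def dwt53(x):
    """One level of the LeGall 5/3 lifting wavelet used by JPEG2000
    (reversible integer arithmetic; simplified edge handling)."""
    n = len(x)
    # predict: detail = odd sample minus average of even neighbours
    d = [x[2*i + 1] - (x[2*i] + x[min(2*i + 2, n - 2)]) // 2
         for i in range(n // 2)]
    # update: approximation = even sample plus smoothed details
    s = [x[2*i] + (d[max(i - 1, 0)] + d[i] + 2) // 4
         for i in range(n // 2)]
    return s, d

def idwt53(s, d):
    """Inverse lifting: undo the update step, then undo the predict step."""
    n = 2 * len(s)
    x = [0] * n
    for i in range(n // 2):
        x[2*i] = s[i] - (d[max(i - 1, 0)] + d[i] + 2) // 4
    for i in range(n // 2):
        x[2*i + 1] = d[i] + (x[2*i] + x[min(2*i + 2, n - 2)]) // 2
    return x

x = [10, 12, 14, 20, 30, 28, 26, 24]
s, d = dwt53(x)
assert idwt53(s, d) == x        # perfect (lossless) reconstruction
print(s, d)
```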
47

Natu, Ambarish Shrikrishna, Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW. "Error resilience in JPEG2000." Awarded by: University of New South Wales, Electrical Engineering and Telecommunications, 2003. http://handle.unsw.edu.au/1959.4/18835.

Abstract:
The rapid growth of wireless communication and widespread access to information has resulted in strong demand for robust transmission of compressed images over wireless channels. The challenge of robust transmission is to protect the compressed image data against loss in such a way as to maximize the received image quality. This thesis addresses this problem and investigates a forward error correction (FEC) technique evaluated in the context of the emerging JPEG2000 standard. Little effort had been made in the JPEG2000 project regarding error resilience: the only standardized techniques are based on the insertion of marker codes in the code-stream, which may be used to restore high-level synchronization between the decoder and the code-stream. This helps localize errors and prevents them from propagating through the entire code-stream; once synchronization is achieved, additional tools aim to exploit as much of the remaining data as possible. Although these techniques help, they cannot recover lost data. FEC adds redundancy to the bit-stream in exchange for increased robustness to errors. We investigate unequal protection schemes for JPEG2000 by applying different levels of protection to different quality layers in the code-stream. More particularly, the results reported in this thesis provide guidance on the selection of JPEG2000 coding parameters and appropriate combinations of Reed-Solomon (RS) codes for typical wireless bit error rates. We find that unequal protection schemes, together with the use of resynchronization markers and some additional tools, can significantly improve image quality in deteriorating channel conditions. The proposed channel coding scheme is easily incorporated into the existing JPEG2000 code-stream structure, and experimental results clearly demonstrate the viability of our approach.
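
A minimal sketch of the unequal-protection idea, assuming the third-party reedsolo package is available and using invented per-layer parity strengths:

```python
# pip install reedsolo  (third-party package; availability is an assumption)
from reedsolo import RSCodec

def protect(layers):
    """Unequal error protection: stronger RS codes (more parity bytes)
    for the perceptually important early quality layers."""
    parity = [32, 16, 8]                     # assumed per-layer RS strength
    return [RSCodec(p).encode(data) for p, data in zip(parity, layers)]

layers = [b"L0 " * 20, b"L1 " * 20, b"L2 " * 20]  # stand-ins for quality layers
for i, enc in enumerate(protect(layers)):
    print(f"layer {i}: {len(layers[i])} -> {len(enc)} bytes")
```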
48

Gatica, Perez Daniel. "Extensive operators in lattices of partitions for digital video analysis /." Thesis, Connect to this title online; UW restricted, 2001. http://hdl.handle.net/1773/5874.

49

Wu, Qing, and 吳慶. "Object-based coding and transmission for plenoptic videos." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2008. http://hub.hku.hk/bib/B39634279.

50

Sefara, Mamphoko Nelly. "Design of a forward error correction algorithm for a satellite modem." Thesis, Stellenbosch : Stellenbosch University, 2001. http://hdl.handle.net/10019.1/52181.

Abstract:
Thesis (MScEng)--University of Stellenbosch, 2001.
One of the problems with any deep-space communication system is that information may be altered or lost during transmission due to channel noise. It is known that any damage to the bit stream may lead to objectionable visual quality distortion of images at the decoder. The purpose of this thesis is to design an error correction and data compression algorithm for image protection which will allow the communication bandwidth to be better utilized. The work focuses on Sunsat (Stellenbosch Satellite) images as test images. Investigations were done into the JPEG 2000 compression algorithm's robustness to random errors, with emphasis on how much the image is degraded after compression. Both the error control coding and the data compression strategy are then applied to a set of test images. The FEC algorithm combats some, if not all, of the simulated random errors introduced by the channel. The results illustrate that the error correction reduces random errors by a factor of 100 (x100) on all test images, and that a probability of error of 10^-2 in the channel (10^-4 for the image data) causes little degradation in image quality.