Dissertations / Theses on the topic 'Data compression (Telecommunication)'

The top 50 dissertations and theses on the topic 'Data compression (Telecommunication)', listed with abstracts where these are available in the metadata.

1

Walker, Wendy Tolle, 1959-. "Video data compression for telescience." Thesis, The University of Arizona, 1988. http://hdl.handle.net/10150/276830.

Abstract:
This paper recommends techniques for compressing two kinds of video data to be transmitted from the proposed U.S. Space Station to Earth: video used to point a telescope, and video from a camera observing a robot. The mathematical basis of data compression is presented, followed by a general review of data compression techniques. A technique in widespread use for compressing videoconferencing images is recommended for the robot observation data; bit rates of 60 to 400 kbit/s can be achieved. Several techniques are modelled to find the best technique for the telescope data, using actual starfield images for the evaluation. The best technique is chosen on the basis of which model provides the most compression while preserving the important information in the images. Compression from 8 bits per pel to 0.015 bits per pel is achieved.
2

Aydinoğlu, Behçet Halûk. "Stereo image compression." Diss., Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/15447.

3

Chaulklin, Douglas Gary. "Evaluation of ANSI compression in a bulk data file transfer system." Master's thesis, This resource online, 1991. http://scholar.lib.vt.edu/theses/available/etd-01202010-020213/.

4

Chen, Mo. "Data compression for inference tasks in wireless sensor networks." Diss., Online access via UMI:, 2006.

Abstract:
Thesis (Ph. D.)--State University of New York at Binghamton, Department of Electrical Engineering, Thomas J. Watson School of Engineering and Applied Science, 2006.
Includes bibliographical references.
5

Goodenow, Daniel P. "A reference guide to JPEG compression /." Online version of thesis, 1993. http://hdl.handle.net/1850/11714.

6

Jones, Greg, 1963-2017. "RADIX 95n: Binary-to-Text Data Conversion." Thesis, University of North Texas, 1991. https://digital.library.unt.edu/ark:/67531/metadc500582/.

Abstract:
This paper presents Radix 95n, a binary-to-text data conversion algorithm. Radix 95n (base 95) is a variable-length encoding scheme that offers slightly better efficiency than conventional fixed-length encoding procedures. Radix 95n advances previous techniques by making a greater pool of 7-bit combinations available for translating 8-bit data. Since 8-bit data (i.e. binary files) can be difficult to transfer over 7-bit networks, the Radix 95n conversion technique provides a way to convert data such as compiled programs or graphic images to printable ASCII characters, allowing their transfer over 7-bit networks.
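The base-95 idea can be illustrated with a short sketch. This is not the thesis's exact variable-length algorithm; it simply re-expresses the input bytes as one big base-95 number over the 95 printable ASCII characters:

```python
# Illustrative base-95 binary-to-text conversion (not the Radix 95n
# algorithm itself): the input bytes are treated as one big integer and
# rewritten in base 95 using the printable ASCII range 32..126.

PRINTABLE = [chr(c) for c in range(32, 127)]  # 95 printable ASCII characters

def encode_base95(data: bytes) -> str:
    # Prefix a 0x01 byte so leading zero bytes survive the integer round trip.
    n = int.from_bytes(b"\x01" + data, "big")
    out = []
    while n:
        n, r = divmod(n, 95)
        out.append(PRINTABLE[r])
    return "".join(reversed(out))

def decode_base95(text: str) -> bytes:
    n = 0
    for ch in text:
        n = n * 95 + (ord(ch) - 32)
    raw = n.to_bytes((n.bit_length() + 7) // 8, "big")
    return raw[1:]  # strip the 0x01 sentinel

payload = bytes(range(256))
assert decode_base95(encode_base95(payload)) == payload
```

Because log2(95) is about 6.57 bits per output character, this expands data by roughly 1.22x, versus 1.33x for a fixed-length base-64 scheme, which is the kind of efficiency gain the abstract refers to.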
7

Srinivas, Bindignavile S. "Progressive image and video transmission with error concealment on burst error channels and lossy packet networks /." Thesis, Connect to this title online; UW restricted, 1997. http://hdl.handle.net/1773/6015.

8

Subramaniam, Suresh. "All-optical networks with sparse wavelength conversion /." Thesis, Connect to this title online; UW restricted, 1997. http://hdl.handle.net/1773/6032.

9

Goldschneider, Jill R. "Lossy compression of scientific data via wavelets and vector quantization /." Thesis, Connect to this title online; UW restricted, 1997. http://hdl.handle.net/1773/5881.

10

Smith, Craig M. "Efficient software implementation of the JBIG compression standard /." Online version of thesis, 1993. http://hdl.handle.net/1850/11713.

11

Johnson, Mary Holland. "Low bit rate compression of Marine imagery using fast ECVQ /." Thesis, Connect to this title online; UW restricted, 1999. http://hdl.handle.net/1773/5998.

12

Lum, Randall M. G. "Differential pulse code modulation data compression." Scholarly Commons, 1989. https://scholarlycommons.pacific.edu/uop_etds/2181.

Abstract:
With the requirement to store and transmit information efficiently, data compression techniques have found an ever-increasing number of uses in fields as diverse as television, surveillance, remote sensing, medical image processing, office automation, and robotics. Rapid increases in processing capability and in the speed of complex integrated circuits make data compression a prime candidate for application in these areas. This report addresses, from a theoretical viewpoint, three major data compression techniques: pixel coding, predictive coding, and transform coding. It begins with a project description and continues with data compression techniques, focusing on Differential Pulse Code Modulation.
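The core of DPCM can be sketched in a few lines. This is a minimal first-order predictor with a uniform quantizer, purely illustrative:

```python
# Minimal first-order DPCM sketch (illustrative, not the thesis design):
# each sample is predicted by the previous reconstructed sample, and only
# the quantized prediction error is transmitted.

def dpcm_encode(samples, step=4):
    pred = 0
    codes = []
    for s in samples:
        err = s - pred
        q = round(err / step)   # uniform quantizer on the prediction error
        codes.append(q)
        pred += q * step        # track the decoder's reconstruction exactly
    return codes

def dpcm_decode(codes, step=4):
    pred = 0
    out = []
    for q in codes:
        pred += q * step
        out.append(pred)
    return out

signal = [100, 102, 105, 110, 111, 109]
decoded = dpcm_decode(dpcm_encode(signal))
assert all(abs(a - b) <= 2 for a, b in zip(signal, decoded))  # error <= step/2
```

The compression comes from the fact that the error codes are small and clustered around zero, so they take far fewer bits to entropy-code than the raw samples.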
13

Tsoi, Yiu-lun Kelvin, and 蔡耀倫. "Real-time scheduling techniques with QoS support and their applications in packet video transmission." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1999. http://hub.hku.hk/bib/B31221786.

14

Teng, Yan. "Objective speech intelligibility assessment using speech recognition and bigram statistics with application to low bit-rate codec evaluation." Laramie, Wyo. : University of Wyoming, 2007. http://proquest.umi.com/pqdweb?did=1456283581&sid=5&Fmt=2&clientId=18949&RQT=309&VName=PQD.

15

Danyali, Habibollah. "Highly scalable wavelet image and video coding for transmission over heterogeneous networks." Access electronically, 2004. http://www.library.uow.edu.au/adt-NWU/public/adt-NWU20041027.115306/index.html.

16

Chu, Chung Cheung. "Tree encoding of speech signals at low bit rates." Thesis, McGill University, 1986. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=65459.

17

Li, Jingyun. "Lapped transforms based on DLS and DLC basis functions and applications." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp02/NQ30101.pdf.

18

Khan, Mohammad Asmat Ullah. "Trellis-coded residual vector quantization." Diss., Georgia Institute of Technology, 1999. http://hdl.handle.net/1853/13734.

19

Wang, Huan-sheng. "Fast search techniques for video motion estimation and vector quantization." Diss., Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/13918.

20

Chan, Ho Yin. "Graph-theoretic approach to the non-binary index assignment problem /." View abstract or full-text, 2008. http://library.ust.hk/cgi/db/thesis.pl?ECED%202008%20CHAN.

21

Hu, Yichuan. "Analog non-linear coding for improved performance in compressed sensing." Access to citation, abstract and download form provided by ProQuest Information and Learning Company; downloadable PDF file, 76 p, 2009. http://proquest.umi.com/pqdweb?did=1885755731&sid=5&Fmt=2&clientId=8331&RQT=309&VName=PQD.

22

Tang, Sai-kin Owen. "Implementation of Low bit-rate image codec /." [Hong Kong] : University of Hong Kong, 1994. http://sunzi.lib.hku.hk/hkuto/record.jsp?B14804402.

23

Nolte, Ernst Hendrik. "Image compression quality measurement : a comparison of the performance of JPEG and fractal compression on satellite images." Thesis, Stellenbosch : Stellenbosch University, 2000. http://hdl.handle.net/10019.1/51796.

Abstract:
Thesis (MEng)--Stellenbosch University, 2000.
ENGLISH ABSTRACT: The purpose of this thesis is to investigate the nature of digital image compression and the measurement of the quality of compressed images. The work focuses on greyscale images in the domain of satellite images and aerial photographs. Two compression techniques are studied in detail, namely the JPEG and fractal compression methods. Implementations of both techniques are applied to a set of test images. The rest of the thesis is dedicated to measuring the loss of quality introduced by compression. A method in general use (Signal-to-Noise Ratio) is discussed, as well as a technique presented in the literature quite recently (Grey Block Distance). A new measure is then presented, along with a means of comparing the performance of these measures. It was found that the new measure for image quality estimation performed marginally better than the SNR algorithm. Lastly, some possible improvements to this technique are mentioned, and the validity of the method used for comparing the quality measures is discussed.
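The baseline SNR measure referred to above, and the closely related PSNR, can be sketched as follows (illustrative; the thesis's new measure and the Grey Block Distance are not reproduced here):

```python
# Signal-to-Noise Ratio and Peak SNR for an 8-bit greyscale image,
# represented here as a flat list of pixel values (illustrative sketch).

import math

def snr_db(original, distorted):
    signal = sum(p * p for p in original)
    noise = sum((a - b) ** 2 for a, b in zip(original, distorted))
    return math.inf if noise == 0 else 10 * math.log10(signal / noise)

def psnr_db(original, distorted, peak=255):
    mse = sum((a - b) ** 2 for a, b in zip(original, distorted)) / len(original)
    return math.inf if mse == 0 else 10 * math.log10(peak * peak / mse)

orig = [10, 50, 200, 130]
comp = [12, 48, 199, 131]
assert psnr_db(orig, orig) == math.inf   # identical images: no noise
assert 44 < psnr_db(orig, comp) < 45     # small distortion, high PSNR
```

Both measures are global averages, which is exactly the weakness that block-based measures such as Grey Block Distance try to address: localized but visually objectionable errors can hide in a good average.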
24

Dong, Liqin Carleton University Dissertation Engineering Electrical. "Compressed voice in integrated services frame relay networks." Ottawa, 1992.

25

Klausutis, Timothy J. "Adaptive lapped transforms with applications to image coding." Diss., Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/15925.

26

Teixeira, Márlon Amaro Coelho. "Novas abordagens para compressão de documentos XML." [s.n.], 2011. http://repositorio.unicamp.br/jspui/handle/REPOSIP/259454.

Abstract:
Advisor: Leonardo de Souza Mendes
Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação, 2011
Abstract: Currently, some of the factors that determine the success or failure of corporations are tied to the speed and efficiency of their decision-making. For these requirements to be met, the integration of legacy computational systems with new ones is of fundamental importance, creating the need for old and new technologies to interoperate. XML, a self-descriptive, technology- and platform-independent language, has emerged as a solution to this problem and is becoming a standard for communication between heterogeneous systems. Because it is self-descriptive, XML is redundant, which generates more information to be transferred and stored, demanding more resources from computational systems. This work presents new compression approaches specific to XML, aimed at reducing document size and thereby the impact on network, storage and processing resources. Two new approaches are presented, together with test cases that evaluate them on compression ratio, compression time and tolerance of the methods to low memory availability. The results are compared with the XML compression methods that stand out in the literature, and demonstrate that XML-aware compressors can considerably reduce the performance impact created by the language.
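One idea that XML-specific compressors in the literature build on is structure/content separation (as in XMill): tags and character data are split into separate, more homogeneous streams before a general-purpose compressor runs. The sketch below illustrates that idea only; it is not one of the two approaches proposed in the dissertation:

```python
# Illustrative structure/content separation for XML compression: markup
# and character data are compressed as separate streams with zlib.

import re
import zlib

doc = b"<items>" + b"".join(
    b"<item id='%d'><name>widget</name></item>" % i for i in range(200)
) + b"</items>"

tags = b"".join(re.findall(rb"<[^>]*>", doc))          # markup stream
text = b"".join(re.findall(rb">([^<]*)(?=<)", doc))    # character-data stream

split_size = len(zlib.compress(tags)) + len(zlib.compress(text))
plain_size = len(zlib.compress(doc))

# The self-descriptive markup is highly repetitive, so both variants
# compress the document to a small fraction of its original size.
assert plain_size < len(doc) // 10
assert split_size < len(doc) // 10
```

On real documents with heterogeneous text content, grouping similar data together before compression is what gives XML-aware tools their edge over compressing the raw byte stream.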
Master's degree in Electrical Engineering (Telecomunicações e Telemática)
27

Engel, Adalbert. "Bandwidth management and quality of service." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2000. https://ro.ecu.edu.au/theses/1540.

Abstract:
With the advent of bandwidth-hungry video and audio applications, demand for bandwidth is expected to exceed supply. Users will require more bandwidth and, as always, there are likely to be more users. As the Internet user base becomes more diverse, there is an increasing perception that Internet Service Providers (ISPs) should be able to differentiate between users, so that the specific needs of different types of users can be met. Differentiated services is seen as a possible solution to the bandwidth problem. Currently, however, the technology used on the Internet differentiates neither between users, nor between applications. The thesis focuses on current and anticipated bandwidth shortages on the Internet, and on the lack of a differentiated service. The aim is to identify methods of managing bandwidth and to investigate how these bandwidth management methods can be used to provide a differentiated service. The scope of the study is limited to networks using both Ethernet technology and the Internet Protocol (IP). The study is significant because it addresses current problems confronted by network managers. The key terms, Quality of Service (QoS) and bandwidth management, are defined. "QoS" is equated to a differentiating system. Bandwidth management is defined as any method of controlling and allocating bandwidth. "Installing more capacity" is taken to be a method of bandwidth management. The review of literature concentrates on Ethernet/IP networks. It begins with a detailed examination of definitions and interpretations of the term "Quality of Service" and shows how the meaning changed over the last decade. The review then examines congestion control, including a survey of queuing methods. Priority queuing implemented in hardware is examined in detail, followed by a review of the ReSource reserVation Protocol (RSVP) and a new version of IP (IPv6).
Finally, the new standards IEEE 802.1p and IEEE 802.1Q are outlined, and parts of ISO/IEC 15802-3 are analysed. The Integrated Services Architecture (ISA), Differentiated Services (DiffServ) and MultiProtocol Label Switching (MPLS) are seen as providing a theoretical framework for QoS development. The Open Systems Interconnection Reference Model (OSI model) is chosen as the preferred framework for investigating bandwidth management because it is more comprehensive than the alternative US Department of Defence Model (DoD model). A case study of the Edith Cowan University (ECU) data network illustrates current practice in network management. It provides concrete examples of some of the problems, methods and solutions identified in the literature review. Bandwidth management methods are identified and categorised based on the OSI layers in which they operate. Suggestions are given as to how some of these bandwidth management methods are, or can be, used within current QoS architectures. The experimental work consists of two series of tests on small, experimental LANs. The tests are aimed at evaluating the effectiveness of IEEE 802.1p prioritisation. The results suggest that in small Local Area Networks (LANs) prioritisation provides no benefit when Ethernet switches are lightly loaded.
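The priority queuing examined in the thesis (e.g. the eight traffic classes of IEEE 802.1p) can be sketched as a strict-priority scheduler: frames are always dequeued from the highest non-empty class first, FIFO within a class. Illustrative only:

```python
# Strict-priority frame scheduler sketch in the spirit of IEEE 802.1p
# user priorities 0 (lowest) .. 7 (highest). Illustrative, not a switch
# implementation.

import heapq
from itertools import count

class StrictPriorityScheduler:
    def __init__(self):
        self._heap = []
        self._seq = count()  # preserves FIFO order within one class

    def enqueue(self, frame, user_priority):
        heapq.heappush(self._heap, (-user_priority, next(self._seq), frame))

    def dequeue(self):
        return heapq.heappop(self._heap)[2]

sched = StrictPriorityScheduler()
sched.enqueue("bulk-1", 0)
sched.enqueue("voice-1", 7)
sched.enqueue("video-1", 5)
sched.enqueue("voice-2", 7)
order = [sched.dequeue() for _ in range(4)]
assert order == ["voice-1", "voice-2", "video-1", "bulk-1"]
```

The experimental finding quoted above is intuitive under this model: when the switch is lightly loaded the queues are almost always empty, so the scheduling discipline has nothing to reorder and prioritisation shows no benefit.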
28

Nzeugaing, Gutembert Nganpet. "Image compression system for a 3u cubesat." Thesis, Cape Peninsula University of Technology, 2013. http://hdl.handle.net/20.500.11838/1085.

Abstract:
Thesis submitted in partial fulfilment of the requirements for the degree of Master of Technology: Electrical Engineering in the Faculty of Engineering at the Cape Peninsula University of Technology 2013
Earth observation satellites utilise sensors or cameras to capture data or images that are relayed to the ground station(s). The ZACUBE-02 CubeSat currently in development at the French South African Institute of Technology (F'SATI) contains a high-resolution 5-megapixel on-board camera. The purpose of the camera is to capture images of Earth and relay them to the ground station once communication is established. The captured images, which can amount to a large volume of data, have to be stored on-board as the CubeSat awaits the next cycle of transmission to the ground station. This mode of operation introduces a number of problems, as the CubeSat has limited storage and memory capacity and is not able to store large amounts of data. This, together with the limited downlink capacity, created the need for the design and development of an image compression system suitable for the CubeSat environment. Image compression focuses on reducing the size of images to be stored as well as the size of the images to be transmitted to the ground station. The purpose of the study is to propose a compression system to be implemented on ZACUBE-02. An intensive study of current, proposed and implemented compression methods, algorithms and techniques, as well as the CubeSat specification, served as input for defining the requirements for such a system. The proposed design is a combination of image segmentation, image linearization and image entropy coding (run-length coding). This combination is implemented in order to achieve lossless image compression. For the proposed design, a compression ratio of 10:1 was obtained without negatively affecting image quality. The on-board storage, power and bandwidth constraints are met by the proposed design, minimising the downlink transmission time.
Within the study a number of objectives were met in order to design, implement and test the compression system. These included a detailed study of image compression techniques; a look into techniques for improving the compression ratio; and a study of industrial hardware components suitable for the space environment.
Keywords: CubeSat, hardware, compression, satellite image compression, Gumstix Overo Water, ZACUBE-02.
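The run-length coding stage of such a pipeline can be sketched as follows (illustrative; in the proposed design it operates on the segmented and linearized image, which maximises run lengths):

```python
# Lossless run-length coding sketch: consecutive equal pixel values are
# stored as (value, count) pairs. Illustrative only.

def rle_encode(pixels):
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1
        else:
            runs.append([p, 1])
    return runs

def rle_decode(runs):
    out = []
    for value, length in runs:
        out.extend([value] * length)
    return out

row = [0] * 20 + [255] * 4 + [0] * 8
encoded = rle_encode(row)
assert encoded == [[0, 20], [255, 4], [0, 8]]
assert rle_decode(encoded) == row   # exact reconstruction: lossless
```

Run-length coding only pays off when long runs exist, which is why reordering steps such as segmentation and linearization come first: they group similar pixels so the runs become long.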
29

鄧世健 and Sai-kin Owen Tang. "Implementation of Low bit-rate image codec." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1994. http://hub.hku.hk/bib/B31212670.

30

Muller, Rikus. "A study of image compression techniques, with specific focus on weighted finite automata." Thesis, Link to the online version, 2005. http://hdl.handle.net/10019/1128.

31

Sefara, Mamphoko Nelly. "Design of a forward error correction algorithm for a satellite modem." Thesis, Stellenbosch : Stellenbosch University, 2001. http://hdl.handle.net/10019.1/52181.

Abstract:
Thesis (MScEng)--University of Stellenbosch, 2001.
ENGLISH ABSTRACT: One of the problems with any deep space communication system is that information may be altered or lost during transmission due to channel noise. Any damage to the bit stream may lead to objectionable visual distortion of images at the decoder. The purpose of this thesis is to design an error correction and data compression scheme for image protection, allowing the communication bandwidth to be better utilised. The work focuses on Sunsat (Stellenbosch Satellite) images as test images. The robustness of the JPEG 2000 compression algorithm to random errors was investigated, with emphasis on how much the image is degraded after compression. Both the error control coding and the data compression strategy were then applied to a set of test images. The FEC algorithm combats some, if not all, of the simulated random errors introduced by the channel. The results illustrate that random errors are corrected by a factor of 100 (x100) on all test images, and that a bit error probability of 10^-2 in the channel (10^-4 for the image data) causes little degradation in image quality.
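The effect of forward error correction can be illustrated with a toy code (a simple triple-repetition code, not the algorithm designed in the thesis): each bit is sent three times and decoded by majority vote, so any single bit error per triple is corrected.

```python
# Toy FEC illustration: (3,1) repetition code with majority-vote decoding.
# Not the thesis's FEC design; it only demonstrates the error-reduction
# effect measured there.

import random

def fec_encode(bits):
    return [b for bit in bits for b in (bit, bit, bit)]

def fec_decode(coded):
    return [int(sum(coded[i:i + 3]) >= 2) for i in range(0, len(coded), 3)]

random.seed(1)
data = [random.randint(0, 1) for _ in range(3000)]
coded = fec_encode(data)
# Channel with bit error probability 10^-2, as in the simulations.
noisy = [b ^ (random.random() < 0.01) for b in coded]
decoded = fec_decode(noisy)
channel_errors = sum(a != b for a, b in zip(coded, noisy))
residual_errors = sum(a != b for a, b in zip(data, decoded))
assert residual_errors < channel_errors  # decoding removes most errors
```

For this code a decoded bit is wrong only when two or three of its copies flip, so the residual error rate drops from p to roughly 3p^2, the same order-of-magnitude improvement the abstract reports.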
32

Araujo, André Filgueiras de. "Uma proposta de estimação de movimento para o codificador de vídeo Dirac." [s.n.], 2010. http://repositorio.unicamp.br/jspui/handle/REPOSIP/261689.

Abstract:
Advisor: Yuzo Iano
Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação, 2010
Abstract: The main purpose of this work is to design a new algorithm that makes motion estimation in the Dirac video codec more efficient. Motion estimation is a critical stage in video coding, accounting for most of the processing. The recently released Dirac codec is based on techniques different from those usually employed in common codecs (such as the MPEG family), and aims at efficiency comparable to the best current codecs (notably H.264/AVC). This work initially presents comparative studies evaluating state-of-the-art motion estimation techniques and the Dirac codec, which provide the basis for the algorithm proposed in the sequel: Modified Hierarchical Enhanced Adaptive Rood Pattern Search (MHEARPS). MHEARPS outperforms the other relevant algorithms in every case analysed, providing on average 79% lower complexity while maintaining reconstruction quality.
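The baseline that fast algorithms like MHEARPS accelerate is exhaustive block matching: every candidate displacement in a search window is scored, typically with the sum of absolute differences (SAD). A miniature, illustrative sketch:

```python
# Exhaustive block-matching motion estimation with the SAD criterion.
# Illustrative baseline only; fast searches (ARPS-family algorithms such
# as the proposed MHEARPS) aim for a similar match with far fewer probes.

def sad(ref, cur, bx, by, dx, dy, n, width):
    total = 0
    for y in range(n):
        for x in range(n):
            r = ref[(by + dy + y) * width + (bx + dx + x)]
            c = cur[(by + y) * width + (bx + x)]
            total += abs(r - c)
    return total

def full_search(ref, cur, bx, by, n, width, radius):
    best = None
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            cost = sad(ref, cur, bx, by, dx, dy, n, width)
            if best is None or cost < best[0]:
                best = (cost, dx, dy)
    return best[1], best[2]

# 16x16 frames: a bright 4x4 patch sits at x=6..9 in the reference frame
# and two pixels to the left in the current frame.
W = 16
ref = [0] * (W * W)
cur = [0] * (W * W)
for y in range(6, 10):
    for x in range(6, 10):
        ref[y * W + x] = 200
        cur[y * W + (x - 2)] = 200
assert full_search(ref, cur, 4, 6, 4, W, 2) == (2, 0)
```

Full search costs (2r+1)^2 SAD evaluations per block; the rood-pattern family reaches a comparable minimum with a handful of probes, which is where the reported 79% complexity reduction comes from.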
Master's degree in Electrical Engineering (Telecomunicações e Telemática)
33

Sankara, Krishnan Shivaranjani. "Delay sensitive delivery of rich images over WLAN in telemedicine applications." Thesis, Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/29673.

Abstract:
Thesis (M. S.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2009.
Committee Chair: Jayant, Nikil; Committee Member: Altunbasak, Yucel; Committee Member: Sivakumar, Raghupathy. Part of the SMARTech Electronic Thesis and Dissertation Collection.
34

Cao, Libo. "Nonlinear Wavelet Compression Methods for Ion Analyses and Dynamic Modeling of Complex Systems." Ohio University / OhioLINK, 2004. http://www.ohiolink.edu/etd/view.cgi?ohiou1107790393.

35

Silva, Fernando Silvestre da. "Procedimentos para tratamento e compressão de imagens e video utilizando tecnologia fractal e transformadas wavelet." [s.n.], 2005. http://repositorio.unicamp.br/jspui/handle/REPOSIP/260581.

Abstract:
Advisor: Yuzo Iano
Doctoral thesis - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação, 2005
Abstract: The excellent visual quality and compression rate of fractal image coding have limited applications due to the exhaustive inherent encoding time. This research presents a new fast and efficient image coder that applies the speed of the wavelet transform to the image quality of fractal compression. In this scheme, a fast fractal encoding using Fisher's domain classification is applied to the lowpass subband of the wavelet-transformed image, and a modified SPIHT coding (Set Partitioning in Hierarchical Trees) to the remaining coefficients. The image details and the progressive transmission characteristics of the wavelet transform are maintained; no blocking effects from fractal techniques are introduced; and the encoding fidelity problem common in fractal-wavelet hybrid coders is solved. The proposed scheme provides an average of 94% reduction in encoding-decoding time compared to pure accelerated fractal coding, and a 0-2.4 dB gain in PSNR over pure SPIHT coding. In both cases, the new scheme improves the subjective quality of pictures for high, medium and low bit rates.
Doctorate in Electrical Engineering (Telecomunicações e Telemática)
36

Zhang, Jian Electrical Engineering Australian Defence Force Academy UNSW. "Error resilience for video coding services over packet-based networks." Awarded by:University of New South Wales - Australian Defence Force Academy. School of Electrical Engineering, 1999. http://handle.unsw.edu.au/1959.4/38652.

Abstract:
Error resilience is an important issue when coded video data is transmitted over wired and wireless networks. Errors can be introduced by network congestion, mis-routing and channel noise. These transmission errors can result in bit errors being introduced into the transmitted data, or in packets of data being completely lost; consequently, the quality of the decoded video is degraded significantly. This thesis describes new techniques for minimising this degradation. To verify video error resilience tools, it is first necessary to consider the methods used to carry out experimental measurements. For most audio-visual services, streams of both audio and video data need to be transmitted simultaneously on a single channel, so the impact of multiplexing schemes, such as MPEG 2 Systems, is an important consideration in error resilience studies. It is shown that error resilience measurements including the effect of the Systems Layer differ significantly from those based only on the Video Layer. Two major issues of error resilience are investigated within this thesis: resynchronisation after error detection, and error concealment. Results for resynchronisation using small slices, adaptive slice sizes and macroblock resynchronisation schemes are provided. These measurements show that the macroblock resynchronisation scheme achieves the best performance, although it is not included in the MPEG 2 standard. The performance of the adaptive slice size scheme, however, is similar to that of macroblock resynchronisation, and this approach is compatible with the MPEG 2 standard. The most important contribution of this thesis is a new concealment technique, namely Decoder Motion Vector Estimation (DMVE), which improves decoded video quality significantly. Basically, this technique utilises the temporal redundancy between the current and the previous frames, and the correlation between lost macroblocks and their surrounding pixels.
Motion estimation can therefore be applied again, searching the previous picture for a match to the lost macroblocks. The process is similar to that performed by the encoder, but it takes place in the decoder. The integration of DMVE with small slices, adaptive slice sizes or macroblock resynchronisation is also evaluated, giving an overview of the performance of the individual techniques compared to the combined ones. Results show that high performance can be achieved by integrating DMVE with an effective resynchronisation scheme, even at high cell loss rates. The results of this thesis demonstrate clearly that the MPEG 2 standard is capable of providing a high level of error resilience, even in the presence of high loss. The key to this performance is appropriate tuning of encoders and effective concealment in decoders.
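The boundary-matching idea behind decoder-side concealment can be sketched in a few lines. This is an illustrative reconstruction, not the thesis's actual implementation: the 16x16 block size, the +/-8 pixel search window, the 2-pixel boundary ring and the mean-absolute-difference criterion are all assumptions.

```python
import numpy as np

def boundary_ring(img, top, left, size, border):
    """Pixels in a ring of width `border` around a (size x size) block."""
    patch = img[top - border:top + size + border,
                left - border:left + size + border].astype(float)
    return np.concatenate([
        patch[:border, :].ravel(),                # top edge
        patch[-border:, :].ravel(),               # bottom edge
        patch[border:-border, :border].ravel(),   # left edge
        patch[border:-border, -border:].ravel(),  # right edge
    ])

def dmve_conceal(prev, curr, top, left, size=16, search=8, border=2):
    """Conceal a lost block in `curr` at (top, left): search `prev` for the
    block whose surrounding ring best matches the intact pixels around the
    loss, then copy that block in (boundary matching in the decoder)."""
    h, w = curr.shape
    target = boundary_ring(curr, top, left, size, border)
    best_err, best_block = np.inf, None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            r, c = top + dy, left + dx
            # skip candidates whose ring would fall outside the frame
            if r < border or c < border or r + size + border > h or c + size + border > w:
                continue
            err = np.abs(boundary_ring(prev, r, c, size, border) - target).mean()
            if err < best_err:
                best_err, best_block = err, prev[r:r + size, c:c + size]
    out = curr.copy()
    out[top:top + size, left:left + size] = best_block
    return out
```

When the pixels surrounding the loss are themselves damaged, a wider ring or a median over the best few candidate blocks would be more robust than this single-best-match sketch.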
APA, Harvard, Vancouver, ISO, and other styles
37

Ali, Khan Syed Irteza. "Classification using residual vector quantization." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/50300.

Full text
Abstract:
Residual vector quantization (RVQ) is a 1-nearest neighbor (1-NN) type of technique. RVQ is a multi-stage implementation of regular vector quantization: an input is successively quantized to the nearest codevector in each stage codebook. Nearest neighbor techniques are attractive for classification because they model the ideal Bayes class boundaries very accurately. However, they require a large representative dataset: since a test input is assigned a class membership only after an exhaustive search of the entire training set, a reasonably large training set can make the implementation cost of a nearest neighbor classifier unfeasibly high. The k-d tree structure offers a far more efficient implementation of 1-NN search, but the cost of storing the data points can still become prohibitive, especially in higher dimensions. RVQ offers a cost-effective implementation of 1-NN-based classification. Because of the direct-sum structure of the RVQ codebook, the memory and computational cost of a 1-NN-based system is greatly reduced. Although the multi-stage implementation of the RVQ codebook compromises the accuracy of the class boundaries compared to an equivalent 1-NN system, the classification error has been empirically shown to be within 3% to 4% of the performance of an equivalent 1-NN-based classifier.
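The multi-stage structure described above can be sketched as follows. This is a minimal illustration, not the thesis's method: the stage count, codebook size and the plain k-means rule used to train each stage codebook on the residuals of the previous stages are all assumptions.

```python
import numpy as np

def rvq_train(data, stages, k, iters=20, seed=0):
    """Train an RVQ: each stage runs k-means on the residuals
    left over by all previous stages."""
    rng = np.random.default_rng(seed)
    residual = data.astype(float)
    codebooks = []
    for _ in range(stages):
        # initialise the stage codebook with k random residual vectors
        cb = residual[rng.choice(len(residual), k, replace=False)].copy()
        for _ in range(iters):
            idx = np.argmin(((residual[:, None, :] - cb[None]) ** 2).sum(-1), axis=1)
            for j in range(k):
                if np.any(idx == j):
                    cb[j] = residual[idx == j].mean(axis=0)
        idx = np.argmin(((residual[:, None, :] - cb[None]) ** 2).sum(-1), axis=1)
        residual = residual - cb[idx]          # pass residuals to the next stage
        codebooks.append(cb)
    return codebooks

def rvq_encode(x, codebooks):
    """Successively quantize x to the nearest codevector in each stage."""
    r = np.asarray(x, float).copy()
    code = []
    for cb in codebooks:
        j = int(np.argmin(((cb - r) ** 2).sum(-1)))
        code.append(j)
        r = r - cb[j]
    return code

def rvq_decode(code, codebooks):
    """Direct-sum reconstruction: add the selected codevector of each stage."""
    return sum(cb[j] for j, cb in zip(code, codebooks))
```

The direct-sum property is what makes the scheme cheap: S stages of k codevectors index k^S reconstruction points while storing and searching only S*k vectors.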
APA, Harvard, Vancouver, ISO, and other styles
38

Natu, Ambarish Shrikrishna Electrical Engineering & Telecommunications Faculty of Engineering UNSW. "Error resilience in JPEG2000." Awarded by: University of New South Wales. Electrical Engineering and Telecommunications, 2003. http://handle.unsw.edu.au/1959.4/18835.

Full text
Abstract:
The rapid growth of wireless communication and widespread access to information has resulted in a strong demand for robust transmission of compressed images over wireless channels. The challenge of robust transmission is to protect the compressed image data against loss in such a way as to maximize the received image quality. This thesis addresses this problem and provides an investigation of a forward error correction (FEC) technique that has been evaluated in the context of the emerging JPEG2000 standard. Little effort had been made in the JPEG2000 project regarding error resilience. The only standardized techniques are based on the insertion of marker codes in the code-stream, which may be used to restore high-level synchronization between the decoder and the code-stream. This helps to localize errors and prevent them from propagating through the entire code-stream. Once synchronization is achieved, additional tools aim to exploit as much of the remaining data as possible. Although these techniques help, they cannot recover lost data. FEC adds redundancy to the bit-stream in exchange for increased robustness to errors. We investigate unequal protection schemes for JPEG2000 by applying different levels of protection to different quality layers in the code-stream. More particularly, the results reported in this thesis provide guidance concerning the selection of JPEG2000 coding parameters and appropriate combinations of Reed-Solomon (RS) codes for typical wireless bit error rates. We find that unequal protection schemes, together with the use of resynchronization markers and some additional tools, can significantly improve the image quality in deteriorating channel conditions. The proposed channel coding scheme is easily incorporated into the existing JPEG2000 code-stream structure, and experimental results clearly demonstrate the viability of our approach.
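The principle of unequal protection, stronger RS codes for the more important quality layers, can be illustrated with a small rate-allocation sketch. The RS(255, k) family, the binary symmetric channel model and the per-layer residual-failure targets below are assumptions for illustration, not the parameter choices reported in the thesis.

```python
from math import comb

def symbol_error_prob(ber, bits_per_symbol=8):
    """Probability that an 8-bit RS symbol is corrupted on a BSC."""
    return 1.0 - (1.0 - ber) ** bits_per_symbol

def block_failure_prob(p_sym, n, t):
    """Probability of more than t symbol errors in an n-symbol block;
    an RS(n, n - 2t) code corrects up to t symbol errors."""
    ok = sum(comb(n, i) * p_sym**i * (1 - p_sym) ** (n - i) for i in range(t + 1))
    return 1.0 - ok

def pick_rs_code(ber, target, n=255):
    """Smallest t (hence highest code rate) meeting the failure target."""
    p = symbol_error_prob(ber)
    for t in range((n - 1) // 2 + 1):
        if block_failure_prob(p, n, t) <= target:
            return n, n - 2 * t, t
    raise ValueError("no RS(n, k) meets the target at this BER")

def uep_plan(ber, targets=(1e-6, 1e-4, 1e-2)):
    """Unequal protection: stricter residual-failure targets (and so more
    parity) for the earlier, more important quality layers."""
    return [pick_rs_code(ber, tgt) for tgt in targets]
```

Running `uep_plan(1e-3)` yields a decreasing amount of parity from the base layer to the last enhancement layer, which is exactly the unequal-protection profile the abstract describes.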
APA, Harvard, Vancouver, ISO, and other styles
39

Ribeiro, Moises Vidal. "Tecnicas de processamento de sinais aplicadas a transmissão de dados via rede eletrica e ao monitoramento da qualidade de energia." [s.n.], 2005. http://repositorio.unicamp.br/jspui/handle/REPOSIP/261263.

Full text
Abstract:
Advisor: João Marcos Travassos Romano
Doctoral thesis - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Made available in DSpace on 2018-08-04T03:58:12Z (GMT). No. of bitstreams: 1 Ribeiro_MoisesVidal_D.pdf: 5330417 bytes, checksum: ebf89b90c9327ce0ba7f3c169b5e260f (MD5) Previous issue date: 2005
Resumo: This thesis proposes and discusses the use of signal processing and computational intelligence techniques to improve digital data transmission over power line networks and power quality analysis in power systems. Regarding data transmission over power lines, new techniques are introduced to solve the problems of impulsive noise cancellation and communication channel equalization. To improve power quality monitoring, new techniques are proposed for spectral analysis of the fundamental and harmonic components, and for the detection, classification and compression of disturbances. The techniques presented in this work are founded on the divide-and-conquer principle, widely used across many fields of knowledge. The appropriate application of this principle through signal processing and computational intelligence techniques allowed us to provide more precise analyses of the problems studied and to propose new solutions for them. The numerical results obtained in computational simulations confirm the relevance of the proposed techniques.
Abstract: This thesis is aimed at proposing and discussing the use of signal processing and computational intelligence techniques to improve digital communications through power line channels and to provide a more precise power quality analysis of power systems. Regarding power line communication applications, advanced techniques for impulse noise mitigation and channel equalization are introduced. For power quality monitoring applications, novel techniques are proposed for spectral analysis of power line signals and for detection, classification and compression of disturbance events. The proposed techniques are developed in light of the divide-and-conquer principle. The appropriate application of this principle, by means of signal processing and computational intelligence techniques, enables us to offer more precise analyses of the problems investigated and novel solutions to them. By introducing a set of signal processing techniques along with some computational intelligence ones, this contribution succeeds in offering improvements for all the problems investigated. Numerical results obtained by computational simulations verify such improvement and confirm the relevance of the techniques proposed.
Doctorate
Telecommunications
Doctor of Electrical Engineering
APA, Harvard, Vancouver, ISO, and other styles
40

"A robust low bit rate quad-band excitation LSP vocoder." Chinese University of Hong Kong, 1994. http://library.cuhk.edu.hk/record=b5888223.

Full text
Abstract:
by Chiu Kim Ming.
Thesis (M.Phil.)--Chinese University of Hong Kong, 1994.
Includes bibliographical references (leaves 103-108).
Chapter 1 --- Introduction --- p.1
1.1 --- Speech production --- p.2
1.2 --- Low bit rate speech coding --- p.4
Chapter 2 --- Speech analysis & synthesis --- p.8
2.1 --- Linear prediction of speech signal --- p.8
2.2 --- LPC vocoder --- p.11
2.2.1 --- Pitch and voiced/unvoiced decision --- p.11
2.2.2 --- Spectral envelope representation --- p.15
2.3 --- Excitation --- p.16
2.3.1 --- Regular pulse excitation and Multipulse excitation --- p.16
2.3.2 --- Coded excitation and vector sum excitation --- p.19
2.4 --- Multiband excitation --- p.22
2.5 --- Multiband excitation vocoder --- p.25
Chapter 3 --- Dual-band and Quad-band excitation --- p.31
3.1 --- Dual-band excitation --- p.31
3.2 --- Quad-band excitation --- p.37
3.3 --- Parameters determination --- p.41
3.3.1 --- Pitch detection --- p.41
3.3.2 --- Voiced/unvoiced pattern generation --- p.43
3.4 --- Excitation generation --- p.47
Chapter 4 --- A low bit rate Quad-Band Excitation LSP Vocoder --- p.51
4.1 --- Architecture of QBELSP vocoder --- p.51
4.2 --- Coding of excitation parameters --- p.58
4.2.1 --- Coding of pitch value --- p.58
4.2.2 --- Coding of voiced/unvoiced pattern --- p.60
4.3 --- Spectral envelope estimation and coding --- p.62
4.3.1 --- Spectral envelope & the gain value --- p.62
4.3.2 --- Line Spectral Pairs (LSP) --- p.63
4.3.3 --- Coding of LSP frequencies --- p.68
4.3.4 --- Coding of gain value --- p.77
Chapter 5 --- Performance evaluation --- p.80
5.1 --- Spectral analysis --- p.80
5.2 --- Subjective listening test --- p.93
5.2.1 --- Mean Opinion Score (MOS) --- p.93
5.2.2 --- Diagnostic Rhyme Test (DRT) --- p.96
Chapter 6 --- Conclusions and discussions --- p.99
References --- p.103
Appendix A Subroutine of pitch detection --- p.A-I - A-III
Appendix B Subroutine of voiced/unvoiced decision --- p.B-I - B-V
Appendix C Subroutine of LPC coefficients calculation using Durbin's recursive method --- p.C-I - C-II
Appendix D Subroutine of LSP calculation using Chebyshev Polynomials --- p.D-I - D-III
Appendix E Single syllable word pairs for Diagnostic Rhyme Test --- p.E-I
APA, Harvard, Vancouver, ISO, and other styles
41

Menezes, Vinod. "Video Compression Through Spatial Frequency Based Motion Estimation And Compensation." Thesis, 1996. https://etd.iisc.ac.in/handle/2005/1900.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Menezes, Vinod. "Video Compression Through Spatial Frequency Based Motion Estimation And Compensation." Thesis, 1996. http://etd.iisc.ernet.in/handle/2005/1900.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

"Associative neural networks: properties, learning, and applications." Chinese University of Hong Kong, 1994. http://library.cuhk.edu.hk/record=b5888340.

Full text
Abstract:
by Chi-sing Leung.
Thesis (Ph.D.)--Chinese University of Hong Kong, 1994.
Includes bibliographical references (leaves 236-244).
Chapter 1 --- Introduction --- p.1
1.1 --- Background of Associative Neural Networks --- p.1
1.2 --- A Distributed Encoding Model: Bidirectional Associative Memory --- p.3
1.3 --- A Direct Encoding Model: Kohonen Map --- p.6
1.4 --- Scope and Organization --- p.9
1.5 --- Summary of Publications --- p.13
Part I --- Bidirectional Associative Memory: Statistical Properties and Learning --- p.17
Chapter 2 --- Introduction to Bidirectional Associative Memory --- p.18
2.1 --- Bidirectional Associative Memory and its Encoding Method --- p.18
2.2 --- Recall Process of BAM --- p.20
2.3 --- Stability of BAM --- p.22
2.4 --- Memory Capacity of BAM --- p.24
2.5 --- Error Correction Capability of BAM --- p.28
2.6 --- Chapter Summary --- p.29
Chapter 3 --- Memory Capacity and Statistical Dynamics of First Order BAM --- p.31
3.1 --- Introduction --- p.31
3.2 --- Existence of Energy Barrier --- p.34
3.3 --- Memory Capacity from Energy Barrier --- p.44
3.4 --- Confidence Dynamics --- p.49
3.5 --- Numerical Results from the Dynamics --- p.63
3.6 --- Chapter Summary --- p.68
Chapter 4 --- Stability and Statistical Dynamics of Second Order BAM --- p.70
4.1 --- Introduction --- p.70
4.2 --- Second order BAM and its Stability --- p.71
4.3 --- Confidence Dynamics of Second Order BAM --- p.75
4.4 --- Numerical Results --- p.82
4.5 --- Extension to higher order BAM --- p.90
4.6 --- Verification of the conditions of Newman's Lemma --- p.94
4.7 --- Chapter Summary --- p.95
Chapter 5 --- Enhancement of BAM --- p.97
5.1 --- Background --- p.97
5.2 --- Review on Modifications of BAM --- p.101
5.2.1 --- Change of the encoding method --- p.101
5.2.2 --- Change of the topology --- p.105
5.3 --- Householder Encoding Algorithm --- p.107
5.3.1 --- Construction from Householder Transforms --- p.107
5.3.2 --- Construction from iterative method --- p.109
5.3.3 --- Remarks on HCA --- p.111
5.4 --- Enhanced Householder Encoding Algorithm --- p.112
5.4.1 --- Construction of EHCA --- p.112
5.4.2 --- Remarks on EHCA --- p.114
5.5 --- Bidirectional Learning --- p.115
5.5.1 --- Construction of BL --- p.115
5.5.2 --- The Convergence of BL and the memory capacity of BL --- p.116
5.5.3 --- Remarks on BL --- p.120
5.6 --- Adaptive Ho-Kashyap Bidirectional Learning --- p.121
5.6.1 --- Construction of AHKBL --- p.121
5.6.2 --- Convergent Conditions for AHKBL --- p.124
5.6.3 --- Remarks on AHKBL --- p.125
5.7 --- Computer Simulations --- p.126
5.7.1 --- Memory Capacity --- p.126
5.7.2 --- Error Correction Capability --- p.130
5.7.3 --- Learning Speed --- p.157
5.8 --- Chapter Summary --- p.158
Chapter 6 --- BAM under Forgetting Learning --- p.160
6.1 --- Introduction --- p.160
6.2 --- Properties of Forgetting Learning --- p.162
6.3 --- Computer Simulations --- p.168
6.4 --- Chapter Summary --- p.168
Part II --- Kohonen Map: Applications in Data Compression and Communications --- p.170
Chapter 7 --- Introduction to Vector Quantization and Kohonen Map --- p.171
7.1 --- Background on Vector quantization --- p.171
7.2 --- Introduction to LBG algorithm --- p.173
7.3 --- Introduction to Kohonen Map --- p.174
7.4 --- Chapter Summary --- p.179
Chapter 8 --- Applications of Kohonen Map in Data Compression and Communications --- p.181
8.1 --- Use Kohonen Map to design Trellis Coded Vector Quantizer --- p.182
8.1.1 --- Trellis Coded Vector Quantizer --- p.182
8.1.2 --- Trellis Coded Kohonen Map --- p.188
8.1.3 --- Computer Simulations --- p.191
8.2 --- Kohonen Map: Combined Vector Quantization and Modulation --- p.195
8.2.1 --- Impulsive Noise in the received data --- p.195
8.2.2 --- Combined Kohonen Map and Modulation --- p.198
8.2.3 --- Computer Simulations --- p.200
8.3 --- Error Control Scheme for the Transmission of Vector Quantized Data --- p.213
8.3.1 --- Motivation and Background --- p.214
8.3.2 --- Trellis Coded Modulation --- p.216
8.3.3 --- Combined Vector Quantization, Error Control, and Modulation --- p.220
8.3.4 --- Computer Simulations --- p.223
8.4 --- Chapter Summary --- p.226
Chapter 9 --- Conclusion --- p.232
Bibliography --- p.236
APA, Harvard, Vancouver, ISO, and other styles
44

Thanh, V. T. Kieu(Vien Tat Kieu). "Post-processing of JPEG decompressed images." Thesis, 2002. https://eprints.utas.edu.au/22088/1/whole_ThanhVienTatKieu2002_thesis.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Bashala, Jenny Mwilambwe. "Development of a new image compression technique using a grid smoothing technique." 2013. http://encore.tut.ac.za/iii/cpro/DigitalItemViewPage.external?sp=1001112.

Full text
Abstract:
M. Tech. Electrical Engineering.
Aims to implement a lossy image compression scheme that uses a graph-based approach. On the one hand, this new method should reach high compression rates with good visual quality; on the other hand, it leads to the following sub-problems: efficient classification of image data using bilateral mesh filtering; transformation of the image into a graph by grid smoothing; reduction of the graph by means of mesh decimation techniques; reconstruction of the reduced graph into an image; and quality analysis of the reconstructed images.
APA, Harvard, Vancouver, ISO, and other styles
46

"Image coding with a lapped orthogonal transform." Chinese University of Hong Kong, 1993. http://library.cuhk.edu.hk/record=b5887718.

Full text
Abstract:
by Patrick Chi-man Fung.
Thesis (M.Sc.)--Chinese University of Hong Kong, 1993.
Includes bibliographical references (leaves 57-58).
LIST OF FIGURES
LIST OF IMAGES
LIST OF TABLES
NOTATIONS
Chapter 1 --- INTRODUCTION --- p.1
Chapter 2 --- THEORY --- p.3
2.1 --- Matrix Representation of LOT --- p.3
2.2 --- Feasibility of LOT --- p.5
2.3 --- Properties of Good Feasible LOT --- p.6
2.4 --- An Optimal LOT --- p.7
2.5 --- Approximation of an Optimal LOT --- p.10
2.6 --- Representation of an Approximately Optimal LOT --- p.13
Chapter 3 --- IMPLEMENTATION --- p.17
3.1 --- Mathematical Background --- p.17
3.2 --- Analysis of LOT Flowgraph --- p.17
3.2.1 --- The Fundamental LOT Building Block --- p.17
3.2.2 --- +1/-1 Butterflies --- p.19
3.3 --- Conclusion --- p.25
Chapter 4 --- RESULTS --- p.27
4.1 --- Objective of Energy Packing --- p.27
4.2 --- Nature of Target Images --- p.27
4.3 --- Methodology of LOT Coefficient Selection --- p.28
4.4 --- dB RMS Error in Pixel Values --- p.29
4.5 --- Negative Pixel Values in Reverse LOT --- p.30
4.6 --- LOT Coefficient Energy Distribution --- p.30
4.7 --- Experimental Data --- p.32
Chapter 5 --- DISCUSSION AND CONCLUSIONS --- p.46
5.1 --- RMS Error (dB) and LOT Coeffs. Drop Ratio --- p.46
5.1.1 --- Numeric Experimental Results --- p.46
5.1.2 --- Human Visual Response --- p.46
5.1.3 --- Conclusion --- p.49
5.2 --- Number of Negative Pixel Values in RLOT --- p.50
5.3 --- LOT Coefficient Energy Distribution --- p.51
5.4 --- Effect of Changing the Block Size --- p.54
REFERENCES --- p.57
APPENDIX
Tables of Experimental Data --- p.59
APA, Harvard, Vancouver, ISO, and other styles
47

Natu, Ambarish Shrikrishna. "Error resilience in JPEG2000 /." 2003. http://www.library.unsw.edu.au/~thesis/adt-NUN/public/adt-NUN20030519.163058/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Bhutani, Meeta. "Comparison of DPCM and Subband Codec performance in the presence of burst errors." Thesis, 1998. http://hdl.handle.net/1957/33508.

Full text
Abstract:
This thesis is a preliminary study of the relative performance of two major speech compression techniques, Differential Pulse Code Modulation (DPCM) and Subband Coding (SBC), in the presence of transmission distortion. The combined effect of the channel distortions and the channel codec, including error correction, is represented by bursts of bit errors. While compression is critical since bandwidth is scarce in a wireless channel, channel distortions are greater and less predictable. Little work has addressed the impact of channel errors on the perceptual quality of speech, owing to the complexity of the problem. At the transmitter, the input signal is compressed to 24 kbps using either DPCM or SBC, quantized, binary encoded and transmitted over the burst error channel. The reverse process is carried out at the receiver. DPCM achieves compression by removing redundant information in successive time-domain samples, while SBC uses lower-resolution quantizers to encode frequency bands of lower perceptual importance. The performance of these codecs is evaluated for BERs of 0.001 and 0.05, with burst lengths varying between 4 and 64 bits. Two different speech segments, one voiced and one unvoiced, are used in testing. Performance measures include two objective tests, signal-to-noise ratio (SNR) and segmental SNR, and a subjective test of perceptual quality, the Mean Opinion Score (MOS). The results obtained show that with a fixed BER and increasing burst length in bits, the total number of errors in the decoded speech decreases, thereby improving its perceptual quality for both DPCM and SBC. Informal subjective tests also demonstrate this trend and indicate that distortion in DPCM seemed to be less perceptually degrading than in SBC.
Graduation date: 1999
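The time-domain prediction that DPCM relies on can be sketched with a toy first-order codec. This is an illustrative sketch only: the uniform quantizer step size and the zero initial predictor are arbitrary assumptions, and a real speech codec would use an adaptive predictor and quantizer.

```python
def dpcm_encode(samples, step=4):
    """First-order DPCM: quantize the difference between each sample and
    the decoder's reconstruction of the previous sample, so that encoder
    and decoder prediction states stay in lockstep."""
    codes = []
    pred = 0.0
    for s in samples:
        q = int(round((s - pred) / step))   # quantized prediction error
        codes.append(q)
        pred = pred + q * step              # mirror the decoder's state
    return codes

def dpcm_decode(codes, step=4):
    """Rebuild the signal by accumulating the dequantized differences."""
    out = []
    pred = 0.0
    for q in codes:
        pred = pred + q * step
        out.append(pred)
    return out
```

Because the encoder predicts from the decoder's reconstruction rather than from the original samples, the quantization error stays bounded by step/2 per sample instead of accumulating.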
APA, Harvard, Vancouver, ISO, and other styles
49

McLaren, David L(David Lionel). "Video and image coding for broadband integrated services digital networks." Thesis, 1992. https://eprints.utas.edu.au/20309/1/whole_McLarenDavidLionel1993_thesis.pdf.

Full text
Abstract:
The growing demand for visual telecommunication services over the last decade has greatly increased the need for efficient image coding and compression schemes. The work presented in this thesis examines several aspects of the problem of coding and compressing the various image and video-based services which are likely to utilize Broadband Integrated Services Digital Networks (BISDNs) in the future. This research has two major thrusts. The first is the development of a general-purpose, high-performance and high-quality image coding and compression scheme for these broadband-based visual services. The second is the development of an accurate traffic source modelling scheme for the Variable BitRate (VBR) packet video traffic produced by the proposed coding scheme. The proposed high-performance visual coding and compression scheme combines both statistical and psychovisual coding techniques to produce an optimum scheme which removes both statistical and psychovisual redundancies from images in the coding process. When used to encode four standard 512x512x8-bit test images, this scheme results in sub-distortion compression ratios of up to 27:1 without the use of any form of interframe coding. An efficient and flexible Asynchronous Transfer Mode (ATM) cell packaging scheme is also developed which allows the production of either 'priority' or 'non-priority' cell streams suitable for transmission over broadband networks. Three suitable VBR packet video traffic source models are developed during this study. Each of these sources reproduces the 'low-level' cell generation process and switches between different 'modes' of cell generation in order to capture the inherent 'burstiness' and 'variability' of typical VBR packet video traffic streams. However, the way in which this cell production process is modelled, as well as the traffic levels which can be reproduced, differs for each of these sources.
A relatively complex, general-purpose traffic source is proposed which is based on the hidden Markov statistical model and is able to model all levels of VBR traffic (up to 20 Mbps). A simpler 'switched-fractal' source is also proposed as an accurate model for the low (below 5 Mbps) VBR traffic only. The third proposed source is a 'switched-Markov' model which specifically caters for the high (5 to 20 Mbps) levels of VBR traffic. Among these three artificial traffic sources, the characteristics of all levels of traffic produced by the proposed video and image coding scheme are able to be reproduced accurately.
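The general idea of a mode-switching ('switched-Markov') traffic source can be sketched as follows. Everything here is an illustrative assumption, not the model fitted in the thesis: two modes, Poisson cell counts per frame interval, and the particular rates and switching probabilities.

```python
import math
import random

def switched_markov_source(frames, rates=(5.0, 40.0), p_switch=(0.05, 0.10), seed=0):
    """Markov-modulated cell generator: in each frame interval the source
    emits a Poisson number of cells at the current mode's mean rate, then
    switches mode with a mode-dependent probability."""
    rng = random.Random(seed)
    state, cells = 0, []
    for _ in range(frames):
        # Poisson sample via Knuth's multiplication method
        thresh, k, p = math.exp(-rates[state]), 0, 1.0
        while p > thresh:
            k += 1
            p *= rng.random()
        cells.append(k - 1)
        # Markov mode transition
        if rng.random() < p_switch[state]:
            state = 1 - state
    return cells
```

Mixing a low-rate and a high-rate mode in this way produces a cell count sequence whose variance greatly exceeds its mean, reproducing the burstiness that a single fixed-rate Poisson source cannot capture.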
APA, Harvard, Vancouver, ISO, and other styles
50

Varshneya, Virendra K. "Distributed Coding For Wireless Sensor Networks." Thesis, 2005. https://etd.iisc.ac.in/handle/2005/1409.

Full text
APA, Harvard, Vancouver, ISO, and other styles
