
Dissertations / Theses on the topic 'H.263'


Consult the top 50 dissertations / theses for your research on the topic 'H.263'.


1

Shaffer, Robert. "Transmission of H.263 video over ATM networks." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape8/PQDD_0003/MQ40108.pdf.

Full text
2

Lee, Chang-Ming. "Outils de codage source-canal conjoint pour la transmission robuste de vidéo : application à H.263+ et H.264." Paris 11, 2004. http://www.theses.fr/2004PA112290.

Full text
Abstract:
This thesis addresses the robust transmission of video sequences over communication channels consisting of an Internet link followed by a mobile radio channel. It proposes two solutions for increasing the robustness of H.263+ and H.264 video coders against transmission errors. The bitstream generated by a video coder carries three types of information: headers, motion vectors and texture; the techniques proposed here concern the last two. The first part modifies the coder so that motion vectors no longer need to be transmitted in the bitstream: a spectral-domain property is imposed on each macroblock of the original image, and this property is then exploited at the decoder to estimate the motion vector associated with each macroblock. The second part is devoted to a texture-decoding tool that exploits the soft information provided by the transmission channel, as well as the structure imposed on the bitstream by the coder and its packetization. The resulting decoder remains fully compatible with the existing standard while providing significant quality gains. The performance of both parts is encouraging; a natural next step is to combine the two techniques in order to obtain a video coder that transmits no motion vectors and still decodes texture efficiently.
3

Leung, Spencer. "Design and implementation of a Java H.263/G.726 decoder." Thesis, University of Ottawa (Canada), 2002. http://hdl.handle.net/10393/6402.

Full text
Abstract:
In this thesis, we present a decoder that is capable of performing real-time decoding of video and audio streams on Java-enabled platforms. We decompose the design phase into two stages: the declaration of design objectives and the proposition of a high level design. The design objectives are to build a decoder that is efficient, small in size, and adaptable to different platforms and network environments. The high level design is composed of key algorithms and major building blocks of the decoder. We document some challenges encountered during the implementation phase, some of which required revision of the initial design. We conduct test cases under some representative network environments to demonstrate the adaptability of the decoder. Lastly, we identify the work areas to be extended.
4

VALENZUELA, Victor Enrique Vermehren. "Proposta de um esquema de codificação De vídeo a baixas taxas de transmissão Para comunicações móveis celulares." Universidade Federal de Pernambuco, 2006. https://repositorio.ufpe.br/handle/123456789/5491.

Full text
Abstract:
Video coding below 64 kbps is essential for services and applications involving videophones and multimedia systems. The growing interest in and demand for mobile telephony, interactive TV and multimedia services has motivated video coding research worldwide. This work presents an error-cancellation and synchronization-recovery technique for the H.263 decoder, intended to improve the performance of video decoding at low transmission rates.
5

August, Nathaniel J. "On the Low Power Design of DCT and IDCT for Low Bit Rate Video Codecs." Thesis, Virginia Tech, 2001. http://hdl.handle.net/10919/32125.

Full text
Abstract:
Wireless video systems have applications in cellular videophones, surveillance systems, and mobile patrols. The design of a wireless video system must consider two important constraints: low bit rate and low power dissipation. The ITU-T H.263 video codec standard is suitable for low bit rate wireless video systems; however, it is computationally intensive. Some of the most computationally intensive operations in H.263 are the Discrete Cosine Transform (DCT) and the Inverse Discrete Cosine Transform (IDCT), which perform spatial compression and decompression of the data. In an ASIC implementation of H.263, the high computational complexity of the DCT and IDCT leads to high power dissipation in these blocks, so low power design of the DCT and IDCT is essential in a portable wireless video system. This thesis examines low power design techniques for DCT and IDCT circuits applicable to low bit rate wireless video systems. Five low power techniques are applied to baseline reference DCT and IDCT circuits: skipping low-energy DCT input, skipping all-zero IDCT input, low-precision constant multipliers, clock gating, and a low-transition data path. Gate-level simulations characterize the effectiveness of each technique. The combination of all techniques reduces average power dissipation by 95% over the baseline reference DCT and IDCT blocks.
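The zero-input skipping idea mentioned in this abstract is easy to picture in software: if an 8×8 coefficient block is entirely zero, its IDCT output is known to be zero and the transform arithmetic can be bypassed; similarly, a near-zero-energy residual block need not be transformed at all. The sketch below is only an illustration of that idea in Python, not the thesis's gate-level design; the energy threshold and the helper names are assumptions.

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix C, so that Y = C @ X @ C.T
    k = np.arange(n)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

C = dct_matrix(8)

def idct_8x8_skip_zero(coeffs):
    """Inverse-transform an 8x8 block, skipping the arithmetic for all-zero input."""
    if not np.any(coeffs):            # all-zero block: the IDCT output is trivially zero
        return np.zeros((8, 8))
    return C.T @ coeffs @ C           # regular separable inverse DCT-II

def dct_8x8_skip_low_energy(block, threshold=1e-3):
    """Forward-transform an 8x8 residual block, skipping near-zero-energy input."""
    if np.sum(block.astype(np.float64) ** 2) < threshold:
        return np.zeros((8, 8))       # treat as a skipped (all-zero) block
    return C @ block.astype(np.float64) @ C.T
```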
6

Silva, Eduardo Peixoto Fernandes da. "Transcodificador de vídeo wyner-ziv/h.263 para comunicação entre dispositivos móveis." reponame:Repositório Institucional da UnB, 2008. http://repositorio.unb.br/handle/10482/1866.

Full text
Abstract:
Master's dissertation, Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Elétrica, 2008.
In mobile-to-mobile video communications, neither the transmitting nor the receiving terminal may have the computing power needed to perform complex video compression and decompression tasks. Traditional video codecs typically pair a highly complex encoder with a less complex decoder, whereas Wyner-Ziv coding allows a low-complexity encoder at the price of a more complex decoder. This work proposes a video communication system in which the transmitter uses a Wyner-Ziv (reverse-complexity) encoder while the receiver uses a traditional decoder, thereby minimizing complexity at both ends. For such a system to work, a transcoder must be inserted into the network to convert the video stream. An efficient transcoder is presented that converts a sequence coded with a simple Wyner-Ziv encoder to the H.263 standard; this approach saves a large amount of computation by, among other things, reusing the motion estimation performed at the Wyner-Ziv decoding stage. A pixel-domain Wyner-Ziv codec was implemented for the development of the transcoder. Besides reusing the motion vectors computed during Wyner-Ziv decoding, the transcoder also offers several options, such as changing the GOP length of the transcoded sequence and refining the motion vectors. Extensive tests were carried out to evaluate the proposed transcoder and its optional modes using popular video sequences such as Foreman, Salesman, Carphone and Coastguard.
7

Richmond, II Richard Steven. "A Low-Power Design of Motion Estimation Blocks for Low Bit-Rate Wireless Video Communications." Thesis, Virginia Tech, 2001. http://hdl.handle.net/10919/31458.

Full text
Abstract:
Motion estimation and motion compensation comprise one of the most important compression methods for video communications. We propose a low-power design of a motion estimation block for the low bit-rate video codec standard H.263. Since motion estimation is computationally intensive and results in large power consumption, a low-power design is essential for portable and mobile systems. Our block employs the Four-Step Search (4SS) method as its primary algorithm. The design and the algorithm have been optimized to provide adequate results for low-quality video at low power consumption. The model is developed in VHDL and synthesized using a 0.35 um CMOS library. The power consumption of both gate-level circuits and memory accesses has been considered. Gate-level simulation shows the proposed design offers a 38% power reduction over a baseline implementation of a 4SS model and a 60% power reduction over a baseline Three-Step Search (TSS) model. Power savings through reduced memory access are 26% over the TSS model and 32% over the 4SS model. The total power consumption of the proposed motion estimation block ranges from 7 to 9 mW, depending on the type of video being motion estimated.
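For reference, a software sketch of the Four-Step Search named above is given below. It follows the usual 4SS description (a 9-point pattern with step size 2, repeated up to three times and recentred on the best match, followed by a final 3×3 step-1 search); it is a plain Python illustration under those assumptions, not the thesis's VHDL design, and it assumes the block and its search window lie entirely inside both frames.

```python
import numpy as np

def sad(cur, ref, bx, by, dx, dy, n=16):
    """Sum of absolute differences between a current block and a displaced reference block."""
    return np.abs(cur[by:by + n, bx:bx + n].astype(int)
                  - ref[by + dy:by + dy + n, bx + dx:bx + dx + n].astype(int)).sum()

def four_step_search(cur, ref, bx, by, n=16, max_disp=7):
    best = (0, 0)
    best_cost = sad(cur, ref, bx, by, 0, 0, n)
    step = 2
    for stage in range(3):                      # up to three coarse 9-point stages
        centre = best
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                cand = (centre[0] + dx, centre[1] + dy)
                if max(abs(cand[0]), abs(cand[1])) > max_disp:
                    continue
                cost = sad(cur, ref, bx, by, cand[0], cand[1], n)
                if cost < best_cost:
                    best_cost, best = cost, cand
        if best == centre:                      # best stayed at the centre: go to the fine step
            break
    centre = best
    for dy in (-1, 0, 1):                       # final 3x3 search with step 1
        for dx in (-1, 0, 1):
            cand = (centre[0] + dx, centre[1] + dy)
            cost = sad(cur, ref, bx, by, cand[0], cand[1], n)
            if cost < best_cost:
                best_cost, best = cost, cand
    return best, best_cost
```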
8

Yilmaz, Ayhan. "Robust Video Transmission Using Data Hiding." Master's thesis, METU, 2003. http://etd.lib.metu.edu.tr/upload/1093509/index.pdf.

Full text
Abstract:
Video transmission over noisy wireless channels leads to errors in the video, which degrade the visual quality notably and make error concealment indispensable. In the literature, there are several error concealment techniques based on estimating the lost parts of the video from the available data. Using data hiding for this problem, as an alternative to predicting the lost data, provides reserve information about the video to the receiver without changing the transmitted bit-stream syntax and hence improves the reconstructed video quality without significant extra channel utilization. A complete error-resilient video transmission codec is proposed, utilizing imperceptibly embedded information for combined detection, resynchronization and reconstruction of errors and lost data. The data, which is imperceptibly embedded into the video itself at the encoder, is extracted from the video at the decoder side to be used in error concealment. A spatial-domain error recovery technique, which hides the edge orientation information of a block, and a resynchronization technique, which embeds the bit length of a block into other blocks, are combined, along with some parity information about the hidden data, to conceal channel errors in intra-coded frames of a video sequence. Errors in inter-coded frames are recovered mainly by hiding motion vector information, along with a checksum, in the next frames. The simulation results show that the proposed approach performs superior to conventional approaches for concealing errors in binary symmetric channels, especially at higher bit rates and error rates.
9

Akdag, Sadik Bahaettin. "An Image Encryption Algorithm Robust To Post-encryption Bitrate Conversion." Master's thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/12607710/index.pdf.

Full text
Abstract:
In this study, a new method is proposed to protect JPEG still images through encryption by employing integer-to-integer transforms and frequency-domain scrambling in DCT channels. Unlike existing methods in the literature, the encrypted image can be further compressed, i.e. transcoded, after the encryption. The method provides a selectable encryption/security level through the adjustment of its parameters. The encryption method is tested with various images and compared with methods in the literature in terms of scrambling performance, bandwidth expansion, key size and security. Furthermore, the method is applied to H.263 video sequences for the encryption of I-frames.
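Frequency-domain scrambling of the kind described here can be pictured as a keyed shuffling of quantized DCT coefficients inside each block, which leaves the data decodable (as noise) and still compressible. The snippet below is a generic illustration of that idea, not the author's algorithm; the 8×8 block size, the AC-only scrambling and the keying scheme are assumptions made for the example.

```python
import numpy as np

def scramble_block(coeffs, key, block_id):
    """Keyed permutation of the AC coefficients of one quantized 8x8 DCT block."""
    rng = np.random.default_rng([key, block_id])   # per-block keyed pseudo-random stream
    flat = coeffs.flatten()
    ac = flat[1:].copy()                           # leave the DC coefficient untouched
    perm = rng.permutation(ac.size)
    flat[1:] = ac[perm]
    return flat.reshape(coeffs.shape), perm

def descramble_block(coeffs, perm):
    """Invert the keyed permutation given the same permutation."""
    flat = coeffs.flatten()
    ac = np.empty_like(flat[1:])
    ac[perm] = flat[1:]
    flat[1:] = ac
    return flat.reshape(coeffs.shape)
```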
10

Ramadoss, Balaji. "Vector Flow Model in Video Estimation and Effects of Network Congestion in Low Bit-Rate Compression Standards." [Tampa, Fla.] : University of South Florida, 2003. http://purl.fcla.edu/fcla/etd/SFE0000139.

Full text
11

Tamanna, Sina. "Transcoding H.265/HEVC." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2670.

Full text
Abstract:
Video transcoding is the process of converting compressed video signals to adapt video characteristics such as bit rate, resolution, or codec, so as to meet the specifications of communication channels and endpoint devices. A straightforward transcoding solution is to fully decode and re-encode the video; however, this method is computationally expensive and thus unsuitable in applications with tight resource constraints, such as software-based real-time environments. Therefore, efficient transcoding methods are required to reduce the transcoding complexity while preserving video quality. Prior transcoding methods target video coding standards such as H.264/AVC and MPEG-2. H.265/HEVC has introduced new coding concepts, e.g. the quad-tree-based block structure, that are fundamentally different from those in prior standards. These concepts require existing transcoding methods to be adapted and novel solutions to be developed. This work primarily addresses the issue of efficient HEVC transcoding for bit rate adaptation (reduction). The goal is to understand the transcoding behaviour for some straightforward transcoding strategies, and to subsequently optimize the complexity/quality trade-off by providing heuristics that reduce the number of coding options to evaluate. A transcoder prototype is developed based on the HEVC reference software HM-8.2. The proposed transcoder reduces the transcoding time compared to full decoding and encoding by at least 80% while inducing a coding performance drop within a margin of 5%. The thesis has been carried out in collaboration with Ericsson Research in Stockholm.

Video content is produced daily through a variety of electronic devices; however, storing and transmitting video signals in raw format is impractical due to their excessive resource requirements. Today, popular video coding standards such as MPEG-4 and H.264 are used to compress video signals before storage and transmission; accordingly, efficient video coding plays an important role in video communications. As video applications become widespread, there is a need for high-compression, low-complexity video coding algorithms that preserve image quality. Standards organizations, including ISO and ITU-T's VCEG, together with many collaborating companies, have developed video coding standards in the past to meet the video coding requirements of the day. The Advanced Video Coding (AVC/H.264) standard is the most widely used video coding method; it is commonly known as one of the major standards used in Blu-ray devices for video compression and is also widely used by video streaming services, TV broadcasting, and video conferencing applications. Currently the most important development in this area is the introduction of the H.265/HEVC standard, finalized in January 2013, whose aim is a video compression specification capable of compressing twice as effectively as H.264/AVC in terms of coding complexity and quality. There is a wide range of platforms that receive digital video: TVs, personal computers, mobile phones, and tablets each have different computational, display, and connectivity capabilities, so video has to be converted to meet the specifications of the target platform. This conversion is achieved through video transcoding, for which the straightforward solution is to decode the compressed video signal and re-encode it to the target compression format, but this process is computationally complex. Particularly in real-time applications, there is a need to exploit the information that is already available in the compressed video bitstream to speed up the conversion. The objective of this thesis is to investigate efficient transcoding methods for HEVC. Using decode/re-encode as the performance reference, methods for advanced transcoding are investigated.
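One common family of heuristics for the kind of bit-rate-reduction transcoding described above is to restrict the quad-tree search in the re-encoder to the neighbourhood of the CU depths found in the decoded bitstream. The sketch below expresses that idea in plain Python; the ±1 depth window and the `rd_cost` callback are illustrative assumptions, not the specific strategies evaluated in the thesis.

```python
def candidate_depths(decoded_depth, min_depth=0, max_depth=3):
    """Restrict the CU depths to re-evaluate to a window around the input depth."""
    return [d for d in range(decoded_depth - 1, decoded_depth + 2)
            if min_depth <= d <= max_depth]

def transcode_ctu(ctu, decoded_depth, rd_cost):
    """Pick the best CU depth among the restricted candidates.

    rd_cost(ctu, depth) is assumed to return the rate-distortion cost of
    re-encoding the CTU at that depth with the new (lower) target bit rate.
    """
    costs = {d: rd_cost(ctu, d) for d in candidate_depths(decoded_depth)}
    return min(costs, key=costs.get)
```

The point of the heuristic is that most of the encoder's time goes into evaluating split decisions, so pruning the depth range around the incoming decision trades a small quality loss for a large complexity reduction.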
12

Manoel, Edson Tadeu Monteiro. "Codificação de vídeo H.264." Florianópolis, SC, 2007. http://repositorio.ufsc.br/xmlui/handle/123456789/90522.

Full text
Abstract:
Master's dissertation, Universidade Federal de Santa Catarina, Centro Tecnológico, Programa de Pós-Graduação em Engenharia Elétrica.
This dissertation deals with digital video coding and compression, focusing in particular on the recent H.264 coding standard. Interest in this standard has grown considerably, mainly for use in new digital video storage and transmission systems. H.264 is a current video standard with very good performance: a bit rate about 50% lower than that of its predecessor MPEG-2 at the same quality. This dissertation presents two extensions (enhancements) to the H.264 standard aimed at further improving its performance, that is, increasing the quality of the compressed signal at the same bit rate, or reducing the bit rate at the same quality. The extensions build on the fact that some types of macroblocks (segments of the video signal) contain a small distinct region that usually has a negative influence on the bit rate. The main characteristics of video coding and of the H.264 standard are presented first, followed by a detailed treatment of the processes related to the enhancements, mainly macroblock coding, prediction and Lagrangian rate-distortion optimization. To evaluate the new coding modes, the reference model (JM) implementation of the H.264 standard is modified to include these extensions in both the encoder and the decoder. The results of the proposed modifications are evaluated on several standard test sequences and indicate that the enhancements are suitable for inclusion in the standard.
13

Krivoklatský, Filip. "Návrh vestavaného systému inteligentného vidění na platformě NVIDIA." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2019. http://www.nusl.cz/ntk/nusl-400627.

Full text
Abstract:
This diploma thesis deals with the design of an embedded computer vision system and the transfer of an existing computer vision application for 3D object detection from Windows OS to the designed embedded system running Linux. The thesis focuses on the design of a communication interface for system control and for camera video transfer over a local network with video compression. The detection algorithm is then accelerated by moving computationally expensive functions to the GPU using CUDA technology. Finally, a user application with a graphical interface is designed for controlling the system from Windows.
14

Kriščiūnas, Eugenijus. "Atvirojo kodo vaizdo kodavimo H.264 realizacijų, aprašytų aparatūros aprašymo kalbomis, tyrimas." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2013. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2012~D_20131104_110234-26740.

Full text
Abstract:
H.264/AVC is based on traditional video coding concepts but, compared with earlier standards, with certain important differences. The most significant are an increased motion estimation capability, small block sizes with an exact transform, an adaptive deblocking filter and improved entropy coding methods. Compared with MPEG-2, H.264/AVC achieves a coding gain of more than 50 percent across all PSNR ranges; the standard performs considerably better than all previous standards, owing to its increased coding flexibility, at the cost of higher complexity.
Video coding is the entire process of compressing and decompressing a digital video signal. The mainstream video compression tools developed over the past several years by the Video Coding Experts Group (VCEG) and the Moving Picture Experts Group (MPEG) are briefly introduced. Since their invention in the early 1990s, modern digital video compression techniques have played an important role in telecommunication and multimedia systems, where bandwidth is still a valuable commodity. The evolution from the early MPEG-1/H.261 codecs to the current H.264/AVC gradually improves coding efficiency at the cost of design complexity: compared with prior standards, H.264/AVC achieves nearly doubled coding gain, while encoder and decoder complexity increase roughly 5 to 10 and 2 to 3 times respectively.
15

Al-Muscati, Hussain. "Scalable transcoding of H.264 video." Thesis, McGill University, 2010. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=92256.

Full text
Abstract:
Digital video transcoding provides a low-complexity mechanism to convert a coded video stream from one compression standard to another. This conversion should be achieved while maintaining a high visual quality. The recent emergence and standardization of the scalable extension of the H.264 standard, together with the large availability of encoded H.264 single-layer content, places great importance on developing a transcoding mechanism that converts from the single-layer to the scalable form.

In this thesis, transcoding of a single-layer H.264/AVC stream to an H.264/SVC stream with combined spatial-temporal scalability is achieved through the use of a heterogeneous video transcoder in the pixel domain. This architecture is chosen as a compromise between complexity and reconstruction quality.

In this transcoder, the input H.264/AVC stream is fully decoded. The macroblock coding modes and partitioning decisions are reused to encode the output H.264/SVC stream. A set of new motion vectors is computed from the motion vectors coded in the input stream. This extracted and modified information is downsampled, together with the decoded frames, in order to provide multiple scalable layers. The newly computed motion vectors are further subjected to a 3-pixel refinement. The output stream is coded with either a hierarchical B-frame or a zero-delay referencing structure.

The performance of the proposed transcoder is validated through simulation results. These simulations compare both the compression efficiency (PSNR/bit rate) and the computational complexity (computation time) of the implemented transcoding scheme to a setup that performs a full decoding followed by a full encoding of the incoming video stream. A significant decrease in computational complexity is achieved, with a reduction of over 60% in some cases, while maintaining only a small loss in compression efficiency.
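The motion-vector handling described above (scaling the AVC vectors for a lower-resolution layer and then refining them) can be sketched as follows. This is a simplified pixel-domain illustration under assumed helper names; the actual transcoder works on quarter-pel H.264 vectors and macroblock partitions, and the sketch assumes the refined search window stays inside the reference frame.

```python
import numpy as np

def downsample_mv(mv, ratio=2):
    """Scale a motion vector for a spatially downsampled layer (e.g. CIF -> QCIF)."""
    return (int(round(mv[0] / ratio)), int(round(mv[1] / ratio)))

def refine_mv(cur, ref, bx, by, mv, radius=3, n=16):
    """Search a small window (here +/-3 pixels) around a predicted motion vector."""
    def sad(dx, dy):
        return np.abs(cur[by:by + n, bx:bx + n].astype(int)
                      - ref[by + dy:by + dy + n, bx + dx:bx + dx + n].astype(int)).sum()

    best, best_cost = tuple(mv), sad(*mv)
    for dy in range(mv[1] - radius, mv[1] + radius + 1):
        for dx in range(mv[0] - radius, mv[0] + radius + 1):
            cost = sad(dx, dy)
            if cost < best_cost:
                best_cost, best = cost, (dx, dy)
    return best
```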
16

Haywood, Richard James. "H.264 Data Partitioned Video Streaming." Thesis, Aston University, 2009. http://publications.aston.ac.uk/15320/.

Full text
Abstract:
Motivated by the increasing demand for, and challenges of, video streaming, in this thesis we investigate methods by which the quality of the video can be improved. We utilise overlay networks created by implemented relay nodes to produce path diversity, and show through analytical and simulation models in which environments path diversity can improve the packet loss probability. We take the simulation and analytical models further by implementing a real overlay network on top of PlanetLab, and show that when the network conditions remain constant the video quality received by the client can be improved. In addition, we show that in the environments where path diversity improves the video quality, forward error correction can be used to further enhance the quality. We then investigate the effect of the IEEE 802.11e Wireless LAN standard with quality of service enabled on the video quality received by a wireless client, and find that assigning all the video to a single class outperforms a cross-class assignment scheme proposed by other researchers. The issue of virtual contention at the access point is also examined. We increase the intelligence of our relay nodes and enable them to cache video. To maximise the usefulness of these caches, we introduce a measure called the PSNR profit and present an optimal caching method for achieving the maximum PSNR profit at the relay nodes, where partitioned video contents are stored and provide enhanced quality for the client. We also show that with the optimised cache the degradation in the video quality received by the client is more graceful than with the non-optimised system when the network experiences packet loss or is congested.
17

Erdogan, Baran. "Real-time Video Encoder On Tmsc6000 Platform." Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/12605594/index.pdf.

Full text
Abstract:
Technology is integrated into daily life more than ever as it evolves through the communication area. In the past, communication started with audio devices that let people talk across the two ends of a communication line; nowadays visual communication has come to the forefront. This became possible with improvements in visual data compression techniques and with the increasing speed and optimized architecture of new processor families, known as Digital Signal Processors (DSPs). The Texas Instruments TMS320C6000 Digital Signal Processor family offers one of the fastest DSP cores on the market. The TMS320C64x sub-family was newly developed under the TMS320C6000 family to overcome the disadvantages of its predecessor, the TMS320C62x. The TMS320C64x family has an architecture optimized for packed data processing, improved data paths and functional units, an improved memory architecture and increased speed. These capabilities make this family of processors a good candidate for real-time video processing applications. The advantages of this core are used to implement the newly established H.264 Recommendation. The highly optimizing C compiler of the TMS320C64x enabled a fast-running implementation of the encoder blocks that place a heavy computational load on the encoder, so fast implementations of motion estimation, the transform and entropy coding became possible. The Simplified Densely Centered Uniform-P Search algorithm is used for fast estimation of motion vectors, and the time-consuming parts were enhanced to improve the performance of the encoder.
18

Mazataud, Camille. "Error concealment for H.264 video transmission." Thesis, Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/34715.

Full text
Abstract:
Video coding standards such as H.264 AVC (Advanced Video Coding) rely on predictive coding to achieve high compression efficiency. Predictive coding consists of predicting each frame using preceding frames. However, predictive coding incurs a cost when transmitting over unreliable networks: frames are no longer independent, and the loss of data in one frame may affect future frames. In this thesis, we study the effectiveness of Flexible Macroblock Ordering (FMO) in mitigating the effect of errors on the decoded video and propose solutions to improve error concealment in H.264 decoders. After introducing the subject matter, we present the H.264 profiles and briefly determine their intended applications. Then we describe FMO and justify its usefulness for transmission over lossy networks; more precisely, we study its cost in terms of overheads and the improvement it offers in visual quality for damaged video frames. The unavailability of FMO in most H.264 profiles leads us to design a lossless FMO removal scheme, which allows the playback of FMO-encoded video on non-FMO-compliant decoders. We describe the process of removing the FMO structure but also underline some limitations that prevent the application of the scheme. We then assess the induced overheads and propose a model to predict these overheads when FMO Type 1 is employed. Finally, we develop a new error concealment method to enhance video quality without relying on channel feedback. This method is shown to be superior to existing methods, including those in the JM reference software, and can be applied to compensate for the limitations of the proposed FMO-removal scheme. After introducing the new method, we evaluate its performance and compare it with some classical algorithms.
19

Wang, Ying. "Analysis Application for H.264 Video Encoding." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-133633.

Full text
Abstract:
A video analysis application ERANA264 (Ericsson Research h.264 videoANalysis Application) is developed in this project. Erana264 is a tool that analyzes H.264 encoded video bit streams, extracts the encoding information and parameters, analyzes them in different stages and displays the results in a user friendly way. The intention is that such an application would be used during development and testing of video codecs. The work is implemented on top of existing H.264 encoder/decoder source code (C/C++) developed at Ericsson Research. Erana264 consists of three layers. The first layer is the H.264 decoder previously developed in Ericsson Research. By using the decoder APIs, the information is extracted from the bit stream and is sent to the higher layers. The second layer visualizes the different decoding stages, uses overlay to display some macro block and picture level information and provides a set of play back functions. The third layer analyzes and presents the statistics of prominent parameters in video compression process, such as video quality measurements, motion vector distribution, picture bit distribution etc.
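Several of the statistics mentioned here, for instance the per-frame video quality measurement, reduce to standard formulas. As a point of reference, a minimal PSNR computation for 8-bit frames is shown below; it is generic illustrative code, not code taken from ERANA264.

```python
import numpy as np

def psnr(original, decoded, peak=255.0):
    """Peak Signal-to-Noise Ratio (dB) between two 8-bit frames of equal size."""
    mse = np.mean((original.astype(np.float64) - decoded.astype(np.float64)) ** 2)
    if mse == 0:
        return float('inf')          # identical frames
    return 10.0 * np.log10(peak * peak / mse)
```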
20

Eklund, Anders. "Image coding with H.264 I-frames." Thesis, Linköping University, Department of Electrical Engineering, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-8920.

Full text
Abstract:
In this thesis work a part of the video coding standard H.264 has been implemented. The part of the video coder that is used to code the I-frames has been implemented to see how well suited it is for regular image coding. The big difference versus other image coding standards, such as JPEG and JPEG2000, is that this video coder uses both a predictor and a transform to compress the I-frames, while JPEG and JPEG2000 only use a transform. Since the prediction error is sent instead of the actual pixel values, many of the values are zero or close to zero before the transformation and quantization. The method is thus much like a video encoder, with the difference that blocks of an image are predicted instead of frames in a video sequence.
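The point that I-frame blocks are predicted from already-coded neighbours before the transform, so that mostly small residuals are transformed, can be illustrated with a toy DC-style predictor. This is a didactic sketch, not the standard's nine 4×4 intra modes: the predictor below is simply the mean of the reconstructed pixels above and to the left of the block.

```python
import numpy as np

def dc_predict(recon, x, y, n=4):
    """Predict an n x n block as the mean of its reconstructed top and left neighbours."""
    neighbours = []
    if y > 0:
        neighbours.append(recon[y - 1, x:x + n])
    if x > 0:
        neighbours.append(recon[y:y + n, x - 1])
    if not neighbours:
        return np.full((n, n), 128.0)            # no neighbours available: mid-grey default
    return np.full((n, n), np.mean(np.concatenate(neighbours)))

def intra_residual(frame, recon, x, y, n=4):
    """Residual that is transformed and quantized instead of the raw pixel values."""
    return frame[y:y + n, x:x + n].astype(np.float64) - dc_predict(recon, x, y, n)
```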
21

Chowdhury, Sharmeen 1966. "CCITT recommendation H.261 video codec implementation." Thesis, The University of Arizona, 1992. http://hdl.handle.net/10150/292041.

Full text
Abstract:
Video communication has advanced significantly over the last decade. Low bit rate video coding and low cost packet-switching network access have made video communication practical and cost-effective. CCITT has recommended a compression standard (H.261) with a rate of p × 64 kbit/s for p = 1 to 30. The key elements of H.261 are: (1) interframe compensation, (2) motion compensation, (3) the discrete cosine transform (DCT), (4) quantization, and (5) coding. For interframe compensation, only the difference between two consecutive frames is transmitted. In motion compensation, a spatial displacement vector is derived. The DCT is used to convert spatial data into spatial frequency coefficients. All transformed coefficients are quantized with a uniform quantizer whose step size is adjusted according to the buffer occupancy. Quantized coefficients are encoded using both fixed and variable length coding. At the decoder, the inverse of the compression operation is performed. In this thesis, a detailed description of H.261 and its implementation in software are provided.
22

Waheed, Abdul-Mohammed. "Optimization on H.264 De-blocking Filter." Thesis, Blekinge Tekniska Högskola, Avdelningen för signalbehandling, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-1235.

Full text
Abstract:
H.264/AVC is the state-of-the-art video coding standard, promising the same video quality at about half the bit rate of previous standards (H.263, MPEG-2). This achievement in compression and perceptual quality is due to the inclusion of various innovative tools. These tools are highly complex and data-intensive and, as a result, place a very heavy computational burden on the processor. The de-blocking filter is one of them: it is the most time-consuming part of the H.264/AVC reference decoder. In this thesis, a performance analysis of the de-blocking filter is made on an Intel Pentium 4 processor, and various optimization techniques are studied and implemented accordingly. For some techniques a statistical analysis of video data is performed and the optimization follows from the results obtained; for other techniques SIMD instructions are used. Comparison of the SIMD-optimized techniques with the reference software shows a significant speedup, contributing to a real-time implementation of the de-blocking filter on a general-purpose platform.
The de-blocking filter is the most time-consuming part of the H.264 High Profile decoder, and the filtering process specified in the H.264/AVC standard is sequential and thus not computationally optimal. In this thesis various optimization algorithms have been studied and implemented. Compared with the JM13.2 boundary-strength algorithm, the Static and ICME algorithms are quite primitive; as a result no performance gain is achieved and there is in fact a decrease in performance. This poor performance has several causes, chief among them increased memory access, unrolling of the loop to the 4x4 boundary and early detection of intra blocks. For the optimization of the edge-filtering module, both algorithms (SIMD and the fast algorithm) showed a significant improvement in performance compared with the JM13.2 edge-filtering algorithm, mainly due to the parallel filtering operations performed in the edge-filtering module. Therefore, by using SSE2 instructions a large speedup can be achieved on general-purpose processors such as Intel's, while maintaining conformance with the standard.
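For orientation, the boundary-strength (bS) decision that drives the filtering discussed above follows a small set of rules in the standard: stronger filtering across intra macroblock edges, weaker or no filtering when neighbouring blocks share motion and have no coded coefficients. The function below is a simplified paraphrase of those rules for the luma case and should be read as an approximation, not a normative implementation; the dictionary-based block description is an assumption of the sketch.

```python
def boundary_strength(p, q, on_mb_edge):
    """Approximate H.264 luma boundary strength for the edge between 4x4 blocks p and q.

    p and q are dicts with keys: 'intra' (bool), 'coeffs' (bool, any nonzero
    coefficients), 'mv' ((x, y) in quarter-pel units), 'ref' (reference index).
    """
    if p['intra'] or q['intra']:
        return 4 if on_mb_edge else 3           # strongest filtering at intra MB edges
    if p['coeffs'] or q['coeffs']:
        return 2                                # residual coefficients present
    if (p['ref'] != q['ref']
            or abs(p['mv'][0] - q['mv'][0]) >= 4
            or abs(p['mv'][1] - q['mv'][1]) >= 4):
        return 1                                # motion discontinuity of one pel or more
    return 0                                    # no filtering needed
```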
23

ASLAM, UMAIR. "H.264 CODEC Blocks Implementation on FPGA." Thesis, Linköpings universitet, Elektroniksystem, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-112743.

Full text
Abstract:
The H.264/AVC (Advanced Video Coding) standard, developed by the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC JTC1 Moving Picture Experts Group (MPEG), is one of the most powerful and commonly used formats for video compression. It is mostly used in internet streaming, i.e. from media servers to end users. This Master thesis aims at designing a CODEC targeting the Baseline profile on FPGA. Uncompressed raw data is fed into the encoder in units of macroblocks of 16×16 pixels. At the decoder side the compressed bit stream is taken and the original frame is restored. Emphasis is put on implementing the CODEC at RTL level and investigating the effect of certain parameters, such as the Quantisation Parameter (QP), on the overall compression of the frame, rather than investigating multiple solutions for a specific block of the CODEC.
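The Quantisation Parameter mentioned above controls compression through the quantizer step size, which in H.264 roughly doubles for every increase of 6 in QP. The small helper below captures that commonly cited relationship; note that the standard's actual quantization is integer arithmetic with scaled multiplier tables, so this floating-point form is only an approximation for illustration.

```python
def qstep(qp):
    """Approximate H.264 quantizer step size: doubles every 6 QP (Qstep(0) ~ 0.625)."""
    return 0.625 * (2.0 ** (qp / 6.0))

def quantize(coeff, qp):
    """Illustrative scalar quantization of one transform coefficient."""
    step = qstep(qp)
    level = int(round(coeff / step))
    return level, level * step      # (transmitted level, reconstructed value)
```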
24

Meng, Bojun. "Efficient intra prediction algorithm in H.264 /." View Abstract or Full-Text, 2003. http://library.ust.hk/cgi/db/thesis.pl?ELEC%202003%20MENG.

Full text
Abstract:
Thesis (M. Phil.)--Hong Kong University of Science and Technology, 2003. Includes bibliographical references (leaves 66-68). Also available in electronic version. Access restricted to campus users.
25

Zheng, Hao. "Analysis of H.264-based Vclan implementation /." free to MU campus, to others for purchase, 2004. http://wwwlib.umi.com/cr/mo/fullcit?p1422980.

Full text
26

Shahid, Muhammad Zafar Javed. "Protection of Scalable Video by Encryption and Watermarking." Thesis, Montpellier 2, 2010. http://www.theses.fr/2010MON20074.

Full text
Abstract:
The field of image and video processing has attracted a great deal of attention over the last two decades. It now covers a vast spectrum of applications such as 3D TV, tele-surveillance, computer vision, medical imaging, compression and transmission. Of particular interest is the revolution witnessed during the first decade of the twenty-first century, in which network bandwidths, memory capacities and computing power increased dramatically. One client may have a 100 Mbps connection whereas another may be using a 56 kbps dial-up modem; one client may have a powerful workstation while another has just a smartphone. In between these extremes there are thousands of clients with varying capabilities and needs. Moreover, a client's preferences adapt to his capacity: a client constrained by bandwidth may be more interested in uninterrupted real-time visualization than in high resolution. To cope with this, scalable video codec architectures have been introduced, following the 'compress once, decompress many ways' paradigm.
Since the DCT lacks multi-resolution functionality, a scalable video architecture is designed to cope with the heterogeneous nature of bandwidth and processing power. With the flood of digital content, which can be easily copied and modified, the need to protect video content has gained importance. Video protection can be realized with the help of three technologies: watermarking for metadata and copyright insertion, encryption to restrict access to authorized persons, and active fingerprinting for traitor tracing. The main idea in our work is to make the protection technology transparent to the user, resulting in a modified video codec capable of encoding and playing a protected bitstream. Since scalable multimedia content has already started to reach the market, algorithms for the independent protection of enhancement layers are also proposed.
27

Feki, Oussama. "Contribution à l'implantation optimisée de l'estimateur de mouvement de la norme H.264 sur plates-formes multi composants par extension de la méthode AAA." Thesis, Paris Est, 2015. http://www.theses.fr/2015PEST1009/document.

Full text
Abstract:
Mixed architectures containing programmable and reconfigurable components can provide the computing performance needed to meet the constraints of real-time applications, but implementing and optimizing such applications on this kind of architecture is a complex and time-consuming task. In this context, we propose a rapid prototyping tool for this type of architecture, based on our extension of the Adéquation Algorithme Architecture (AAA) methodology. The tool automatically performs optimized partitioning and scheduling of the application's operations onto the components of the target architecture and generates the corresponding code. We used this tool to implement the motion estimator of the H.264/AVC standard on an architecture composed of an Altera Nios II processor and a Stratix III FPGA, which allowed us to verify the correct operation of our tool and to validate our automatic generator of mixed code.
28

Kota, Praveen. "Rate-adaptive H.264 for TCP/IP networks." Texas A&M University, 2003. http://hdl.handle.net/1969.1/5741.

Full text
Abstract:
While there has always been a tremendous demand for streaming video over TCP/IP networks, the nature of the application still presents some challenging issues. Applications that transmit multimedia data over best-effort networks like the Internet must cope with changing network behavior; specifically, the source encoder rate should be controlled based on feedback from a channel estimator that probes the network periodically. First, one such Multimedia Streaming TCP-Friendly Protocol (MSTFP) is considered, which iteratively integrates forward estimation of network status with feedback control to closely track the varying network characteristics. Second, a network-adaptive embedded bit stream is generated using a ρ-domain rate controller. The conceptual elegance of the ρ-domain framework stems from the fact that the coding bit rate R(ρ) is approximately linear in ρ, the percentage of zeros among the quantized spatial transform coefficients, as opposed to the more traditional, complex and highly nonlinear R(Q) characterization. Although the ρ-domain model has been successfully implemented in a few other video codecs, its application to the emerging video coding standard H.264 is considered here. Extensive experimental results show robust rate control, similar or improved Peak Signal to Noise Ratio (PSNR), and a faster implementation.
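The linear ρ-domain model referred to above states that the coded bit count is roughly proportional to the fraction of nonzero quantized coefficients, R(ρ) ≈ θ·(1 − ρ). A minimal sketch of how such a model is typically used for rate control is given below; the way θ is updated from the previous frame and the mapping from the target ρ back to a quantizer step are simplified assumptions, not the thesis's controller.

```python
import numpy as np

def fraction_of_zeros(coeffs, qstep):
    """rho: fraction of transform coefficients that quantize to zero at this step size."""
    return np.mean(np.abs(coeffs) < qstep / 2.0)

def estimate_theta(bits_used, rho):
    """Fit the single slope of the linear model R = theta * (1 - rho) from the last frame."""
    return bits_used / max(1e-6, 1.0 - rho)

def choose_qstep(coeffs, target_bits, theta, candidates):
    """Pick the candidate step size whose predicted rate is closest to the bit budget."""
    target_rho = 1.0 - target_bits / theta
    return min(candidates, key=lambda q: abs(fraction_of_zeros(coeffs, q) - target_rho))
```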
29

Selnes, Stian. "Feedback-based Error Control Methods for H.264." Thesis, Norwegian University of Science and Technology, Department of Electronics and Telecommunications, 2007. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-8802.

Full text
Abstract:
Many network-based multimedia applications transmit real-time media over unreliable networks, i.e. data may be lost or corrupted on its route from sender to receiver. Such errors may cause a severe degradation in perceptual quality. It is important to apply techniques that improve the robustness against errors, in order to ensure that the receiver is able to play back the media with the best attainable quality. Today, most ER schemes for video employ proactive error resilient encoding. These schemes add redundant information into the encoded video stream in order to increase the robustness against potential errors; because of this, most proactive schemes suffer from a significant reduction of the coding efficiency. Another approach is to adjust the encoder operations based on feedback information from the decoder, e.g. to repair corrupted regions based on reports of lost data. Feedback-based ER schemes normally improve the coding efficiency compared with proactive schemes; moreover, they adjust rapidly to time-varying network conditions. The objective of this thesis is to develop and evaluate a feedback-based ER scheme conforming to the H.264/AVC standard and applicable to real-time low-delay video applications. The scheme is referred to as FBIR. The performance of FBIR is compared with an existing proactive ER scheme, known as IPLR. Special attention is given to the applied feedback mechanism, RTP/AVPF. RTP/AVPF is a new (2006) feedback protocol. Basically, it specifies two modifications/additions to the RTCP: first, it modifies the timing algorithm to enable early feedback while not exceeding the RTCP bandwidth constraint; second, new RTCP message types are defined, which provide information useful for error control purposes. FBIR employs RTP/AVPF to provide timely feedback of lost packets from the decoder to the encoder. Upon reception of this feedback, the encoder uses a fast error tracking algorithm to locate the erroneous regions. Finally, the regions that are assumed to be visually corrupted after decoding are intra refreshed. IPLR is an ER scheme developed for use in a commercial video communication system; it applies a motion-based intra refresh routine. The comparison is carried out by online simulations with various network environments (0, 1, 3 and 5% loss rate; 50 and 200 ms latency), bit rates (64, 144 and 384 kbit/s) and video sequences. First, the video is encoded and transmitted in real time to the decoder via a network emulator, which generates the desired network characteristics. The receiver decodes the video in real time and transmits feedback information back to the encoder, and the encoder adjusts its encoding process according to this feedback. The H.264/AVC reference software is modified and used as the codec. Finally, objective quality measures are obtained by calculating the PSNR of the decoded videos; in addition, some visual inspection is performed. Isolated measurements on the RTP/AVPF transmission algorithm are also performed. These show that RTP/AVPF is able to provide timely feedback for error control purposes for a great number of applications and network environments. However, the experienced feedback delay may be increased by numerous factors, e.g. the network latency, the packet loss rate, the session bandwidth, and the number of receivers. This may decrease the performance of ER schemes utilizing RTP/AVPF. RTP/AVPF is fairly easy to implement since it only modifies the RTCP timing algorithm and adds new RTCP message types.
RTP/AVPF may be used in combination with other standards in order to extend the available feedback information. Hence, RTP/AVPF enables timely feedback for use in a wide range of multimedia applications. The PSNR measurements show that FBIR always obtains higher objective quality than IPLR for error free transmissions. This does not, however, necessarily affect the perceptual quality if the bit rate is high. FBIR achieves higher PSNR in other situations as well, such as for very low loss rates, low or medium bit rates, and for sequences with high or medium motion activity. Conversely, IPLR performs better for low motion sequences encoded at high bit rates when the loss rate exceeds a certain threshold, typically about 1%. It is also shown that the performance of FBIR may be reduced if the network latency increases. Visually, the main difference between the two schemes is that FBIR recovers all corrupted regions at one instant, while IPLR performs a gradual refresh. The average time before recovery is somewhat shorter for IPLR. The differences between FBIR and IPLR are mainly caused by two factors. First, using FBIR results in less intra coding and thus better coding efficiency. Second, the FBIR scheme does not repair errors until the encoder receives the feedback. Usually, this happens after IPLR has repaired most of the corrupted region. In short, one can say that FBIR provides medium error robustness and high coding efficiency, in contrast to IPLR's high robustness and low coding efficiency. While FBIR's performance may be reduced by network characteristics such as increased latency, IPLR is unaffected by these factors. For error free transmissions, FBIR does not significantly reduce the coding gain compared with a non-robust encoding scheme. Still, it provides a good robustness against corruption in error-prone networks. Thus, all real-time video systems that benefit from immediate feedback should strongly consider to employ FBIR or similar feedback-based ER schemes.</p>
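As a concrete illustration of the objective quality measure used in the evaluation above, the following is a minimal sketch of per-frame luma PSNR for 8-bit samples. The function name and the 99 dB cap for lossless frames are illustrative choices, not details taken from the thesis.

```c
#include <math.h>
#include <stddef.h>

/* Per-frame luma PSNR for 8-bit samples: 10*log10(255^2 / MSE).
 * Returns a conventional cap value when the frames are identical. */
double frame_psnr(const unsigned char *ref, const unsigned char *dec,
                  size_t num_pixels)
{
    double sse = 0.0;
    for (size_t i = 0; i < num_pixels; i++) {
        double d = (double)ref[i] - (double)dec[i];
        sse += d * d;
    }
    if (sse == 0.0)
        return 99.0;                      /* lossless: cap the value */
    double mse = sse / (double)num_pixels;
    return 10.0 * log10(255.0 * 255.0 / mse);
}
```

Averaging such per-frame values over a sequence is the usual way to report the kind of PSNR figures discussed above.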
APA, Harvard, Vancouver, ISO, and other styles
30

Lin, Gong-Sheng, and 林恭生. "Motion estimator design for H.263." Thesis, 1997. http://ndltd.ncl.edu.tw/handle/91386286187249619160.

Full text
Abstract:
Master's thesis, National Taiwan University, Department of Electrical Engineering, academic year 85. In this thesis, we propose a low-hardware-cost motion estimator for the ITU-T Recommendation H.263 standard. The advanced modes of H.263 and the half-pixel precision of motion estimation are considered in this architecture. As the required throughput of a motion estimator for H.263 is not very high, we adopt a linear array to implement a low-cost, high-utilization architecture. The half-pixel precision requirement of H.263, compared with the H.261 standard, is included in this architecture. The advanced modes of the standard, the Advanced Prediction (AP) mode and the PB-frame mode, are also supported. The architecture performs motion estimation for the macroblock and its 8×8 blocks concurrently, which satisfies the requirement of the AP mode. The PB-frame mode provides a bit-rate reduction scheme in H.263, and we implement the motion estimation of the B-picture with the same PE array architecture. The chip consists of two main parts: the IU, which performs integer-pixel motion estimation, and the HU, which performs half-pixel block matching of the candidate blocks. In the IU, normal full-search block matching (FSBM), the AP mode and the PB-frame mode are handled by the same PE array, reducing the total hardware cost to a minimum. We also propose a single-PE architecture to perform the half-pixel motion estimation, together with a modified interpolation unit matched to that PE architecture.
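For reference, the following is a minimal software sketch of the full-search block matching that such a PE array accelerates: the SAD of every candidate within ±range pixels is computed for a 16×16 macroblock. It models only the arithmetic, not the linear-array hardware, half-pixel refinement, or the advanced modes, and all names are illustrative.

```c
#include <limits.h>
#include <stdlib.h>

/* Software model of full-search block matching: finds the motion vector
 * (best_dx, best_dy) minimizing the sum of absolute differences (SAD)
 * between a 16x16 macroblock and candidates within +/-range pixels. */
void full_search_16x16(const unsigned char *cur, const unsigned char *ref,
                       int width, int height, int mb_x, int mb_y, int range,
                       int *best_dx, int *best_dy)
{
    int min_sad = INT_MAX;
    *best_dx = *best_dy = 0;
    for (int dy = -range; dy <= range; dy++) {
        for (int dx = -range; dx <= range; dx++) {
            int rx = mb_x + dx, ry = mb_y + dy;
            if (rx < 0 || ry < 0 || rx + 16 > width || ry + 16 > height)
                continue;                       /* stay inside the frame */
            int sad = 0;
            for (int y = 0; y < 16 && sad < min_sad; y++)
                for (int x = 0; x < 16; x++)
                    sad += abs(cur[(mb_y + y) * width + mb_x + x] -
                               ref[(ry + y) * width + rx + x]);
            if (sad < min_sad) {
                min_sad = sad;
                *best_dx = dx;
                *best_dy = dy;
            }
        }
    }
}
```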
APA, Harvard, Vancouver, ISO, and other styles
31

Lin, Hsueh-Yi, and 林學易. "Analysis and Architecture Design for Low Complexity and High Performance H.263 to H.264 Transcoder." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/54213756587963004636.

Full text
Abstract:
Master's thesis, National Central University, Graduate Institute of Electrical Engineering, academic year 95. Transcoding from H.263 to H.264 has become a rising topic as various applications (such as video telephony and online conferencing) upgrade their systems from H.263 to H.264. Several architectures for low-complexity H.263-to-H.264 transcoding have recently been proposed, but none of them achieves an optimal trade-off between performance and complexity. In this thesis, existing architectures are reviewed with numerical evaluation. Our proposed architecture, modified decoding parameter propagation (MDPP), is based on pixel-domain transcoding with four supported modes; it offers high visual quality and introduces no latency. In the proposed architecture, incoming information is thoroughly reused to minimize computational complexity, and performance is enhanced by a two-level search that avoids mode mismatch while removing seven search iterations. Extensive experimental results show that complexity is reduced by 99% compared with the cascaded pixel-domain transcoder, and a 5~8× speedup is achieved by MDPP over the multi-mode architectures. In terms of video quality, our architecture reaches high performance among the conventional techniques and is believed to be the best compromise between performance and complexity.
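The abstract does not spell out the MDPP algorithm itself, but the general idea of reusing incoming information in a pixel-domain transcoder can be sketched as below, where the motion vector decoded from the H.263 stream seeds a small ±1 refinement in the re-encoder instead of a fresh full search. The callback-style interface and all names are hypothetical.

```c
/* Hypothetical illustration of motion-vector reuse in a pixel-domain
 * transcoder: the H.263 vector decoded for a macroblock seeds a +/-1
 * refinement in the re-encoder. cost() returns the matching cost of a
 * candidate vector; ctx carries whatever frame data the caller needs. */
typedef int (*sad_fn)(int dx, int dy, void *ctx);

void refine_reused_mv(int in_dx, int in_dy, sad_fn cost, void *ctx,
                      int *out_dx, int *out_dy)
{
    int best = cost(in_dx, in_dy, ctx);
    *out_dx = in_dx;
    *out_dy = in_dy;
    for (int dy = -1; dy <= 1; dy++) {
        for (int dx = -1; dx <= 1; dx++) {
            int c = cost(in_dx + dx, in_dy + dy, ctx);
            if (c < best) {                 /* keep the cheapest candidate */
                best = c;
                *out_dx = in_dx + dx;
                *out_dy = in_dy + dy;
            }
        }
    }
}
```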
APA, Harvard, Vancouver, ISO, and other styles
32

Chen, Chang-Hong, and 陳長宏. "Video Telephone and H.263 Coding Techniques." Thesis, 1997. http://ndltd.ncl.edu.tw/handle/30484678664586938087.

Full text
Abstract:
Master's thesis, National Taiwan University, Department of Electrical Engineering, academic year 85. In this thesis, a region-based blurring algorithm to reduce the bit rate in very-low-bit-rate video coding is proposed. The algorithm reduces the bit rate by passing the original background image through a filter before motion estimation. Each original image is first segmented into foreground and background. After segmentation, the background, which is of less importance to human vision, is blurred by a blurring filter, while the foreground is kept unchanged. For head-and-shoulder sequences, the blurring algorithm achieves 5%~20% bit-rate savings depending on the complexity of the sequence. Considering the human vision model, the PSNR degradation in the background is not noticeable. If we use the blurring algorithm but allocate the same bit rate as for the original sequence, about a 0.5~1 dB gain in the foreground can be achieved thanks to bit reallocation. For the segmentation part of the algorithm, a region-growing-based segmentation tailored to the special purpose of blurring is explored: segmentation for blurring has to be fast but does not have to provide fine accuracy, and region growing fulfils this requirement. To further reduce the computation time, a fast search algorithm, such as three-step search, is applied to the segmented background; since the background has already been blurred, the local minima are smoothed out, so the fast search no longer degrades quality noticeably. Besides the algorithm above, a real videophone system prototype based on the algorithm is constructed for testing. The videophone system is built on a PC platform and can achieve 3 or more frames per second.
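A minimal sketch of the background-blurring step is shown below, assuming one foreground flag per 16×16 macroblock and a 3×3 box filter; the thesis does not specify the exact filter, so both the mask layout and the kernel are illustrative assumptions.

```c
/* Region-based background blurring: macroblocks flagged as background in
 * fg_mask (one byte per 16x16 MB, 0 = background) are passed through a
 * 3x3 box filter; foreground macroblocks are copied unchanged.
 * width and height are assumed to be multiples of 16 (e.g. QCIF, CIF). */
void blur_background(const unsigned char *src, unsigned char *dst,
                     int width, int height, const unsigned char *fg_mask)
{
    int mb_cols = width / 16;
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int mb_idx = (y / 16) * mb_cols + (x / 16);
            if (fg_mask[mb_idx]) {              /* foreground: keep as-is */
                dst[y * width + x] = src[y * width + x];
                continue;
            }
            int sum = 0, count = 0;             /* background: 3x3 average */
            for (int dy = -1; dy <= 1; dy++)
                for (int dx = -1; dx <= 1; dx++) {
                    int yy = y + dy, xx = x + dx;
                    if (yy < 0 || yy >= height || xx < 0 || xx >= width)
                        continue;
                    sum += src[yy * width + xx];
                    count++;
                }
            dst[y * width + x] = (unsigned char)(sum / count);
        }
    }
}
```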
APA, Harvard, Vancouver, ISO, and other styles
33

Chen, Yen-Lin, and 陳彥霖. "Error Control for H.263 over Wireless Channels." Thesis, 1997. http://ndltd.ncl.edu.tw/handle/22322118470027973131.

Full text
Abstract:
Master's thesis, National Chiao Tung University, Department of Electronics Engineering, academic year 85. This thesis presents some methods to combat the error effects that arise when transmitting H.263 video sequences over a wireless channel. We use error-resilient entropy coding (EREC) to reduce the error propagation caused by the use of variable-length codewords in the bitstream. Besides, corrupted DC coefficients that cannot be detected by checking the bitstream syntax are concealed by a DC recovery algorithm. We also propose a method to detect and correct corrupted motion vectors, which uses a motion-vector pairing technique in the encoder and a corresponding motion-vector checking and recovery algorithm in the decoder. With these methods, corrupted strips in the images caused by error propagation from loss of synchronization are avoided, and both the annoying "green/pink" block artifacts caused by erroneous DC coefficients and the effects of incorrect motion compensation caused by corrupted motion vectors are alleviated.
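One simple way to realize the DC-recovery idea is to replace a corrupted block's DC coefficient with the average of its correctly decoded four-neighbours, as sketched below; this is an illustration of the concept rather than the thesis's exact algorithm.

```c
/* Simple DC recovery: a corrupted block's DC coefficient is replaced by
 * the average of its available (uncorrupted) 4-neighbours. dc[] holds one
 * DC value per 8x8 block, valid[] marks blocks decoded without error. */
int conceal_dc(int *dc, const unsigned char *valid,
               int blk_cols, int blk_rows, int bx, int by)
{
    static const int off[4][2] = { {-1, 0}, {1, 0}, {0, -1}, {0, 1} };
    int sum = 0, n = 0;
    for (int k = 0; k < 4; k++) {
        int nx = bx + off[k][0], ny = by + off[k][1];
        if (nx < 0 || ny < 0 || nx >= blk_cols || ny >= blk_rows)
            continue;
        if (!valid[ny * blk_cols + nx])
            continue;
        sum += dc[ny * blk_cols + nx];
        n++;
    }
    if (n > 0)
        dc[by * blk_cols + bx] = sum / n;
    return n;                    /* number of neighbours actually used */
}
```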
APA, Harvard, Vancouver, ISO, and other styles
34

Hung, Shao-Hua, and 洪紹華. "A Study of H.263+ Video Coding Standard." Thesis, 2000. http://ndltd.ncl.edu.tw/handle/21316353922939415094.

Full text
Abstract:
Master's thesis, National Taiwan University, Graduate Institute of Computer Science and Information Engineering, academic year 88. H.263+ is the most recently finalized video standard in the H.26x series and focuses on low-bit-rate video applications. Network architectures around the world are developing rapidly, and many commercial products, such as mobile phones, personal digital assistants and other information appliances, are widespread. People demand more and better multimedia information, and H.263+ is well suited to video applications under low-bit-rate constraints. In this thesis, we study and discuss the H.263+ video coding standard with a focus on its optional modes. We also implement a software decoder that supports baseline decoding and seven effective optional modes. Moreover, a demonstration system of H.263+ has been implemented with which users can observe the effect of using the optional modes. We hope that readers will be attracted to H.263+ and believe that it can make multimedia applications richer.
APA, Harvard, Vancouver, ISO, and other styles
35

Woo, Mon-Long, and 吳孟隆. "Real-Time Implementation of H.263+ Using TI TMS320C62xx." Thesis, 2000. http://ndltd.ncl.edu.tw/handle/29625654983696583472.

Full text
Abstract:
Master's thesis, National Chiao Tung University, Department of Electronics Engineering, academic year 88. With the advancement of digital signal processing, real-time video transmission will become an essential element of our daily life. In this thesis, we implement a real-time H.263+ codec using a digital signal processor (DSP). In order to achieve this goal, we need to replace a few slow blocks in the original C programs; furthermore, the C programs are modified to take advantage of the DSP architecture and its C compiler features. We first give a brief introduction to the ITU-T video compression standard H.263+, which produces reasonable-quality videophone pictures at bit rates around 40 kbps. Then we briefly describe the Texas Instruments digital signal processor TMS320C62xx, a powerful processor with fixed-point arithmetic, which is used in our implementation. We start with the simulation software tmn 2.0 provided by Telenor Research as the initial template and then modify it to increase its speed. We use the diamond search, which is included in tmn 3.1.1 (a software encoder offered by the University of British Columbia), to replace the original full-search scheme, and a fixed-point decimation-in-frequency DCT algorithm to replace the floating-point DCT block in tmn 2.0. These two fast algorithms greatly reduce the computational complexity of the entire system. We further refine our code by taking into account the features of the TMS320C62xx and its C compiler to produce a more efficient program. Overall, we save 95% of the computation load for intra-frame coding and 97% for inter-frame coding. Our encoder can handle 69 intra frames or 31 inter frames per second for sub-QCIF pictures, so the entire system can process about 24 frames per second using only one TI processor for both encoding and decoding.
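The diamond search adopted from tmn 3.1.1 can be sketched as follows: a large diamond pattern is repeated until its centre point wins, after which a small diamond refines the vector. The cost callback is a placeholder for a SAD routine, and search-range clamping and early-termination details are omitted.

```c
#include <limits.h>

/* Simplified diamond search: iterate the large diamond (LDSP) until the
 * centre is the best point, then refine with the small diamond (SDSP).
 * cost() returns the SAD of a candidate motion vector. */
typedef int (*mv_cost_fn)(int dx, int dy, void *ctx);

void diamond_search(mv_cost_fn cost, void *ctx, int *mv_x, int *mv_y)
{
    static const int ldsp[9][2] = { {0,0}, {0,-2}, {0,2}, {-2,0}, {2,0},
                                    {-1,-1}, {1,-1}, {-1,1}, {1,1} };
    static const int sdsp[5][2] = { {0,0}, {0,-1}, {0,1}, {-1,0}, {1,0} };
    int cx = 0, cy = 0;

    for (;;) {                                   /* large diamond steps */
        int best = INT_MAX, bx = cx, by = cy;
        for (int k = 0; k < 9; k++) {
            int c = cost(cx + ldsp[k][0], cy + ldsp[k][1], ctx);
            if (c < best) {
                best = c;
                bx = cx + ldsp[k][0];
                by = cy + ldsp[k][1];
            }
        }
        if (bx == cx && by == cy)
            break;                               /* centre wins: stop */
        cx = bx;
        cy = by;
    }
    int best = INT_MAX;                          /* small diamond refinement */
    *mv_x = cx;
    *mv_y = cy;
    for (int k = 0; k < 5; k++) {
        int c = cost(cx + sdsp[k][0], cy + sdsp[k][1], ctx);
        if (c < best) {
            best = c;
            *mv_x = cx + sdsp[k][0];
            *mv_y = cy + sdsp[k][1];
        }
    }
}
```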
APA, Harvard, Vancouver, ISO, and other styles
36

林垂慶. "Huffman Codec Design Based H.263+ Video Encryption Algorithms." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/56514332526578049996.

Full text
Abstract:
Master's thesis, National Chi Nan University, Department of Computer Science and Information Engineering, academic year 92. With the advancement of science and technology, the transmission of digital video data, such as video conferencing and online pay-TV, becomes more and more popular, and such content may be grabbed by a hacker. Hence, in order to prevent data piracy and plagiarism, the encryption of multimedia data becomes an extremely important issue. In this thesis, we propose an efficient encryption framework that does not seriously affect the coding speed or compression efficiency of the original codec: applying bit-scrambling techniques to the data in the time domain or the frequency domain seriously degrades codec performance, and the first compression-domain encryption algorithm proposed in [2] is also inefficient when the coded bitstream contains a large number of motion-vector codewords. We therefore propose a lightweight encryption framework based on modifying the Huffman tables, and it has been implemented and verified to be efficient when embedded in an H.263+ codec. To construct an encryption system that keeps fast coding speed and good compression efficiency, we first scramble the fixed-length code (FLC) tables and variable-length code (VLC) tables using the splay-tree algorithm. Next, we use a chaotic algorithm to produce the secret key promptly and confidentially. Furthermore, different security configurations can be achieved by easily adapting our system.
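The effect of keying the code tables can be illustrated with a generic key-seeded shuffle of the symbol-to-codeword mapping, sketched below; the thesis derives its permutation from splay-tree operations and a chaotic key generator, which are not reproduced here. In practice only codewords of equal length (or the entries of an FLC table) can be swapped without hurting decodability or compression efficiency.

```c
#include <stdint.h>

/* Generic illustration of table scrambling: a key-seeded PRNG drives a
 * Fisher-Yates shuffle of the symbol-to-codeword mapping. This only shows
 * the effect of keying a Huffman/FLC table, not the thesis's construction. */
static uint32_t xorshift32(uint32_t *s)          /* tiny keyed PRNG */
{
    uint32_t x = *s;
    x ^= x << 13;
    x ^= x >> 17;
    x ^= x << 5;
    return *s = x;
}

void scramble_table(int *symbol_to_code, int table_size, uint32_t key)
{
    uint32_t state = key ? key : 1u;             /* avoid the all-zero state */
    for (int i = table_size - 1; i > 0; i--) {
        int j = (int)(xorshift32(&state) % (uint32_t)(i + 1));
        int tmp = symbol_to_code[i];
        symbol_to_code[i] = symbol_to_code[j];
        symbol_to_code[j] = tmp;
    }
}
```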
APA, Harvard, Vancouver, ISO, and other styles
37

Huang, Chin-Iung, and 黃智宏. "Error Detection and Concealment of H.263 Video Sequence." Thesis, 1998. http://ndltd.ncl.edu.tw/handle/15290902913427451739.

Full text
Abstract:
Master's thesis, National Taiwan University, Graduate Institute of Computer Science and Information Engineering, academic year 86. H.263 is a highly efficient low-bit-rate video compression standard and is very suitable for video compression over the low-bandwidth Public Switched Telephone Network (PSTN). However, high-ratio compression usually relies on lossy algorithms, and the highly compressed bitstream becomes very sensitive to transmission errors: even a single erroneous bit, if not handled properly, may cause unpredictable effects in the subsequent decoding process. These chained effects are called residual errors; if they are not eliminated, the output image quality is seriously distorted, a phenomenon known as error propagation. How to detect and recover from errors under low-bit-rate transmission constraints therefore becomes a key issue. Since video services usually run in real-time environments that demand low transmission delay for smooth and continuous audio-visual playback, the error detection and recovery system is designed to favour decoding speed and efficiency at a slight cost in output quality. Traditional error protection based on error-correction coding is abandoned because of its transmission redundancy, and fast forward error repair is chosen instead; since forward repair alone cannot reach the desired quality, retransmission is still used to control output quality, but its frequency is kept as low as possible. Errors are thus handled hierarchically: severe errors are recovered with complete methods such as retransmission or I-frame refreshing, while minor errors are handled with simple, fast concealment (temporal or spatial error concealment) or even ignored. This strategy, called unequal error protection, reduces the efficiency loss caused by excessive retransmission and can dynamically adjust the error-handling method according to the operating environment. The experimental results show that this approach is effective for error recovery in terms of both decoding speed and image quality. Finally, we implement an H.263 decoder with error detection and concealment capabilities which, besides standard decoding and playback, strengthens the error recovery part so that acceptable output quality can be maintained when transmission errors occur in real transmission environments.
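Of the concealment options mentioned above, the simplest is zero-motion temporal concealment, sketched below: a macroblock flagged as corrupted is replaced by the co-located block of the previously decoded frame. The interface is illustrative.

```c
/* Zero-motion temporal concealment: a corrupted 16x16 macroblock at
 * (mb_x, mb_y) is replaced by the co-located block of the previous frame. */
void conceal_mb_temporal(unsigned char *cur, const unsigned char *prev,
                         int width, int mb_x, int mb_y)
{
    for (int y = 0; y < 16; y++)
        for (int x = 0; x < 16; x++)
            cur[(mb_y + y) * width + mb_x + x] =
                prev[(mb_y + y) * width + mb_x + x];
}
```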
APA, Harvard, Vancouver, ISO, and other styles
38

Liao, Chun-Chieh, and 廖俊傑. "Rate Control and Error Concealment of H.263 Video Codec." Thesis, 1997. http://ndltd.ncl.edu.tw/handle/36852778558047819717.

Full text
Abstract:
Master's thesis, National Central University, Department of Electrical Engineering, academic year 85. H.263 is a very-low-bit-rate video coding standard that has been used with modem and wireless transmission. Although the video coding output is naturally variable bit rate (VBR), the channel delivering the video stream is often constant bit rate (CBR); thus, rate control that converts the VBR video into a CBR stream is necessary. We present an adaptive rate control algorithm that achieves CBR by adjusting the quantization step size, and experimental results show that the algorithm can effectively maintain CBR. In addition, the error propagation caused by transmission errors may seriously deteriorate the video quality. We present an efficient error concealment scheme with a macroblock interleaving technique for H.263 video: at the transmitter, macroblocks of I-pictures and P-pictures are interleaved to preserve adequate information for efficient error concealment; at the receiver, de-interleaving and error concealment techniques are utilized to reduce the error damage. Our simulations reveal satisfactory performance.
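The feedback loop behind buffer-based quantizer adaptation can be sketched as follows; the ±10% dead zone and the one-step QP change are illustrative assumptions rather than the thesis's actual algorithm, although the H.263 quantizer range of 1 to 31 is standard.

```c
/* Hypothetical buffer-based quantizer update: the quantization parameter
 * is nudged up when the encoder buffer runs above its target occupancy
 * and down when it runs below, keeping the output close to the channel
 * rate. */
int update_qp(int qp, long buffer_bits, long target_bits)
{
    if (buffer_bits > target_bits + target_bits / 10)
        qp++;                                /* buffer too full: coarser */
    else if (buffer_bits < target_bits - target_bits / 10)
        qp--;                                /* buffer draining: finer   */
    if (qp < 1)  qp = 1;                     /* H.263 QP range is 1..31  */
    if (qp > 31) qp = 31;
    return qp;
}
```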
APA, Harvard, Vancouver, ISO, and other styles
39

Liao, Zun-Jie, and 廖俊傑. "Rate Control and Error Concealment of H.263 Video Codec." Thesis, 1997. http://ndltd.ncl.edu.tw/handle/12474199285445391581.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Lin, PeiChi, and 林沛其. "An H.263-Based Multipoint Continuous Presence Video Conferencing System." Thesis, 1999. http://ndltd.ncl.edu.tw/handle/74097676999861654878.

Full text
Abstract:
Master's thesis, National Taiwan University, Graduate Institute of Computer Science and Information Engineering, academic year 87. Video conferencing systems are becoming more and more popular, but current systems only support point-to-point communication. The system presented in this thesis, based on H.263, can accommodate up to six users. Adopting continuous presence instead of switched presence ensures that every user remains visible. The video combining system merges several video bitstreams in the DCT domain. To implement this system, we must consider frame synchronization, the allocation of the group of blocks (GOB), motion compensation, quantizer setting and the accumulation of delay. We remap the temporal reference to synchronize frames, which is proposed for the combiner. After allocating the GOBs, motion vectors must be recomputed and the quantizer set. Further, we fully utilize the output bandwidth to limit the accumulated delay to a constant value. Finally, this thesis demonstrates that when all of the GOBs have no header, fewer macroblock motion vectors need to be recomputed.
APA, Harvard, Vancouver, ISO, and other styles
41

Dai, Jr-Liang, and 戴至良. "An Implementation of the ITU-T H.263 Coding Systems." Thesis, 1999. http://ndltd.ncl.edu.tw/handle/69160651254997151266.

Full text
Abstract:
Master's thesis, National Taiwan University, Graduate Institute of Computer Science and Information Engineering, academic year 87. Recently, computer hardware and Internet technologies have developed rapidly: their prices keep falling while their functionality and quality keep improving. A direct consequence is that some communication products that originally required expensive hardware can nowadays be realized using only PCs and a few peripheral devices; moreover, their cost and flexibility can be even better than those of pure hardware products. Facing large video data volumes and narrow network bandwidth, effective compression of digital video currently provides a feasible solution, so that products such as video conferencing and video telephony can be realized. In order to provide a common framework for all kinds of communication products, the ITU has developed two video coding standards, H.261 and H.263, for low-bit-rate communication applications. This thesis focuses on the implementation issues of the client of a multipoint video conferencing system based on the H.263 video coding standard and the WinSock network programming technology.
APA, Harvard, Vancouver, ISO, and other styles
42

Pai, Tung-Hsuan, and 白東玄. "Efficient Algorithm for Robust H.263+ TransmissionUsing Multiple-PBDBS Approach." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/82992427904291072367.

Full text
Abstract:
Master's thesis, National Taiwan University of Science and Technology, Department of Computer Science and Information Engineering, academic year 93. Recently, Gao and Tu presented an efficient algorithm for robust H.263+ video transmission using the partial backward decodable bit stream (PBDBS) approach. In this thesis, we first present a multiple-PBDBS (MPBDBS) approach to improve on the previous PBDBS approach. Next, a mathematical theory is provided to minimize the error-propagation length in each group of blocks (GOB). Further, a novel MPBDBS-based algorithm is presented for robust video transmission. Experimental results demonstrate that the proposed MPBDBS-based algorithm achieves better image quality than the previous PBDBS-based algorithm, with only a little bit-rate degradation.
APA, Harvard, Vancouver, ISO, and other styles
43

Tu, Ming-Chou, and 涂銘洲. "Error Detection of H.263 Video Data Transmitted in Bluetooth Environment." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/25875976217263431035.

Full text
Abstract:
Master's thesis, Chung Yuan Christian University, Graduate Institute of Electronic Engineering, academic year 90. Nowadays Bluetooth (BT) is one of the major standards for short-range wireless communications. Video transmission techniques with BT modules can be used in PDAs, notebooks and other wireless equipment with display devices. The H.263 video compression standard mainly targets the transmission of video data over low-bandwidth channels; it fits the BT transmission rate, and the supported frame sizes are suitable for the equipment described above. In this thesis we discuss the error-detection problem for video of different quality transmitted in BT packet formats under AWGN noise and Rayleigh fading. The video data are first compressed using the H.263 compression technique, then packetized in BT formats and modulated before being transmitted over a wireless environment. In the decoding process at the receiving end, the video is decoded through BT and H.263, and the erroneous regions are detected using the error-detection method proposed in this thesis. Good error-detection results greatly facilitate the subsequent error-concealment step and the improvement of video quality. Simulation results show that using error-protected or unprotected BT packets and different signal-to-noise ratios results in different levels of video degradation. Finally, we use the error-detection technique to locate the blocks in error. The results show that the average error-detection rate stays above 85% for the background-complicated Salesman sequence at SNR = 16 dB, while all average false-detection rates are kept below 15%.
APA, Harvard, Vancouver, ISO, and other styles
44

Wu, Jiun-Rung, and 吳俊榮. "DSP-Based Realtime H.263 Encoding/Decoding and Transportation for Videoconferencing." Thesis, 2000. http://ndltd.ncl.edu.tw/handle/03101823387571628270.

Full text
Abstract:
Master's thesis, National Chiao Tung University, Department of Electronics Engineering, academic year 88. With the significant advances in desktop CPUs and Internet technology, real-time multimedia applications such as videoconferencing have become feasible and attract much attention. In this thesis, we propose a collaborative structure to implement a simplified and scalable conferencing system. A desktop PC acts as the central controller, and a DSP-embedded card is employed as an external, flexible, efficient and scalable computation resource to perform real-time video encoding. The mechanism and implementation of the cooperation between the PC and the DSP are discussed. The input of the conferencing system is real-time captured video frames, and the encoded frames are sent to clients over an IP network; we discuss the implementation of the input and output of the conferencing system. Moreover, in order to further enhance system efficiency, a multithreaded design is adopted in the central control block and is responsible for constructing the system pipeline. We discuss the design methodology of this software pipelining, and experimental results demonstrate its ability to improve system performance.
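How a capture thread and an encode/dispatch thread might be pipelined can be illustrated with a minimal bounded frame queue. The thesis targets a Windows PC driving a DSP card and does not disclose its exact thread layout, so the POSIX-threads sketch below, its queue depth and its names are all illustrative.

```c
#include <pthread.h>

/* Minimal bounded frame queue shared by a capture thread (producer) and
 * an encode/dispatch thread (consumer) to pipeline the system. */
#define QUEUE_SLOTS 4

typedef struct {
    void           *frames[QUEUE_SLOTS];
    int             head, tail, count;
    pthread_mutex_t lock;
    pthread_cond_t  not_empty, not_full;
} frame_queue;

void fq_init(frame_queue *q)
{
    q->head = q->tail = q->count = 0;
    pthread_mutex_init(&q->lock, NULL);
    pthread_cond_init(&q->not_empty, NULL);
    pthread_cond_init(&q->not_full, NULL);
}

void fq_push(frame_queue *q, void *frame)       /* called by capture thread */
{
    pthread_mutex_lock(&q->lock);
    while (q->count == QUEUE_SLOTS)             /* block when pipeline full */
        pthread_cond_wait(&q->not_full, &q->lock);
    q->frames[q->tail] = frame;
    q->tail = (q->tail + 1) % QUEUE_SLOTS;
    q->count++;
    pthread_cond_signal(&q->not_empty);
    pthread_mutex_unlock(&q->lock);
}

void *fq_pop(frame_queue *q)                    /* called by encode thread  */
{
    pthread_mutex_lock(&q->lock);
    while (q->count == 0)                       /* block when nothing to do */
        pthread_cond_wait(&q->not_empty, &q->lock);
    void *frame = q->frames[q->head];
    q->head = (q->head + 1) % QUEUE_SLOTS;
    q->count--;
    pthread_cond_signal(&q->not_full);
    pthread_mutex_unlock(&q->lock);
    return frame;
}
```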
APA, Harvard, Vancouver, ISO, and other styles
45

Chi-Feng, Ku, and 辜啟峰. "A Study of ITU-T H.263 And Motion Estimation Algorithms." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/55399666810386445694.

Full text
Abstract:
Master's thesis, Tatung University, Graduate Institute of Communication Engineering, academic year 90. Telecommunication services already provide speech and data transmission widely, and video communication will be the next stage. However, video transmission needs enormous bandwidth, so a powerful compression method is required. For this requirement, the ITU-T approved H.263, the video coding standard for low-bit-rate communication. In this thesis, we first focus on the compression algorithms of the H.263 video coding standard and then investigate improvements to motion estimation. Motion estimation has the highest computational cost, and many fast search algorithms have been proposed to reduce it; although they reduce the computation, they lose video quality. Hence, we propose a hybrid method combining human visual tendency with full search to improve quality. Simulation results show that the proposed method indeed reduces the computation while improving video quality.
APA, Harvard, Vancouver, ISO, and other styles
46

Shen, Jun-Fu, and 沈俊甫. "Low Power Full Search Motion Estimation Chip Design for H.263+." Thesis, 1998. http://ndltd.ncl.edu.tw/handle/19949969575990916990.

Full text
Abstract:
Master's thesis, National Taiwan University, Department of Electrical Engineering, academic year 86. In most video compression standards, such as H.261, H.263, MPEG-1 and MPEG-2, the largest share of the computation is spent on motion estimation; for that reason, it also consumes the most power. Recently, many video compression standards have been applied to battery-powered wireless portable communication devices, and in order to extend the operating time between battery recharges it is necessary to develop motion estimation architectures with lower power consumption. In this thesis, a low-power motion estimation chip for H.263+ is proposed. Features of H.263+ such as half-pixel precision and some advanced modes (Advanced Prediction mode, PB-frame mode and Reduced Resolution Update mode) are taken into consideration. Because of the high picture quality and hardware regularity of the full-search block-matching motion estimation algorithm, the proposed architecture is based on that algorithm. Unlike most previously proposed motion estimation chips, this architecture can handle different block sizes and search ranges in a single chip without additional clock-cycle latency. To achieve low power consumption, the chip uses two supply voltages, 2.5 V and 5 V; we designed low-power registers, adders and other logic circuits that work correctly at 2.5 V and consume less power. A low-power processing element and a half-pixel generator were designed to avoid spending additional power in circuits that are idle. The chip was implemented in TSMC 0.6 um single-poly triple-metal CMOS technology. The operating frequency is set at 60 MHz to meet the real-time processing requirement of the RRU mode. The power consumption is 423.8 mW at 60 MHz measured with an IMS200 tester, and the maximum throughput is 36 CIF frames per second at 60 MHz.
APA, Harvard, Vancouver, ISO, and other styles
47

Chen, Shi-Wei, and 陳世偉. "Realtime Implementation of H.263 Video Codec on Digital Signal Processor." Thesis, 1999. http://ndltd.ncl.edu.tw/handle/29519993680730963897.

Full text
Abstract:
Master's thesis, National Chiao Tung University, Department of Electronics Engineering, academic year 87. The processing of digital video signals is gaining importance in our daily life. In this thesis, we use a digital signal processor to process video signals and to achieve real-time operation. First, we give a brief introduction to the video communication standard H.263; all the work reported here is based on this standard. Texas Instruments' TMS320C62xx is a powerful signal processor that is strongly marketed by TI; we choose this processor to take advantage of its special signal-processing functionality and thereby realize real-time video coding and decoding. We use Telenor Research's public-domain software tmn 2.0 and modify its architecture to speed up the processing, and we add some improvements from tmn 3.1.1, namely the diamond search for motion estimation. This thesis discusses the features of the TMS320C62xx, how to modify the H.263 codec program into a highly efficient one, and the results of such modification.
APA, Harvard, Vancouver, ISO, and other styles
48

Hsieh, Hung Chih, and 謝宏志. "Error Resilience and Concealment of H.263+ Codec Design for Mobile Channel." Thesis, 2000. http://ndltd.ncl.edu.tw/handle/21844313069873242830.

Full text
Abstract:
Master's thesis, National Chung Cheng University, Graduate Institute of Electrical Engineering, academic year 88. The development of wired networks has made data transmission more convenient and swift, but it cannot satisfy portable and mobile devices. Recently, the bulk of transmitted data has shifted from ordinary data to multimedia carried over wireless channels, as a result of the vast development of wireless communication and the growing demand for multimedia. The video data rate is higher than that of other media, so compression technologies such as MPEG and H.26x, which transform the raw data into motion-vector parameters and DCT coefficients, are needed. However, as the video data are highly compressed, the effect of error propagation becomes more serious. Wireless channels are generally characterized by high bit error rates, which motivates the development of error-resilient video coding methods. In this thesis, we propose a scheme based on the permutation of the MBs in a GOB for both I- and P-frames. The "importance" of each MB is defined based on its motion properties: important MBs are encoded first in a GOB so that they are less likely to be affected when random or fading channel errors occur. Besides, we combine a data-embedding concept in the encoder with error concealment in the decoder so that MBs with larger MVs can be recovered more accurately. A fading-channel environment with different levels of FEC protection is also simulated to evaluate the performance. Experiments show that our coding method achieves higher PSNR than classical ones: the first method improves the average PSNR by 0.4~0.5 dB for the P-frames of the test sequences (Foreman and Carphone) under the premise that the previous frame is error-free, and when the effect of error propagation is considered, the gain exceeds 5 dB. With data embedding, some reconstructed regions become closer to the original ones. Therefore, the two proposed methods are effective for error resilience and concealment.
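The importance-driven ordering can be illustrated by sorting the macroblocks of a GOB by motion-vector magnitude before they are encoded, as sketched below; the thesis's importance measure may weigh additional factors, so the criterion used here is an assumption.

```c
#include <stdlib.h>

/* Order macroblocks by "importance" before they are written into a GOB.
 * Here importance is simply the motion-vector magnitude |dx| + |dy|, so
 * MBs with larger motion are emitted first. */
typedef struct { int index; int mv_x, mv_y; } mb_info;

static int by_motion_desc(const void *a, const void *b)
{
    const mb_info *ma = (const mb_info *)a, *mb = (const mb_info *)b;
    int sa = abs(ma->mv_x) + abs(ma->mv_y);
    int sb = abs(mb->mv_x) + abs(mb->mv_y);
    return sb - sa;                              /* descending importance */
}

void order_mbs_by_importance(mb_info *mbs, int num_mbs)
{
    qsort(mbs, (size_t)num_mbs, sizeof(mb_info), by_motion_desc);
}
```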
APA, Harvard, Vancouver, ISO, and other styles
49

Tu, Tzong-Shiann, and 涂宗憲. "A Study on Error-Resilience Techniques for Wireless Transmission of H.263 Video." Thesis, 2000. http://ndltd.ncl.edu.tw/handle/22125580275102229126.

Full text
Abstract:
Master's thesis, National Chiao Tung University, Department of Electronics Engineering, academic year 88. In this thesis, we discuss the error effects that occur when transmitting video sequences over a wireless channel and review some methods proposed in recent years to combat these errors. We also implement some error-concealment techniques, considering them at two levels, namely the multiplexing level and the source-codec level. At the multiplexing level, we first discuss the error effects that result from variable-length coding; by analysing the error sensitivity of the H.263 syntax arrangement, we use a new transmission order to alleviate the effects of synchronization loss. We then consider a technique called slotted multiplexing (SM), which places the variable-length blocks into fixed-length slots to shorten the time before the synchronization codeword is regained. At the source-codec level, we perform motion-vector pairing in the encoder and a corresponding motion-vector checking and recovery in the decoder; with these methods, corrupted strips in the images caused by error propagation from loss of synchronization are avoided. Finally, we apply some post-processing in the decoder: using the maximal-smoothness property, which is natural in most images, we find the erroneous blocks and then conceal them by minimizing the variation between them and the adjacent blocks so that the repaired images are as smooth as possible. The effectiveness of the implemented techniques is examined by computer simulations; some of them work well, while some are not as good as expected, and the reasons are discussed.
APA, Harvard, Vancouver, ISO, and other styles
50

Shih, Kuei-tsung, and 施圭聰. "Real-time Implementation of Channel Coding and H.263+ Coder Using TI TMS320C62xx." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/56529725206256081196.

Full text
Abstract:
Master's thesis, National Chiao Tung University, Department of Electronics Engineering, academic year 89. Over a noisy wireless channel, channel coding plays an essential role in protecting the transmitted data. In this thesis, we evaluate the performance of the channel coding specified in 3GPP and complete a real-time implementation on a digital signal processor (DSP). Furthermore, we revise our previous implementation of the H.263+ video coder to improve the quality of the encoded pictures. Both implementations use the TI TMS320C62xx, a powerful processor with fixed-point arithmetic. We briefly introduce the two channel coding schemes in 3GPP, convolutional coding and turbo coding, and adopt the former in our implementation because it is less complex. To evaluate the capability of the convolutional codes, we use two channel error models: the additive white Gaussian noise (AWGN) channel and the Rayleigh fading channel (Gilbert model). For decoding the convolutional codes, we use the truncated Viterbi algorithm. We design a function flow first, then verify the functionality by building an ANSI C program, and finally refine our code by exploiting the DSP architecture and computation power to produce an efficient program. Overall, our convolutional encoder can handle more than 1 Mbps, while our Viterbi decoder can handle only 9.57 kbps. We also give a short introduction to H.263+. In our previous implementation, we used the diamond search and a fixed-point DIF DCT to reduce the computational complexity of the entire codec; rounding errors due to that DCT reduce the quality of the encoded pictures. We therefore try the fixed-point DCT defined in H.263+ Annex W as an alternative and use the methodology of H.263+ Annex A to compare the accuracy of the two DCT algorithms. Although adopting the Annex W DCT provides better picture quality, the video coder suffers a speed degradation: the entire system, using one DSP for both encoding and decoding, processes only 9 frames per second instead of the original 15.
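A sketch of the encoder side of the adopted convolutional code is given below, assuming the UMTS rate-1/2, constraint-length-9 code with octal generators 561 and 753; tail bits, rate matching and the Viterbi decoder are omitted, and the shift-register bit ordering follows one common convention.

```c
#include <stdint.h>
#include <stddef.h>

/* Rate-1/2 convolutional encoder sketch, constraint length 9, octal
 * generators 561 and 753. Each input bit produces two output bits. */
#define G0 0561   /* octal generator polynomial */
#define G1 0753

static int parity(uint32_t x)                  /* XOR of all bits in x */
{
    x ^= x >> 16; x ^= x >> 8; x ^= x >> 4; x ^= x >> 2; x ^= x >> 1;
    return (int)(x & 1u);
}

/* in[]: one bit per byte (0 or 1); out[] must hold 2*len bytes. */
void conv_encode_r12(const uint8_t *in, size_t len, uint8_t *out)
{
    uint32_t reg = 0;                          /* 9-bit shift register */
    for (size_t i = 0; i < len; i++) {
        reg = ((reg << 1) | (in[i] & 1u)) & 0x1FFu;
        out[2 * i]     = (uint8_t)parity(reg & G0);
        out[2 * i + 1] = (uint8_t)parity(reg & G1);
    }
}
```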
APA, Harvard, Vancouver, ISO, and other styles