To see the other types of publications on this topic, follow the link: MPEG video codecs.

Dissertations / Theses on the topic 'MPEG video codecs'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 44 dissertations / theses for your research on the topic 'MPEG video codecs.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Ejembi, Oche Omobamibo. "Enabling energy-awareness for internet video." Thesis, University of St Andrews, 2016. http://hdl.handle.net/10023/9768.

Full text
Abstract:
Continuous improvements to the state of the art have made it easier to create, send and receive vast quantities of video over the Internet. Catalysed by these developments, video is now the largest and fastest growing type of traffic on modern IP networks. In 2015, video was responsible for 70% of all traffic on the Internet, with a compound annual growth rate of 27%. At the same time, concerns about the growing energy consumption of ICT in general continue to rise. It is not surprising that there is a significant energy cost associated with these extensive video usage patterns. In this thesis, I examine the energy consumption of typical video configurations during decoding (playback) and encoding through empirical measurements on an experimental test-bed. I then make extrapolations to a global scale to show the opportunity for significant energy savings, achievable by simple modifications to these video configurations. Based on insights gained from these measurements, I propose a novel, energy-aware Quality of Experience (QoE) metric for digital video: the Energy-Video Quality Index (EnVI). Then, I present and evaluate vEQ-benchmark, a benchmarking and measurement tool for the purpose of generating EnVI scores. The tool enables fine-grained resource-usage analyses on video playback systems, and facilitates the creation of statistical models of power usage for these systems. I propose GreenDASH, an energy-aware extension of the existing Dynamic Adaptive Streaming over HTTP (DASH) standard. GreenDASH incorporates relevant energy-usage and video quality information into the existing standard. It could enable dynamic, energy-aware adaptation for video in response to energy usage and users' 'green' preferences. I also evaluate the subjective perception of such energy-aware, adaptive video streaming by means of a user study featuring 36 participants. I examine how video may be adapted to save energy without a significant impact on the Quality of Experience of these users. In summary, this thesis highlights the significant opportunities for energy savings if Internet users gain an awareness of their energy usage, and presents a technical discussion of how this can be achieved by straightforward extensions to the current state of the art.
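As a very rough illustration of the kind of energy-aware adaptation GreenDASH points towards, the sketch below picks a DASH representation by trading estimated video quality against estimated decode power under a user 'green' preference. All names, power figures and the weighting are illustrative assumptions, not values or logic from the thesis.

```python
# Hypothetical sketch of energy-aware DASH adaptation in the spirit of GreenDASH.
# Representation names, power figures and the weighting factor are made-up assumptions.

REPRESENTATIONS = [
    # (name, bitrate_kbps, estimated_decode_power_watts, quality_score)
    ("240p", 400, 1.2, 0.55),
    ("480p", 1200, 1.8, 0.75),
    ("720p", 2500, 2.6, 0.88),
    ("1080p", 5000, 3.9, 0.95),
]

def choose_representation(available_kbps, green_preference):
    """Pick the representation with the best quality-per-energy trade-off.

    green_preference in [0, 1]: 0 ignores energy, 1 weights it heavily.
    """
    feasible = [r for r in REPRESENTATIONS if r[1] <= available_kbps]
    if not feasible:
        return REPRESENTATIONS[0]          # fall back to the lowest layer
    def score(rep):
        _, _, power, quality = rep
        return quality - green_preference * (power / 4.0)   # 4.0 W: assumed maximum power
    return max(feasible, key=score)

print(choose_representation(available_kbps=3000, green_preference=0.7))
```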
APA, Harvard, Vancouver, ISO, and other styles
2

Ameen, Hashim Farhan, Eid Jamal Al, and Abdulkhaliq Al-Salem. "Comparing of Real-Time Properties in Networks Based On IPv6 and IPv4." Thesis, Högskolan i Halmstad, Sektionen för Informationsvetenskap, Data– och Elektroteknik (IDE), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-21535.

Full text
Abstract:
Real-time applications over IP networks have become widely used in different fields: social video conferencing, online educational lectures, industrial and military applications, and online robotic medical surgery. Online medical surgery over IP networks has experienced rapid growth in the last few years, primarily due to advances in technology (e.g., increased bandwidth; new cameras, monitors, and coder/decoders (CODECs)) and changes in the medical care environment (e.g., increased outpatient care, remote surgeries). The purpose of this study was to examine and analyze the impact of IP network parameters (delay, jitter, throughput, and packet drop) on the performance of real-time medical surgery videos sent across different IP networks (native IPv6, native IPv4, and the 6to4 and 6in4 tunneling transition mechanisms) and to compare the behavior of video packets over these networks. The impact of each parameter is examined by using the video codecs MPEG-1, MPEG-2, and MPEG-4. The study was carried out in two main parts, theoretical and practical. The theoretical part focused on the calculation of the various delays a video packet experiences in IP networks, such as transmission, processing, propagation, and queuing delays, while the practical part examined video codec throughput over IP networks using the jperf tool and measured delay, jitter, and packet drops for different packet sizes using the IDT-G tool, and how these parameters affect the quality of the received video. The theoretical and practical results were presented in tables and plotted in graphs to show the performance of real-time video over IP networks. These results confirmed that MPEG-1 and MPEG-2 were highly impacted by the encapsulation and de-capsulation processes, whereas MPEG-4 was the least impacted by IPv4, IPv6, and the IP transition mechanisms in terms of throughput and wasted bandwidth. They also indicated that using the 6to4 and 6in4 tunneling mechanisms caused more bandwidth wastage, higher delay, jitter, and packet drop than native IPv4 and IPv6.
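The delay components listed in the theoretical part can be illustrated with a small calculation; the link rate, distance and the fixed processing and queuing terms below are illustrative assumptions only.

```python
# Minimal sketch of the one-way delay components mentioned above
# (transmission, propagation, processing and queuing delay).

def transmission_delay(packet_bits, link_rate_bps):
    return packet_bits / link_rate_bps

def propagation_delay(distance_m, signal_speed_mps=2e8):
    return distance_m / signal_speed_mps

def one_way_delay(packet_bits, link_rate_bps, distance_m,
                  processing_s=50e-6, queuing_s=200e-6):
    # processing and queuing delays are assumed fixed for the illustration
    return (transmission_delay(packet_bits, link_rate_bps)
            + propagation_delay(distance_m)
            + processing_s + queuing_s)

# e.g. a 1500-byte video packet on a 100 Mbit/s link over 500 km of fibre
print(one_way_delay(1500 * 8, 100e6, 500e3))
```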
APA, Harvard, Vancouver, ISO, and other styles
3

Su, Yeping. "Advanced techniques for video codec optimization /." Thesis, Connect to this title online; UW restricted, 2005. http://hdl.handle.net/1773/5933.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Robie, David Lee. "Error Correction and Concealment of Block Based, Motion-Compensated Temporal Prediction, Transform Coded Video." Diss., Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/7101.

Full text
Abstract:
The use of the Internet and wireless networks to bring multimedia to the consumer continues to expand. The transmission of these products is always subject to corruption due to errors such as bit errors or lost and ill-timed packets; however, in many cases, such as real-time video transmission, retransmission requests (ARQ) are not practical. Therefore, receivers must be capable of recovering from corrupted data. Errors can be mitigated using forward error correction in the encoder or error concealment techniques in the decoder. This thesis investigates the use of forward error correction (FEC) techniques in the encoder and error concealment in the decoder in block-based, motion-compensated, temporal prediction, transform codecs. It shows improvement over standard FEC applications and improvements in error concealment relative to the Moving Picture Experts Group (MPEG) standard. To this end, this dissertation describes the following contributions and proofs-of-concept in the area of error concealment and correction in block-based video transmission: a temporal error concealment algorithm which uses motion-compensated macroblocks from previous frames; a spatial error concealment algorithm which uses the Hough transform to detect edges in both foreground and background and applies directional interpolation or directional filtering to improve edge reproduction; a codec which uses data hiding to transmit error correction information; an enhanced codec which builds upon the last by improving performance in the error-free environment while maintaining excellent error recovery capabilities; and a method to allocate Reed-Solomon (R-S) packet-based forward error correction that decreases distortion (using a PSNR metric) at the receiver compared to standard FEC techniques. Finally, under the constraint of a constant bit rate, the tradeoff between traditional R-S FEC and alternate forward concealment information (FCI) is evaluated. Each of these developments is compared and contrasted to state-of-the-art techniques and is shown to provide improvements using widely accepted metrics. The dissertation concludes with a discussion of future work.
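For a flavour of the simplest form of spatial concealment mentioned above, the sketch below interpolates a lost 16x16 block from its four correctly received boundaries with inverse-distance weighting; the dissertation's Hough-transform, edge-directed method is considerably more elaborate, so this only illustrates the baseline idea.

```python
import numpy as np

# Baseline spatial error concealment: each pixel of a lost block is interpolated
# from the four boundary pixels, weighted by the distance to the opposite boundary.
# Assumes the block is interior, so all four boundaries were received correctly.

def conceal_spatial(frame, x, y, size=16):
    top = frame[y - 1, x:x + size].astype(float)
    bottom = frame[y + size, x:x + size].astype(float)
    left = frame[y:y + size, x - 1].astype(float)
    right = frame[y:y + size, x + size].astype(float)
    block = np.zeros((size, size))
    for r in range(size):
        for c in range(size):
            # weight of each boundary grows as the pixel gets closer to it
            w = np.array([size - r, r + 1, size - c, c + 1], dtype=float)
            vals = np.array([top[c], bottom[c], left[r], right[r]])
            block[r, c] = np.dot(w, vals) / w.sum()
    frame[y:y + size, x:x + size] = block.astype(frame.dtype)
```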
APA, Harvard, Vancouver, ISO, and other styles
5

Saw, Yoo-Sok. "Nonlinear rate control techniques for constant bit rate MPEG video coders." Thesis, University of Edinburgh, 1997. http://hdl.handle.net/1842/1381.

Full text
Abstract:
Digital visual communication has been increasingly adopted as an efficient new medium in a variety of different fields: multimedia computers, digital television, telecommunications, etc. Exchange of visual information between remote sites requires that digital video is encoded by compressing the amount of data and transmitting it through specified network connections. The compression and transmission of digital video is an amalgamation of statistical data coding processes which aims at efficient exchange of visual information without technical barriers due to different standards, services, media, etc. It is associated with a series of different disciplines of digital signal processing, each of which can be applied independently, and it includes a few different technical principles: distortion-rate theory, prediction techniques and control theory. The MPEG (Moving Picture Experts Group) video compression standard is based on this paradigm; thus, it contains a variety of different coding parameters which may result in different performance depending on their values. It specifies the bit stream syntax and the decoding process as its normative parts. The encoder details remain non-normative and are configured by a specific design. This means that the MPEG video encoder has a great deal of flexibility in the aspects of performance and implementation. This thesis deals with control techniques for the data rate of compressed video, which determine the encoding efficiency and video quality. Video rate control is achieved by adjusting the quantisation step size depending on the occupancy of a transmission buffer memory which stores the compressed video data for a specific period of time. Conventional video rate control techniques have generally been based either on linear predictive or on control-theoretic models. However, this thesis takes a different view of digital video and MPEG video coding, and focuses on the non-stationary and nonlinear nature of realistic moving pictures. Furthermore, considering the MPEG encoding structure involved in the different disciplines, a series of improvements for video rate control are proposed, each of which enhances the performance of the MPEG encoder. A nonlinear quantisation control technique is investigated, which controls the buffer occupancy with the quantiser using a series of nonlinear functions. Linear and nonlinear feed-forward networks are also employed to control the quantiser: the linear combiner is used as a linear estimator and a radial basis function network as a nonlinear one. Finally, fuzzy rule-based control is applied to exploit the advantages of the nonlinear control technique, which is able to provide linguistic judgement in the control mechanism. All these techniques are employed according to two global approaches (feed-forward and feedback) applied to the rate control. The performance evaluation is carried out in terms of controllability over bit rate variation and video quality, by conducting a series of simulations.
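The buffer-feedback principle described above can be sketched in a few lines: the quantiser step grows as the output buffer fills. The linear mapping below is the simplest (TM5-like) choice and only a stand-in for the nonlinear, neural and fuzzy controllers studied in the thesis; the buffer size and frame sizes are made up.

```python
# Buffer-occupancy-driven quantiser control: fuller buffer -> coarser quantiser.

BUFFER_SIZE_BITS = 2_000_000      # assumed transmission buffer capacity
Q_MIN, Q_MAX = 2, 31              # MPEG quantiser scale range

def next_quantiser(buffer_occupancy_bits):
    fullness = min(max(buffer_occupancy_bits / BUFFER_SIZE_BITS, 0.0), 1.0)
    return round(Q_MIN + fullness * (Q_MAX - Q_MIN))

def update_buffer(occupancy_bits, bits_produced, bits_drained):
    return max(occupancy_bits + bits_produced - bits_drained, 0)

occupancy = 0
for bits_of_frame in (180_000, 90_000, 60_000, 250_000):   # illustrative frame sizes
    q = next_quantiser(occupancy)
    occupancy = update_buffer(occupancy, bits_of_frame, bits_drained=120_000)
    print(f"Q={q}, buffer={occupancy}")
```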
APA, Harvard, Vancouver, ISO, and other styles
6

Wu, Yannan. "Artifact reduction for AVS and H.264 coded videos /." View abstract or full-text, 2009. http://library.ust.hk/cgi/db/thesis.pl?ECED%202009%20WUY.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Tan, Kwee Teck. "Objective picture quality measurement for MPEG-2 coded video." Thesis, University of Essex, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.324249.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Do, Viet Ha. "Réducteurs de bruit adaptatifs spatiaux et post-traitement pour codec MPEG-2." Sherbrooke : Université de Sherbrooke, 2000.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Yang, Hsueh-szu, and Benjamin Kupferschmidt. "Time Stamp Synchronization in Video Systems." International Foundation for Telemetering, 2010. http://hdl.handle.net/10150/605988.

Full text
Abstract:
ITC/USA 2010 Conference Proceedings / The Forty-Sixth Annual International Telemetering Conference and Technical Exhibition / October 25-28, 2010 / Town and Country Resort & Convention Center, San Diego, California
Synchronized video is crucial for data acquisition and telecommunication applications. For real-time applications, out-of-sync video may cause jitter, choppiness and latency. For data analysis, it is important to synchronize multiple video channels and data that are acquired from PCM, MIL-STD-1553 and other sources. Nowadays, video codecs can be easily obtained to play most types of video. However, a great deal of effort is still required to develop the synchronization methods that are used in a data acquisition system. This paper will describe several methods that TTC has adopted in our system to improve the synchronization of multiple data sources.
APA, Harvard, Vancouver, ISO, and other styles
10

Kieu, Cong Toai. "Prétraitement et post-traitement pour le codec MPEG1." Sherbrooke : Université de Sherbrooke, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
11

Mazhar, Ahmad Abdel Jabbar Ahmad. "Efficient compression of synthetic video." Thesis, De Montfort University, 2013. http://hdl.handle.net/2086/9019.

Full text
Abstract:
Streaming of on-line gaming video is a challenging problem because of the enormous amounts of video data that need to be sent during game playing, especially within the limitations of uplink capabilities. The encoding complexity is also a challenge because of the time delay while on-line gamers are communicating. The main goal of this research study is to propose an enhanced on-line game video streaming system. First, the most common video coding techniques have been evaluated. The evaluation study considers objective and subjective metrics. Three widespread video coding techniques are selected and evaluated in the study: H.264, MPEG-4 Visual and VP8. Diverse types of video sequences were used with different frame rates and resolutions. The effects of changing frame rate and resolution on compression efficiency and viewers' satisfaction are also presented. Results showed that the compression process and perceptual satisfaction are severely affected by the nature of the compressed sequence. As a result, H.264 showed higher compression efficiency for synthetic sequences and outperformed the other codecs in the subjective evaluation tests. Second, a fast inter prediction technique to speed up the encoding process of H.264 has been devised. The on-line game streaming service is a real-time application; thus, compression complexity significantly affects the whole process of on-line streaming. H.264 was recommended for synthetic video coding by the results of our codec comparison studies. However, it still suffers from high encoding complexity; thus a low-complexity coding algorithm is presented as a fast inter coding model with a reference management technique. The proposed algorithm was compared to a state-of-the-art method, and the results show better time and bit rate reduction with negligible loss of fidelity. Third, recommendations on the tradeoff between frame rate and resolution within given uplink capabilities are provided for H.264 video coding. The recommended tradeoffs are the result of extensive experiments using the Double Stimulus Impairment Scale (DSIS) subjective evaluation metric. Experiments showed that viewers' satisfaction is profoundly affected by varying frame rates and resolutions. In addition, increasing frame rate or frame resolution does not always guarantee an improvement in perceptual quality. As a result, tradeoffs are recommended to compromise between frame rate and resolution within a given bit rate to guarantee the highest user satisfaction. For system completeness and to facilitate the implementation of the proposed techniques, an efficient game video streaming management system is proposed. Compared to existing on-line live video service systems for games, the proposed system provides improved coding efficiency, complexity reduction and better user satisfaction.
APA, Harvard, Vancouver, ISO, and other styles
12

Gupta, Deepanker. "Multi-step-ahead prediction of MPEG-coded video source traffic using empirical modeling techniques." Texas A&M University, 2004. http://hdl.handle.net/1969.1/3201.

Full text
Abstract:
In the near future, multimedia will form the majority of Internet traffic, and the most popular standard used to transport and view video is MPEG. The MPEG media content is in the form of a time-series representing frame/VOP sizes. This time-series is extremely noisy, and analysis shows that it has very long-range time dependency, making it even harder to predict than a typical time-series. This work is an effort to develop multi-step-ahead predictors for the moving averages of frame/VOP sizes in MPEG-coded video streams. Both linear and non-linear system identification tools are used to solve the prediction problem, and their performance is compared. Linear modeling is done using Auto-Regressive Exogenous (ARX) models, and for non-linear modeling, Artificial Neural Networks (ANN) are employed. The ANN architectures used in this work are the Feed-forward Multi-Layer Perceptron (FMLP) and the Recurrent Multi-Layer Perceptron (RMLP). Recent research by Adas (October 1998), Yoo (March 2002) and Bhattacharya et al. (August 2003) has shown that multi-step-ahead prediction of individual frames is very inaccurate. Therefore, in this work we predict the moving average of the frame/VOP sizes instead of individual frames/VOPs. Several multi-step-ahead predictors are developed using the aforementioned linear and non-linear tools for two/four/six/ten-step-ahead predictions of the moving average of the frame/VOP size time-series of MPEG-coded video source traffic. The capability to predict future frame/VOP sizes, and hence bit rates, will enable more effective bandwidth allocation mechanisms, assisting in the development of advanced source control schemes needed to control multimedia traffic over wide area networks such as the Internet.
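The prediction target described above can be illustrated as follows: the frame-size series is smoothed with a moving average and forecast several steps ahead with a simple least-squares AR model, a stand-in for the ARX and neural-network models of the thesis; the synthetic trace and model order are assumptions.

```python
import numpy as np

# Moving-average smoothing of the frame-size series plus a simple multi-step AR forecast.

def moving_average(frame_sizes, window=12):
    kernel = np.ones(window) / window
    return np.convolve(frame_sizes, kernel, mode="valid")

def fit_ar(series, order=4):
    # least-squares fit of x[t] from the previous `order` values
    X = np.column_stack([series[i:len(series) - order + i] for i in range(order)])
    y = series[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def predict_ahead(series, coeffs, steps):
    history = list(series[-len(coeffs):])
    for _ in range(steps):
        history.append(float(np.dot(coeffs, history[-len(coeffs):])))
    return history[-steps:]

sizes = np.abs(np.random.default_rng(0).normal(40_000, 15_000, 500))  # synthetic trace
ma = moving_average(sizes)
coeffs = fit_ar(ma)
print(predict_ahead(ma, coeffs, steps=6))    # six-step-ahead forecast
```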
APA, Harvard, Vancouver, ISO, and other styles
13

Hentati, Manel. "Reconfiguration dynamique partielle de décodeurs vidéo sur plateformes FPGA par une approche méthodologique RVC (Reconfigurable Video Coding)." Rennes, INSA, 2012. http://www.theses.fr/2012ISAR0027.

Full text
Abstract:
The main purpose of this PhD is to contribute to the design and implementation of a reconfigurable decoder using the MPEG-RVC standard. MPEG-RVC, developed by MPEG, aims at providing a unified high-level specification of current and future MPEG video coding technologies using a dataflow model named RVC-CAL. It allows great flexibility and the reuse of existing standards in a reconfiguration process for decoding solutions, and offers the means to overcome the lack of interoperability between the many video codecs deployed in the market. In this work, we propose a rapid prototyping methodology to provide an efficient and optimized implementation of RVC decoders on hardware targets. Our design flow is based on dynamic partial reconfiguration (DPR) to validate the reconfiguration approaches allowed by MPEG-RVC. With DPR, a hardware module can be replaced by another one that has the same function or the same algorithm but a different architecture. This concept allows the designer to configure various decoders according to the data inputs or the requirements (latency, speed, power consumption). DPR can also be used to achieve a hierarchical implementation of RVC applications. The use of MPEG-RVC and DPR improves the development process and the decoder performance. However, DPR raises several problems, such as the placement of hardware tasks and the fragmentation of the FPGA area, which influence the application performance. We therefore propose an off-line placement approach based on a linear programming strategy to find the optimal placement of hardware tasks and to minimize resource utilization. Application to different data combinations and a comparison with a state-of-the-art method show the high performance of the proposed approach.
APA, Harvard, Vancouver, ISO, and other styles
14

Halldén, Max. "Statistical Multiplexing of Video for Fixed Bandwidth Distribution : A multi-codec implementation and evaluation using a high-level media processing library." Thesis, Linköpings universitet, Informationskodning, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-150023.

Full text
Abstract:
When distributing multiple TV programs on a fixed-bandwidth channel, the bit rate of each video stream is often constant. Since the bit rate required for constant video quality typically varies wildly, this is a very suboptimal solution. By instead sharing the total bit rate among all programs, the video quality can be increased by allocating bit rate where it is needed. This thesis explores the statistical multiplexing problem for a specific hardware platform, with the limitations and advantages of that platform. A solution for statistical multiplexing is proposed and evaluated using the major codecs used for TV distribution today. The main advantages of the statistical multiplexer are a much more even quality and a higher minimum quality achieved across all streams. While the solution needs a faster method for bit rate approximation to be practical in terms of performance, it is shown to work as intended.
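A toy version of the allocation idea behind statistical multiplexing is sketched below: a fixed channel budget is shared among programmes in proportion to an estimate of their coding complexity, with a floor per stream. The numbers and the complexity measure are illustrative assumptions, not the thesis's algorithm.

```python
# Share a fixed channel budget among programmes in proportion to coding complexity.

TOTAL_KBPS = 20_000               # assumed fixed channel capacity
MIN_KBPS = 1_000                  # assumed floor so every stream stays decodable

def allocate(complexities):
    """complexities: per-programme estimates (e.g. bits needed at a reference quality)."""
    n = len(complexities)
    spare = TOTAL_KBPS - n * MIN_KBPS
    total_c = sum(complexities)
    return [MIN_KBPS + spare * c / total_c for c in complexities]

print(allocate([3.0, 1.0, 7.5, 2.5]))   # harder content gets a larger share
```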
APA, Harvard, Vancouver, ISO, and other styles
15

Zhang, Jian (Electrical Engineering, Australian Defence Force Academy, UNSW). "Error resilience for video coding services over packet-based networks." Awarded by: University of New South Wales - Australian Defence Force Academy, School of Electrical Engineering, 1999. http://handle.unsw.edu.au/1959.4/38652.

Full text
Abstract:
Error resilience is an important issue when coded video data is transmitted over wired and wireless networks. Errors can be introduced by network congestion, mis-routing and channel noise. These transmission errors can result in bit errors being introduced into the transmitted data or in packets of data being completely lost. Consequently, the quality of the decoded video is degraded significantly. This thesis describes new techniques for minimising this degradation. To verify video error resilience tools, it is first necessary to consider the methods used to carry out experimental measurements. For most audio-visual services, streams of both audio and video data need to be transmitted simultaneously on a single channel. The inclusion of the impact of multiplexing schemes, such as MPEG-2 Systems, in error resilience studies is also an important consideration. It is shown that error resilience measurements including the effect of the Systems Layer differ significantly from those based only on the Video Layer. Two major issues of error resilience are investigated within this thesis: resynchronisation after error detection, and error concealment. Results for resynchronisation using small slices, adaptive slice sizes and macroblock resynchronisation schemes are provided. These measurements show that the macroblock resynchronisation scheme achieves the best performance, although it is not included in the MPEG-2 standard. The performance of the adaptive slice size scheme, however, is similar to that of the macroblock resynchronisation scheme, and this approach is compatible with the MPEG-2 standard. The most important contribution of this thesis is a new concealment technique, namely Decoder Motion Vector Estimation (DMVE). The decoded video quality can be improved significantly with this technique. Basically, it utilises the temporal redundancy between the current and the previous frames, and the correlation between lost macroblocks and their surrounding pixels. Motion estimation can therefore be applied again to search the previous picture for a match to the lost macroblocks. The process is similar to the one the encoder performs, but it takes place in the decoder. The integration of techniques such as DMVE with small slices, adaptive slice sizes or macroblock resynchronisation is also evaluated. This provides an overview of the performance produced by individual techniques compared to the combined techniques. Results show that high performance can be achieved by integrating DMVE with an effective resynchronisation scheme, even at high cell loss rates. The results of this thesis demonstrate clearly that the MPEG-2 standard is capable of providing a high level of error resilience, even in the presence of high loss. The key to this performance is appropriate tuning of encoders and effective concealment in decoders.
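The DMVE idea can be sketched roughly as follows: the decoder runs a small motion search in the previous frame, matching a band of correctly received pixels around the lost macroblock, and the best match supplies the replacement block. The search range, band width and the restriction to interior blocks are simplifying assumptions, not the exact procedure of the thesis.

```python
import numpy as np

# Decoder-side motion search for a lost macroblock: match the ring of received pixels
# around the hole against the previous frame and copy the best-matching block.

def dmve_conceal(prev, cur, x, y, size=16, band=2, search=8):
    h, w = prev.shape
    # ring of surrounding pixels in the current frame (assumed correctly received)
    ring_cur = cur[y - band:y + size + band, x - band:x + size + band].astype(float)
    best, best_cost = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            px, py = x + dx, y + dy
            if px - band < 0 or py - band < 0 or px + size + band > w or py + size + band > h:
                continue
            ring_prev = prev[py - band:py + size + band,
                             px - band:px + size + band].astype(float)
            diff = np.abs(ring_prev - ring_cur)
            diff[band:band + size, band:band + size] = 0     # ignore the lost interior
            cost = diff.sum()
            if cost < best_cost:
                best_cost, best = cost, (dx, dy)
    dx, dy = best
    cur[y:y + size, x:x + size] = prev[y + dy:y + dy + size, x + dx:x + dx + size]
    return best
```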
APA, Harvard, Vancouver, ISO, and other styles
16

Pitrey, Yohann. "Stratégies d'encodage pour codeur vidéo scalable." Phd thesis, INSA de Rennes, 2009. http://tel.archives-ouvertes.fr/tel-00461631.

Full text
Abstract:
The aim of this thesis is to develop rate-control strategies for the MPEG-4 SVC scalable video codec. Several approaches are proposed, depending on the required precision and the desired complexity. The quality of the decoded video stream is also taken into account in order to improve the visual impression. The multiplication of transmission channels and the diversity of devices able to play video content force broadcasters to spend a great deal of time and resources to provide optimal video quality in every context. Scalable video was developed in response to this need to adapt video content to different delivery contexts. The H.264/MPEG-4 SVC (Scalable Video Coding) standard offers three types of scalability (spatial, temporal and quality), which make it possible to adapt the resolution, the frame rate and the quality of the stream as needed. A single video stream is encoded, containing several layers of different resolutions coded with respect to one another, so that the coding of the whole is more efficient. Rate control adapts the bit rate at the encoder output to meet constraints related to the transmission or decoding of the video stream. Given a target, the available budget is distributed among the different elements of the stream, and a rate model is used to anticipate the behaviour of the bit rate as a function of the encoding parameters in order to respect the constraints imposed on the stream. Starting from this problem, two approaches are proposed. The first is based on a pre-encoding of each picture to provide a basis for the rate model; it yields very precise regulation, with an error between the allocated budget and the actual bit rate below 7% for the three types of scalability. The second approach uses information gathered from previous pictures as the basis for the rate model of the picture to be encoded; it requires no pre-encoding and does not increase the complexity of the encoding process. Moreover, the loss of performance compared with the two-pass approach is minimal, and the bit-per-second target is met precisely. Finally, a method for reducing quality variations is proposed to improve the visual impression perceived by the user. The results show that the presented method is able to regulate the bit rate with great precision for the three types of scalability, while reducing quality variations and keeping computational complexity very low. These assets make it not only interesting in terms of performance, but also applicable in practical contexts where time resources are limited.
APA, Harvard, Vancouver, ISO, and other styles
17

Zwingelstein, Marie. "Etude de l'optimisation d'un système DMT-ADSL : application à la transmission video MPEG-2 en mode hiérarchique." Valenciennes, 1999. https://ged.uphf.fr/nuxeo/site/esupversions/4b8844ca-e7ac-4cb4-81df-9d40eda5bd20.

Full text
Abstract:
The work presented in this thesis relates to the ADSL (Asymmetric Digital Subscriber Line) digital transmission system, which uses existing subscriber lines in copper twisted pairs to transmit data at several megabits per second. One key element of ADSL is the use of DMT (Discrete MultiTone) multicarrier modulation, which makes it easy to adapt the transmitted signal to the channel through the choice of the frequency-domain bit and power loading. In a first part, a brief review of the influence of bit and power loading on the BER (Bit Error Rate) leads us to propose two original loading methods with different performance/complexity trade-offs. For both methods, the bit loading is simply calculated to be optimal in the sense of channel capacity. Regarding power, the first (optimal) method allocates it to minimise the BER, and the second (simpler) method allocates it so that the conventional and slightly sub-optimal assumption of an equal SER (Symbol Error Rate) on all sub-carriers is satisfied. Simulation results on a set of characteristic CSA loops have shown that both proposed methods perform better than the conventional ones, in particular those of Hughes-Hartogs and Peter Chow, generally with lower computational complexity. The second part of the work is dedicated to MPEG-2 video transmission over ADSL. The originality is the use of a bi-resolution coding and transmission scheme which provides a high degree of protection to the most important video data at the expense of the less important data, so that the quality of service (QoS) is improved. In this context, we have proposed three different architectures for a bi-resolution ADSL system. They act either at the DMT modulation level or at the FEC (Forward Error Correction) level, by assigning different Reed-Solomon codes to the important and less important data. The results have shown the validity of these architectures: for a typical proportion of most important data of around 30%, the BER of the most important data can be divided by 100 in comparison with the mono-resolution BER, whereas the BER of the less important data is only multiplied by about 2.
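The kind of per-sub-carrier bit loading discussed above can be illustrated with the classical SNR-gap rule, where each sub-carrier carries roughly log2(1 + SNR/Gamma) bits. The SNR values and the gap below are assumptions for illustration; the thesis derives optimised bit and power distributions rather than this simple rule.

```python
import math

# SNR-gap bit loading: bits per sub-carrier ~ log2(1 + SNR / Gamma).

def bit_loading(snr_linear, gamma_db=9.8, max_bits=15):
    # gamma_db is an assumed SNR gap for the target symbol-error rate
    gamma = 10 ** (gamma_db / 10)
    return [min(int(math.log2(1 + s / gamma)), max_bits) for s in snr_linear]

subcarrier_snr = [10 ** (db / 10) for db in (35, 30, 27, 22, 15, 9, 4)]  # example SNRs
bits = bit_loading(subcarrier_snr)
print(bits, "total bits per DMT symbol:", sum(bits))
```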
APA, Harvard, Vancouver, ISO, and other styles
18

Ahmed, Toufik. "Adaptative packet video streaming over IP networks : a cross layer approach." Versailles-St Quentin en Yvelines, 2003. http://www.theses.fr/2003VERS0042.

Full text
Abstract:
There is an increasing demand today for video services over IP networks. However, various network characteristics make the large-scale deployment of these applications more challenging than traditional Internet applications like email and the web. Applications that transmit audiovisual data over IP must cope with the time-varying bandwidth and delay of the network and must be resilient to packet loss and errors. This thesis examines these challenges and presents a cross-layer adaptive video streaming system for large-scale IP networks with statistical quality of service (QoS) guarantees (i.e. IP DiffServ). Video sequences are typically compressed according to the emerging MPEG-4 multimedia framework to achieve bandwidth efficiency and content-based interactivity. The original characteristic of MPEG-4 is to provide an integrated object-oriented representation and coding of natural and synthetic audio-visual content, for manipulation and transport over a broad range of communication infrastructures. The originality of this work is to propose a cross-layer approach for resolving some of the critical issues in delivering packet video data over IP networks with satisfactory quality of service. While most current and past work on this topic respects the protocol-layer isolation paradigm inherited from the ISO reference model, the key idea behind our work is to break this limitation and instead inject content-level semantics and service-level requirements into the proposed IP video transport mechanisms and protocols.
APA, Harvard, Vancouver, ISO, and other styles
19

Buffet, Julien. "Techniques de protection contre les erreurs pour le streaming audio sur IP." Châtenay-Malabry, Ecole centrale de Paris, 2002. http://www.theses.fr/2002ECAP0857.

Full text
Abstract:
When audio data is transferred in real time over the best-effort service provided by the Internet, uncontrolled data losses can significantly degrade listening quality. To improve this quality, an error-protection policy is necessary. Error-protection techniques fall into two types: those that depend on the coding and those that are independent of it. Coding-dependent techniques rely on the properties of the underlying coding for error protection. An error-protection technique adapted to MPEG-4 Audio coding, combining rate adaptation, packet interleaving and FEC-based error recovery, has been developed. For rate adaptation and error recovery, the granularity and scalability properties of the MPEG-4 Audio stream are used, and an interleaving mechanism that adapts to the loss process is implemented. The combination of these mechanisms yields a TCP-friendly protocol for transferring MPEG-4 data in real time over IP, which has been implemented for unicast streaming. Most coding-independent FEC techniques are adaptations of general coding theory to the particular case of streaming errors. Error-correcting codes such as Hamming or Reed-Solomon codes can detect and correct one or more errors appearing sporadically in a channel. But in the Internet streaming case that concerns us, the errors have already been detected by the lower-layer protocols; the only remaining problem is correction. This is a much easier problem than detection and correction combined, and the theory of error-correcting codes is not needed to solve it. A new method dedicated to streaming problems has been developed using the theory of linear systems over finite fields and rings.
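The packet-interleaving idea mentioned above can be sketched as a simple block interleaver: packets are written row by row and sent column by column, so a burst of consecutive losses becomes isolated losses after de-interleaving. The depth and packet count are illustrative; the thesis uses an interleaver adapted to the measured loss process.

```python
# Block interleaver: adjacent packets are sent `depth` positions apart.

def interleave(packets, depth):
    rows = [packets[i:i + depth] for i in range(0, len(packets), depth)]
    return [row[c] for c in range(depth) for row in rows if c < len(row)]

def deinterleave(packets, depth, original_len):
    num_rows = -(-original_len // depth)        # ceiling division
    out = [None] * original_len
    it = iter(packets)
    for c in range(depth):
        for r in range(num_rows):
            idx = r * depth + c
            if idx < original_len:
                out[idx] = next(it)
    return out

pkts = list(range(12))
sent = interleave(pkts, depth=4)
print(sent)                                     # [0, 4, 8, 1, 5, 9, 2, 6, 10, 3, 7, 11]
print(deinterleave(sent, depth=4, original_len=12))
```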
APA, Harvard, Vancouver, ISO, and other styles
20

Hachicha, Khalil. "Algorithmes et architectures électroniques pour l'intégration de la détection de mouvement markovienne aux codeurs vidéo MPEG4 / H264." Paris 6, 2005. http://www.theses.fr/2005PA066410.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Zeybek, Emre. "Compression multimodale du signal et de l’image en utilisant un seul codeur." Thesis, Paris Est, 2011. http://www.theses.fr/2011PEST1060/document.

Full text
Abstract:
The objective of this thesis is to study and analyse a new compression strategy, whose principle is to compress data from multiple modalities together using a single encoder. This approach is called 'Multimodal Compression'. In this context, an image and an audio signal can be compressed jointly by a single image encoder (e.g. a standard codec), without the need to integrate an audio codec. The basic idea developed in this thesis is to insert the samples of a signal in place of certain pixels of the 'carrier' image while preserving the quality of the information after the encoding and decoding process. This technique should not be confused with watermarking or steganography, since Multimodal Compression does not aim to conceal one piece of information inside another. The two main objectives of Multimodal Compression are, on the one hand, to improve compression performance in terms of rate-distortion and, on the other hand, to optimise the use of the hardware resources of a given embedded system (e.g. acceleration of encoding/decoding time). Throughout this work we study and analyse variants of Multimodal Compression, whose core consists in designing mixing functions applied before encoding and separation functions applied after decoding. The approach is validated on standard images and signals as well as on specific data such as biomedical images and signals. The work concludes with an extension of the Multimodal Compression strategy to video.
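A very simplified sketch of the mixing step described above: audio samples are scaled to the pixel range and written into a regular sub-grid of the carrier image before the whole image is passed to an ordinary image codec. The insertion pattern and scaling are illustrative assumptions; the thesis studies the mixing and separation functions in much more detail.

```python
import numpy as np

# Mix audio samples into a carrier image, then separate them again.

def mix(image, audio, step=4):
    """Overwrite every `step`-th pixel (in raster order) with an audio sample."""
    mixed = image.copy().ravel()
    samples = np.clip((audio + 1.0) * 127.5, 0, 255).astype(mixed.dtype)  # [-1,1] -> [0,255]
    positions = np.arange(0, step * len(samples), step)
    mixed[positions] = samples
    return mixed.reshape(image.shape), positions

def separate(mixed, positions):
    return mixed.ravel()[positions].astype(float) / 127.5 - 1.0

image = np.zeros((64, 64), dtype=np.uint8)
audio = np.sin(np.linspace(0, 8 * np.pi, 256))          # toy audio signal
mixed, pos = mix(image, audio)
print(np.max(np.abs(separate(mixed, pos) - audio)))     # small quantisation error only
```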
APA, Harvard, Vancouver, ISO, and other styles
22

Dvořák, Martin. "Výukový video kodek." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2012. http://www.nusl.cz/ntk/nusl-219882.

Full text
Abstract:
The first goal of this diploma thesis is to study the basic principles of video signal compression and to introduce the techniques used to reduce irrelevancy and redundancy in the video signal. The second goal is, on the basis of this information about compression tools, to implement the individual compression tools in the Matlab programming environment and to assemble a simple model of a video codec. The thesis describes the three basic blocks, namely inter-frame coding, intra-frame coding and variable-length coding, according to the MPEG-2 standard.
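One of the building blocks mentioned above, coding with variable-length words, starts from a zig-zag scan of the quantised 8x8 DCT block followed by (run, level) pairing; the sketch below shows that step on a made-up block. The thesis works in Matlab; Python is used here purely for illustration.

```python
# Zig-zag scan of a quantised 8x8 block and conversion to (run-of-zeros, level) pairs,
# the form that is subsequently entropy-coded with variable-length codes in MPEG-2.

def zigzag_order(n=8):
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def run_level_pairs(block):
    scanned = [block[r][c] for r, c in zigzag_order(len(block))]
    pairs, run = [], 0
    for coeff in scanned[1:]:           # DC coefficient is coded separately
        if coeff == 0:
            run += 1
        else:
            pairs.append((run, coeff))
            run = 0
    pairs.append(("EOB",))              # end-of-block marker
    return pairs

block = [[52, 10, 0, 0, 0, 0, 0, 0],
         [-8, 3, 0, 0, 0, 0, 0, 0],
         [ 2, 0, 0, 0, 0, 0, 0, 0]] + [[0] * 8 for _ in range(5)]
print(run_level_pairs(block))           # [(0, 10), (0, -8), (0, 2), (0, 3), ('EOB',)]
```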
APA, Harvard, Vancouver, ISO, and other styles
23

Šiška, Michal. "Ztrátová komprese pohyblivých obrazů." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2011. http://www.nusl.cz/ntk/nusl-219298.

Full text
Abstract:
This thesis deals with the description of lossy video compression. The theoretical part of the work describes the fundamentals of video compression and the standards for lossy as well as lossless video and still image compression. The practical part follows up with the design of a Java program for simulating an MPEG codec.
APA, Harvard, Vancouver, ISO, and other styles
24

Halbach, Till. "Error-robust coding and transformation of compressed hybered hybrid video streams for packet-switched wireless networks." Doctoral thesis, Norwegian University of Science and Technology, Faculty of Information Technology, Mathematics and Electrical Engineering, 2004. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-136.

Full text
Abstract:

This dissertation considers packet-switched wireless networks for transmission of variable-rate layered hybrid video streams. Target applications are video streaming and broadcasting services. The work can be divided into two main parts.

In the first part, a novel quality-scalable scheme based on coefficient refinement and encoder quality constraints is developed as a possible extension to the video coding standard H.264. After a technical introduction to the coding tools of H.264 with the main focus on error resilience features, various quality scalability schemes in previous research are reviewed. Based on this discussion, an encoder-decoder framework is designed for an arbitrary number of quality layers, hereby also enabling region-of-interest coding. After that, the performance of the new system is exhaustively tested, showing that the bit rate increase typically encountered with scalable hybrid coding schemes is, for certain coding parameters, only small to moderate. The double- and triple-layer constellations of the framework are shown to be superior to other systems.
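A hedged sketch of the coefficient-refinement principle named above: the base layer carries coarsely quantised coefficients and an enhancement layer transmits the extra precision needed to reach a finer quantiser. The step sizes and the two-layer split are assumptions, not the actual H.264 extension developed in the dissertation.

```python
# Two-layer coefficient refinement: coarse base layer plus a refinement layer.

BASE_STEP = 16
ENH_STEP = 4                         # the enhancement layer refines to this step size

def encode_layers(coeff):
    base = round(coeff / BASE_STEP)
    refinement = round((coeff - base * BASE_STEP) / ENH_STEP)
    return base, refinement

def decode(base, refinement=None):
    value = base * BASE_STEP
    if refinement is not None:       # enhancement layer received
        value += refinement * ENH_STEP
    return value

for c in (3, 23, -40, 121):
    b, r = encode_layers(c)
    print(c, "base-only:", decode(b), "refined:", decode(b, r))
```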

The second part considers layered code streams as generated by the scheme of the first part. Various error propagation issues in hybrid streams are discussed, which leads to the definition of a decoder quality constraint and a segmentation of the code stream to transmit. A packetization scheme based on successive source rate consumption is drafted, followed by the formulation of the channel code rate optimization problem for an optimum assignment of available codes to the channel packets. Proper MSE-based error metrics are derived, incorporating the properties of the source signal, a terminate-on-error decoding strategy, error concealment, inter-packet dependencies, and the channel conditions. The Viterbi algorithm is presented as a low-complexity solution to the optimization problem, showing a great adaptivity of the joint source channel coding scheme to the channel conditions. An almost constant image quality is achieved, even in mismatch situations, while the overall channel code rate decreases only as little as necessary as the channel quality deteriorates. It is further shown that the variance of code distributions is small, and that the codes are assigned irregularly to all channel packets.

A double-layer constellation of the framework clearly outperforms other schemes by a substantial margin.

Keywords — Digital lossy video compression, visual communication, variable bit rate (VBR), SNR scalability, layered image processing, quality layer, hybrid code stream, predictive coding, progressive bit stream, joint source channel coding, fidelity constraint, channel error robustness, resilience, concealment, packet-switched, mobile and wireless ATM, noisy transmission, packet loss, binary symmetric channel, streaming, broadcasting, satellite and radio links, H.264, MPEG-4 AVC, Viterbi, trellis, unequal error protection

APA, Harvard, Vancouver, ISO, and other styles
25

"A Cost Shared Quantization Algorithm and its Implementation for Multi-Standard Video CODECS." Thesis, 2012. http://hdl.handle.net/10388/ETD-2012-12-842.

Full text
Abstract:
The current trend of digital convergence creates the need for a video encoder and decoder system, known as a codec for short, that supports multiple video standards on a single platform. In a modern video codec, quantization is a key unit used for video compression. In this thesis, a generalized quantization algorithm and its hardware implementation are presented to compute quantized coefficients for six different video codecs, including the emerging codec High Efficiency Video Coding (HEVC). HEVC, the successor to H.264/MPEG-4 AVC, aims to substantially improve coding efficiency compared to AVC High Profile. The thesis presents a high-performance, circuit-shared architecture that can perform the quantization operation for HEVC, H.264/AVC, AVS, VC-1, MPEG-2/4 and Motion JPEG (MJPEG). Since HEVC was still in the drafting stage, the architecture was designed in such a way that any final changes can be accommodated. The proposed quantizer architecture is completely division-free, as the division operation is replaced by multiplication, shift and addition operations. The design was implemented on an FPGA and later synthesized in CMOS 0.18 μm technology. The results show that the proposed design satisfies the requirements of all the codecs, with a maximum decoding capability of 60 fps for 1080p HD video at 187.3 MHz on a Xilinx Virtex4 LX60 FPGA. The scheme is also suitable for low-cost implementation in modern multi-codec systems.
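The division-free idea described above can be sketched as follows: dividing by the quantisation step is replaced by multiplication with a pre-scaled integer factor, an added rounding offset and a right shift. The 16-bit scaling constant is an illustrative choice, not the exact parameters of the thesis design.

```python
# Division-free quantisation: replace coeff / qstep by (coeff * mult + offset) >> SHIFT.

SHIFT = 16

def make_multiplier(qstep):
    return round((1 << SHIFT) / qstep)          # computed once per quantiser step

def quantise(coeff, multiplier, rounding=1 << (SHIFT - 1)):
    sign = -1 if coeff < 0 else 1
    return sign * ((abs(coeff) * multiplier + rounding) >> SHIFT)

mult = make_multiplier(qstep=28)
for c in (-300, -31, 0, 13, 57, 812):
    print(c, "->", quantise(c, mult), "reference:", round(c / 28))
```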
APA, Harvard, Vancouver, ISO, and other styles
26

Xian, Lee Si, and 李思賢. "Design of an MPEG Video Codec." Thesis, 2000. http://ndltd.ncl.edu.tw/handle/38791941775292457156.

Full text
Abstract:
Master's
National Taiwan University of Science and Technology
Department of Electrical Engineering
88
With the advance of multimedia technologies, the demand for video applications has grown dramatically. However, the data size of digital video is huge, so efficient data compression techniques are required to reduce the storage space and communication bandwidth needed for storing and transmitting it. This thesis designs a software decoder and encoder that are compliant with the MPEG-1 standard. The design includes DCT, quantization, VLC and motion estimation units. The research also employs double-buffering and multithreading techniques to speed up the encoder and decoder.
APA, Harvard, Vancouver, ISO, and other styles
27

Liu, Dong-yun, and 劉東昀. "Implementation of software MPEG-4 like video CODEC." Thesis, 1997. http://ndltd.ncl.edu.tw/handle/85788827927244800369.

Full text
Abstract:
Master's
National Taiwan University
Graduate Institute of Computer Science and Information Engineering
85
MPEG-4 uses object-based compression to break through the limitations of earlier standards and to enable more multimedia applications. Its ultimate goal is to integrate wireless communication data, computer-synthesized content and television audio-visual data within a single application, and to define unified specifications for storage and delivery. This thesis takes the video part of MPEG-4 as a starting point for MPEG-4-related research and demonstrates a prototype of MPEG-4 and the concepts it contains. With the success of the MPEG-1 and MPEG-2 coding standards, digital television is possible today. The newest action of the MPEG committee is to develop the MPEG-4 standard for multimedia communications, aiming to provide a standard that copes with the requirements of current and future multimedia applications. The first part of this thesis describes the kinds of multimedia applications MPEG-4 intends to support. The second part describes the MPEG-4 video encoding and decoding algorithms in the Verification Models and functionalities such as object-based coding, user interaction, decoder downloadability, and spatial and temporal scalability. In the third chapter, a Java-based software video decoder is presented. The fourth chapter describes the implementation of the proposed software MPEG-4-like video encoder, and the performance of the encoder is presented at the end of that chapter.
APA, Harvard, Vancouver, ISO, and other styles
28

Huang, Shih-Chia, and 黃士嘉. "Optimization of Video Codec for MPEG-4 and H.264." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/30840957282492149889.

Full text
Abstract:
Doctoral
National Taiwan University
Graduate Institute of Electrical Engineering
97
We propose four topics: spatial error concealment, temporal error concealment and hybrid error concealment approaches at the video decoder, and memory management (MM) schemes at the video encoder. Highly compressed video bitstreams transmitted over error-prone communication networks can suffer from packet erasures. In order to prevent error-catalysed artifacts from producing visible corruption of the affected video frames, the use of error concealment (EC) at the video decoder becomes essential, especially for wireless video transmission, which can suffer packet loss more easily due to fluctuating channel conditions. Spatial error concealment (SEC) techniques are very useful in the recovery of impaired video sequences, especially in the presence of scene changes, irregular motion, and the appearance or disappearance of objects. When errors occur in the first frame, the corrupted macroblocks (MBs) must be recovered by SEC schemes in order to prevent the propagation of errors to the succeeding inter-coded frames. We propose two SEC methods: one conceals the different kinds of damaged MBs under any condition, and the other is a speed-up method that utilises an H.264 coding tool, directional spatial intra prediction, to conceal the entire spectrum of damaged MBs in intra-coded blocks. Temporal error concealment (TEC) techniques are usually successful when there is continuously high correlation between the frames of the coded sequence. The proposed TEC techniques consist of a novel mathematical model, the optimum regression plane, developed for the repair of damaged motion vectors, and a framework that performs variable-block-size motion compensation based on predictive motion vectors in a Laplacian-distribution model space for the H.264 decoder. We also propose an integrated hybrid error concealment method consisting of both SEC and TEC techniques. Experiments performed using the proposed hybrid method, combining the above spatial and temporal estimation elements, fulfilled the expectations of the overall scheme. The experimental results show that the proposed method offers excellent gains of up to 10.62 dB compared to the Joint Model (JM) decoder for a wide range of benchmark sequences, without any considerable increase in time demand. The external memory bandwidth for motion estimation is the most critical issue with respect to the limited memory bandwidth and power consumption of embedded video coding systems. We therefore also propose an efficient and innovative memory bandwidth reduction scheme for the video encoder, using data prediction and data reuse techniques. With traditional data reuse schemes for fast motion estimation, there is always a trade-off between the reduction of memory bandwidth and the required internal memory size. Taking advantage of the proposed data prediction and data reuse techniques for fast motion estimation, we significantly reduce both the required memory bandwidth and the internal memory size. Experiments performed using the proposed enhanced data prediction and data reuse scheme show excellent gains, in some instances using only 37% of the external memory bandwidth and 7% of the internal memory size of the traditional data reuse scheme.
APA, Harvard, Vancouver, ISO, and other styles
29

Lin, Wei-Cheng, and 林威丞. "Memory Access Reduction for Low Power MPEG/H.264 Video Codec." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/63336922262743897176.

Full text
Abstract:
Doctoral dissertation
National Cheng Kung University
Department of Electrical Engineering (Master's and Doctoral Program)
96
Power consumption has become a major concern in the design of mobile multimedia systems that use MPEG video compression technology. An MPEG video decoder/encoder involves intensive memory accesses, which make the memory subsystem both a system performance bottleneck and the primary consumer of overall system energy. This dissertation presents several techniques to reduce the number of memory accesses and thereby alleviate the impact of the memory subsystem. First, a reusable macroblock detector that exploits the stationary-macroblock characteristic to identify reusable data stored in the frame memory is proposed for both an MPEG-4 simple profile video decoder and an MPEG-4 advanced simple profile video decoder. The experimental results show that reusing this already existing data can eliminate about 25% of memory traffic without any sacrifice in image quality. Next, we present two data-reuse policies that remove redundant memory accesses and avoid unnecessary motion estimation operations for an H.264 baseline profile video decoder/encoder. The proposed approaches reduce memory accesses by 30% (37%) in the encoder (decoder) and motion estimation computation by 23%, without impact on coding efficiency.
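A rough C sketch of the stationary-macroblock idea is shown below; the structure fields and the decision rule (zero motion vector and no coded residual) are an illustrative approximation, not the hardware detector proposed in the dissertation.

    #include <stdbool.h>

    /* Illustrative check for a "reusable" (stationary) inter macroblock:
     * a zero motion vector and no coded residual mean the reconstructed MB
     * is identical to the co-located MB already stored in frame memory,
     * so the memory read/write for this MB can be skipped entirely. */
    typedef struct {
        bool is_intra;
        int  mv_x, mv_y;       /* motion vector in pixel units          */
        int  coded_block_flag; /* nonzero if any residual coefficients  */
    } MacroblockInfo;

    static bool is_reusable(const MacroblockInfo *mb)
    {
        return !mb->is_intra &&
               mb->mv_x == 0 && mb->mv_y == 0 &&
               mb->coded_block_flag == 0;
    }

    /* In a decoding loop, reusable MBs would simply be left untouched in
     * the frame buffer, removing the corresponding external memory traffic. */
    int count_skippable_accesses(const MacroblockInfo *mbs, int n)
    {
        int skipped = 0;
        for (int i = 0; i < n; i++)
            if (is_reusable(&mbs[i]))
                skipped++;
        return skipped;
    }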
APA, Harvard, Vancouver, ISO, and other styles
30

Chiang, Ming-Chang, and 江明昌. "A Study of MPEG-4 Video Object Codec and Rate Control." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/23264826218227805678.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Department of Electrical Engineering
89
The video compression method of MPEG-4 differs from that of the earlier MPEG-1 and MPEG-2 standards. MPEG-1 and MPEG-2 are frame-based compression methods, while MPEG-4 is object-based. Here an object refers to a region in the frame; a frame may contain several video objects, and encoding is based on these objects. Because an object can have an arbitrary shape, the shape information must be encoded in addition to the other parameters such as texture and motion. This is the key difference between MPEG-4 and the earlier MPEG-1 and MPEG-2 standards. In addition to encoding the objects, the way objects are composed into a frame must also be encoded. At the decoder, the compressed bitstream is decoded into several objects and composition information, and the compositor then rearranges the objects in the frame according to the composition information and plays the video. Since the compression unit is the object, MPEG-4 can provide many object-level manipulation functionalities. Considering the transmission of video bitstreams over networks, this thesis proposes a rate control scheme that selects the quantization scale at the frame level and adjusts the quantization parameter of each macroblock according to its MB type and its correlation with neighboring MBs. Because human visual characteristics are taken into account, the video produced by our proposal is more visually acceptable. The experimental results show that the PSNR of the video is higher with our proposal than with the TM5 rate control scheme adopted by MPEG-2. Compared with the MPEG-4 rate control scheme, our proposal produces a more stable video and slightly reduces the encoded bits, at the cost of a slight reduction in PSNR.
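As a hedged illustration of the kind of macroblock-level quantiser adjustment described above (the exact rule and thresholds used in the thesis are not reproduced here), the logic could be sketched in C as:

    /* Illustrative MB-level quantisation adjustment: start from a
     * frame-level QP chosen by the rate controller, then nudge each
     * macroblock's QP according to its type and its activity relative to
     * neighbouring MBs, so that smooth, visually sensitive regions get
     * finer quantisation and busy regions absorb the saved bits. */
    static int clamp(int v, int lo, int hi) { return v < lo ? lo : (v > hi ? hi : v); }

    int adjust_mb_qp(int frame_qp, int is_intra_mb,
                     int activity, int left_activity, int top_activity)
    {
        int qp = frame_qp;
        if (is_intra_mb)                 /* intra MBs are usually more visible */
            qp -= 1;
        int neighbour_avg = (left_activity + top_activity) / 2;
        if (activity * 2 < neighbour_avg)        /* much smoother than neighbours */
            qp -= 1;
        else if (activity > neighbour_avg * 2)   /* texture masks distortion */
            qp += 1;
        return clamp(qp, 1, 31);         /* MPEG-4 quantiser range */
    }

The intent is that perceptually sensitive macroblocks receive a finer quantiser while heavily textured ones, where distortion is masked, absorb the saved bits.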
APA, Harvard, Vancouver, ISO, and other styles
31

Huang, Chi-Hui, and 黃琪惠. "A Scalable Video Codec Based-on MPEG-4 Still Texture Coding." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/41091413557493086201.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Computer Science and Information Engineering
89
In this thesis, a novel wavelet-based scalable video coding technique based on the MPEG-4 still texture coding algorithm is presented. We apply motion estimation and compensation on the lowest-frequency subband and motion refinement on all the high-frequency subbands to effectively remove the temporal redundancy that exists between successive frames of a video sequence. The MPEG-4 still texture coding algorithm is adopted to code the intra frames and predicted error frames. The resulting coding efficiency outperforms most wavelet-based scalable video codecs. Moreover, our proposed codec generates fully embedded bitstreams and provides multiple scalabilities, such as spatial resolution, frame rate, distortion level, and bitrate scalabilities. The adoption of the wavelet transform gives our video codec its spatial scalability, which suggests that when spatial scalability is of great concern, a wavelet-based approach can be a good candidate. The temporal scalability comes from the careful design of the temporal coding pattern and the selective dropping of inter frames; through this means, temporal scalability is realized without introducing any overhead. By utilizing a bit-plane coding scheme, precise bitrate control and data rate scalability are achieved. Moreover, the nature of our video coding scheme allows the decoding data rate to be changed dynamically. The ability to automatically adjust the data rate to meet network loading is very appealing for network-oriented applications.
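The bit-plane coding that underlies the precise rate control mentioned above can be sketched as follows; this is a generic illustration (sign handling and entropy coding omitted), not the codec's actual implementation.

    #include <stdio.h>
    #include <stdlib.h>

    /* Illustrative bit-plane coding of quantised coefficient magnitudes:
     * emitting the most significant planes first yields an embedded stream
     * that can be truncated at any point, which is what gives bit-plane
     * schemes their fine-grained rate/SNR scalability. */
    void encode_bitplanes(const int *coeff, int n, int num_planes,
                          void (*emit_bit)(int bit))
    {
        for (int plane = num_planes - 1; plane >= 0; plane--) {
            for (int i = 0; i < n; i++) {
                int magnitude = abs(coeff[i]);
                emit_bit((magnitude >> plane) & 1);
            }
            /* A real coder would also code signs on first significance and
             * entropy-code each plane; truncating after any plane still
             * leaves a decodable, lower-quality approximation. */
        }
    }

    static void print_bit(int bit) { putchar(bit ? '1' : '0'); }

    int main(void)
    {
        int coeff[4] = { 13, -5, 2, 0 };
        encode_bitplanes(coeff, 4, 4, print_bit);  /* planes 3..0, MSB first */
        putchar('\n');
        return 0;
    }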
APA, Harvard, Vancouver, ISO, and other styles
32

Weng, Yong Quan, and 翁永泉. "Implementation of software MPEG-2 video codec and its related research." Thesis, 1996. http://ndltd.ncl.edu.tw/handle/57086026014706651613.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Computer Science and Information Engineering
84
In this thesis, a software video codec conforming to MP@ML of the MPEG-2 video coding standard (ISO/IEC 13818-2) is presented. The decoding/encoding capability for MP@HL and MP@H14 video bitstreams and the decoding capability for 422@ML video bitstreams are also preserved. Most modern microprocessors do not have enough computing power to decode an MPEG-2 video bitstream in real time. It is therefore important for a software MPEG-2 decoder to use the computing resources and bandwidth of a processor as efficiently as possible, which is discussed in detail in this thesis. Two algorithms that speed up the evaluation of the inverse discrete cosine transform are also presented. Motion estimation accounts for most of the execution time of an MPEG-2 encoding process. Five fast search algorithms for motion estimation, along with the most time-consuming full-range search, are implemented in the software MPEG-2 video encoder, and their performance is compared in different aspects, such as execution time, bitstream size, compression ratio, and video quality. The software MPEG-2 video decoder can decode bitstreams with a 4 Mbit/s bitrate (704x480 resolution) at 5.3 to 6.5 frames per second while running on a Pentium-133 system under DOS, or at 6.5 to 8 frames per second under Windows 95 with DirectDraw support. The decoding performance can be further improved by applying multimedia instruction sets, such as Intel MMX technology. On the encoder side, a frame can be encoded in an average of 1.3 seconds using the fastest of the six implemented motion estimation algorithms.
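Of the fast search algorithms compared in the thesis, the classic three-step search is representative; a simplified C sketch (ignoring frame-border clipping, with illustrative function names) is:

    #include <stdlib.h>
    #include <limits.h>

    /* Sum of absolute differences between a 16x16 block in the current
     * frame and a candidate block in the reference frame. */
    static int sad16(const unsigned char *cur, const unsigned char *ref,
                     int stride, int cx, int cy, int rx, int ry)
    {
        int sad = 0;
        for (int j = 0; j < 16; j++)
            for (int i = 0; i < 16; i++)
                sad += abs(cur[(cy + j) * stride + cx + i] -
                           ref[(ry + j) * stride + rx + i]);
        return sad;
    }

    /* Classic three-step search: test a centre point and its 8 neighbours
     * at a coarse step, recentre on the best match, halve the step, and
     * repeat.  Assumes the search window stays inside the frame. */
    void three_step_search(const unsigned char *cur, const unsigned char *ref,
                           int stride, int cx, int cy, int *best_dx, int *best_dy)
    {
        int dx = 0, dy = 0;
        for (int step = 4; step >= 1; step /= 2) {
            int best = INT_MAX, bdx = dx, bdy = dy;
            for (int oy = -step; oy <= step; oy += step)
                for (int ox = -step; ox <= step; ox += step) {
                    int s = sad16(cur, ref, stride, cx, cy,
                                  cx + dx + ox, cy + dy + oy);
                    if (s < best) { best = s; bdx = dx + ox; bdy = dy + oy; }
                }
            dx = bdx; dy = bdy;
        }
        *best_dx = dx; *best_dy = dy;
    }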
APA, Harvard, Vancouver, ISO, and other styles
33

Huang, Chih-wen, and 黃志文. "Real-time MPEG-4 Video CODEC Design and Realization on Programmable Processors." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/48830606794327253762.

Full text
Abstract:
Master's thesis
National Chung Cheng University
Institute of Computer Science and Information Engineering
93
In mobile multimedia products, MPEG-4 video compression plays an important role due to its low bit-rate and high quality. However, the MPEG-4 video codec consumes considerable power because of its high computational complexity. Currently, more and more RISC cores are used in mobile multimedia applications because of their low power consumption and multimedia extensions. Designing a fully standard-compliant MPEG-4 video codec with real-time performance on a RISC processor for embedded applications entails optimization to the maximum extent possible. This thesis describes the process of porting an MPEG-4 video codec to the UniCore platform and proposes algorithm-level optimization methods to improve the performance of the codec. Furthermore, it proposes platform-dependent optimization methods based on the features of the UniCore platform. With these methods, the MPEG-4 video encoder reaches 30 fps at CIF resolution on a 200 MHz UniCore platform, and the decoder can decode CIF-resolution video at 70 frames per second.
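As one example of the kind of algorithm-level optimization such a port relies on (an illustration, not necessarily one of the thesis's specific methods), an early-terminating SAD loop avoids completing hopeless motion-estimation candidates on a scalar RISC core:

    #include <stdlib.h>

    /* Compute the SAD of a 16x16 block row by row and abort as soon as the
     * partial sum exceeds the best SAD found so far, removing a large share
     * of the arithmetic in motion estimation without any SIMD support. */
    int sad16_early_exit(const unsigned char *cur, const unsigned char *ref,
                         int stride, int best_so_far)
    {
        int sad = 0;
        for (int j = 0; j < 16; j++) {
            for (int i = 0; i < 16; i++)
                sad += abs(cur[i] - ref[i]);
            if (sad >= best_so_far)     /* this candidate can no longer win */
                return sad;
            cur += stride;
            ref += stride;
        }
        return sad;
    }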
APA, Harvard, Vancouver, ISO, and other styles
34

Tzeng, Jiann-Shiun, and 曾建勳. "Robust Streaming of MPEG-4 FGS-Coded Video over 802.11b WLAN." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/19582035039151194844.

Full text
Abstract:
Master's thesis
National Tsing Hua University
Department of Electrical Engineering
92
Owing to the reduced cost of wireless devices and the mobility offered by wireless networks, 802.11 wireless local area networks (WLANs) have become popular in recent years. Although current WLANs are predominantly used for data transfer, the higher bandwidth provided by new WLAN technologies will lead to increasing use for multimedia transmission. Transmitting a video stream over WLANs, however, poses several challenges, including bandwidth variation and data loss. In order to provide reliable and efficient transmission of compressed video over WLANs, we propose to employ Fine-Granular-Scalability (FGS) coding for the compression of video data. Video coded with FGS can provide continuous video quality that adapts to bandwidth variations. Furthermore, we propose a novel strategy that incorporates unequal error protection for packets of the FGS enhancement layer and post-processing of the decoded video data. The compressed data of the FGS enhancement layer in each packet typically have different priorities. We protect the part of the data with higher priority and recover the most important information when a packet is lost during transmission. The simulation results show that the proposed error protection strategies can improve error resilience and enhance video quality under packet loss.
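A very rough C sketch of the unequal-protection idea is shown below; the packet layout and the prefix-duplication scheme are purely illustrative assumptions, since the thesis's actual protection mechanism is not reproduced here.

    #include <string.h>

    /* Illustrative unequal error protection for an FGS enhancement-layer
     * packet: the first bit-planes at the start of the payload matter most,
     * so only that prefix is copied into a small redundancy packet.  If the
     * original packet is lost, the receiver can still reconstruct the most
     * significant enhancement information. */
    typedef struct {
        unsigned char data[1500];
        int           len;
    } Packet;

    void build_redundancy(const Packet *src, int protected_len, Packet *red)
    {
        if (protected_len > src->len)
            protected_len = src->len;
        memcpy(red->data, src->data, (size_t)protected_len);
        red->len = protected_len;
    }

    /* On loss of the original packet, the decoder consumes only the
     * protected prefix and treats the remaining lower-priority bit-planes
     * as truncated, which FGS decoding tolerates by design. */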
APA, Harvard, Vancouver, ISO, and other styles
35

Ma, Xiao Feng. "Iterative joint source and channel decoding using turbo codes for MPEG-4 video transmission." Thesis, 2004. http://spectrum.library.concordia.ca/7887/1/MQ91076.pdf.

Full text
Abstract:
This thesis presents a novel iterative joint source and channel decoding scheme using turbo codes for MPEG-4 video transmission over noisy channels. The proposed scheme, on one hand, utilizes the channel soft outputs generated by a turbo decoder to assist syntax-based error concealment in the source decoder. On the other hand, the residual redundancy extracted by the source decoder is fed back to the channel decoder by modifying the extrinsic information exchanged between the two constituent MAP decoders of the turbo decoder, so as to improve the error performance of the turbo decoder. With the video packet mixer, the proposed scheme can correct most turbo coding blocks that contain a large number of bit errors. Simulation results show significant improvement in terms of BER, PSNR, and reconstructed video quality.
APA, Harvard, Vancouver, ISO, and other styles
36

Chen, Ping-Chun, and 陳炳君. "Emulation of the ATM VBR Channel for Supporting MPEG-coded Video Communication." Thesis, 1994. http://ndltd.ncl.edu.tw/handle/05209559787487560424.

Full text
Abstract:
Master's thesis
National Chung Cheng University
Institute of Electrical Engineering
82
Emulating the ATM VBR channel for MPEG-coded video means keeping the end-to-end delay T constant while preserving the original bit-rate fluctuation generated by the MPEG encoder. In this thesis, we present a buffering control scheme without a feedback mechanism to emulate the ATM VBR channel. During initialization, the maximum buffer emptying rate is determined according to the maximum buffering delay at the encoder and decoder buffers and the bit-rate range of the MPEG encoder. A message synchronization scheme at the receiver for supporting playout is analyzed. By using an estimation process and an adjustment process, we are able to resolve the delay jitter effect at the message level. However, the simulation results show that the jitter effect is not resolved at the frame level; further study is needed.
APA, Harvard, Vancouver, ISO, and other styles
37

Zheng, Shuo Jia, and 鄭碩佳. "A low cost audio/video editing system for the MPEG-1 coded bitstreams." Thesis, 1995. http://ndltd.ncl.edu.tw/handle/74817858452191153326.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Chao-Ming, Chen, and 陳照明. "A study on the Motion Estimation in the MPEG-4 Video Codec using cubic Spline Interpolation." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/11623118801729782735.

Full text
Abstract:
Master's thesis
Shu-Te University
Department of Computer Science and Information Engineering
95
Multimedia communication has become the main application of modern network systems. Among digital data, images and video have the largest data sizes; therefore, image and video compression algorithms are key components of a multimedia communication system. A well-designed interpolation can reduce image/video data size before compression. It has been shown that Cubic Spline Interpolation (CSI) is one of the best-performing interpolation algorithms. One of its applications is to combine it with the MPEG-4 video codec, resulting in a modified CSI_MPEG-4 video codec. It has been reported in the literature that the reconstructed video quality of the modified CSI_MPEG-4 video codec is better than that of the standard MPEG-4 video codec at the same bit rates. This thesis studies the half/full-pixel motion estimation algorithm in the modified CSI_MPEG-4 video codec. The performance of the half/full-pixel motion estimation algorithm in the modified CSI_MPEG-4 video codec is evaluated using various video clips with different resolutions. Experimental results show that using the half-pixel motion estimation algorithm in the modified CSI_MPEG-4 video codec does not improve video quality as substantially as it does in the standard MPEG-4 video codec. Furthermore, the benefit of using half-pixel motion estimation degrades rapidly as the encoded video bit rate increases. In other words, it is unnecessary to use half-pixel motion estimation in high bit-rate situations. These results can be applied to set the encoding parameters of the modified CSI_MPEG-4 video codec.
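A generic half-pixel refinement step of the kind evaluated in the thesis can be sketched in C as follows; the bilinear interpolation, the 16x16 block size, and the assumption that the reference frame is padded at its borders are illustrative simplifications, not the codec's actual code.

    #include <stdlib.h>
    #include <limits.h>

    /* Bilinearly interpolated reference sample at half-pel coordinates. */
    static int ref_halfpel(const unsigned char *ref, int stride,
                           int x2, int y2)          /* coords in half-pel units */
    {
        int x = x2 >> 1, y = y2 >> 1;
        int fx = x2 & 1, fy = y2 & 1;
        int a = ref[y * stride + x];
        int b = ref[y * stride + x + fx];
        int c = ref[(y + fy) * stride + x];
        int d = ref[(y + fy) * stride + x + fx];
        return (a + b + c + d + 2) >> 2;             /* bilinear average */
    }

    /* Evaluate the eight half-pel positions around the best integer-pel MV
     * and return the lowest-SAD position in half-pel units. */
    void refine_halfpel(const unsigned char *cur, const unsigned char *ref,
                        int stride, int cx, int cy,
                        int int_dx, int int_dy, int *out_dx2, int *out_dy2)
    {
        int best = INT_MAX, bx2 = int_dx * 2, by2 = int_dy * 2;
        for (int oy = -1; oy <= 1; oy++)
            for (int ox = -1; ox <= 1; ox++) {
                int dx2 = int_dx * 2 + ox, dy2 = int_dy * 2 + oy;
                int sad = 0;
                for (int j = 0; j < 16; j++)
                    for (int i = 0; i < 16; i++)
                        sad += abs(cur[(cy + j) * stride + cx + i] -
                                   ref_halfpel(ref, stride,
                                               (cx + i) * 2 + dx2,
                                               (cy + j) * 2 + dy2));
                if (sad < best) { best = sad; bx2 = dx2; by2 = dy2; }
            }
        *out_dx2 = bx2;
        *out_dy2 = by2;
    }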
APA, Harvard, Vancouver, ISO, and other styles
39

Ravi, Aruna. "Performance analysis and comparison of Dirac video codec with H.264 / MPEG-4 part 10 AVC." 2009. http://hdl.handle.net/10106/1740.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Σωτηρόπουλος, Κωνσταντίνος. "Υλοποίηση του MPEG-4 Simple Profile CODEC στην πλατφόρμα TMS320DM6437 για επεξεργασία βίντεο σε πραγματικό χρόνο" [Implementation of the MPEG-4 Simple Profile CODEC on the TMS320DM6437 platform for real-time video processing]. Thesis, 2013. http://hdl.handle.net/10889/7282.

Full text
Abstract:
This thesis was carried out within the Interdepartmental Postgraduate Programme in Signal Processing and Communication Systems at the Department of Physics, University of Patras. The objective of this project is the design and development of the MPEG-4 Simple Profile CODEC in the Simulink environment, so that the resulting DSP algorithm can be executed on the TMS320DM6437 EVM development platform. The first chapter defines the term real-time video coding, which is often misunderstood, and gives a brief description of digital signal processors: their typical characteristics, architecture, memory architecture, and the hardware elements that support the flow of a DSP program. It also presents the evolution of DSPs over time, showing how modern DSPs achieve better performance than their predecessors thanks to technological and architectural improvements such as smaller design rules, fast-access two-level caches, (E)DMA circuitry, and a wider bus system. The end of the chapter presents the architecture of the TMS320DM6437 EVM board and its hardware interfaces for video and audio input/output. The second chapter gives an extensive presentation of the concepts of video coding and decoding. It begins by depicting a general encoder/decoder model and then describes the temporal model, which predicts the current frame from the previous one and explains methods for macroblock motion estimation and motion compensation. Next, the image model is described, which consists of three components: transformation (decorrelation and data compression), quantization (reducing the accuracy of the transformed data), and reordering (grouping significant values together). After reordering, the transform coefficients can be further coded using variable-length coding (Huffman coding) or arithmetic coding. The end of the chapter describes the hybrid DPCM/DCT CODEC model on which the implementation of the MPEG-4 Simple Profile CODEC is based. The third chapter describes the characteristics of the MPEG-4 Simple Profile CODEC, the tools it uses, the notion of an "object" in video coding and decoding, and the profiles and levels supported by this coding standard; it also describes how rectangular frames are coded and presents the Simulink model of the MPEG-4 Simple Profile CODEC, which is the basis for the DSP algorithm executed on the development platform. The fourth chapter presents the implementation of the MPEG-4 Simple Profile encoder and decoder and their constituent subsystems. The fifth chapter describes the interaction between the user and the CODEC, the parameters that must be provided as inputs, and how the system can be used.
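For reference, the per-block flow of the hybrid DPCM/DCT model described above can be sketched in C as below; dct8x8, idct8x8 and entropy_code are assumed helper routines (declared but not defined here), and the simple uniform quantiser is an illustrative placeholder rather than the codec's actual implementation.

    /* Minimal sketch of the hybrid DPCM/DCT coding loop on which the
     * Simulink model is based. */
    void dct8x8(const int in[64], int out[64]);     /* assumed helpers */
    void idct8x8(const int in[64], int out[64]);
    void entropy_code(const int levels[64]);

    void code_block(const int current[64], const int prediction[64],
                    int reconstructed[64], int qp)
    {
        int residual[64], coeff[64], level[64], deq[64], rec_res[64];

        for (int i = 0; i < 64; i++)               /* DPCM: subtract prediction */
            residual[i] = current[i] - prediction[i];

        dct8x8(residual, coeff);                    /* transform */

        for (int i = 0; i < 64; i++) {              /* quantise / dequantise */
            level[i] = coeff[i] / (2 * qp);
            deq[i]   = level[i] * (2 * qp);
        }

        entropy_code(level);                        /* reorder + VLC in practice */

        idct8x8(deq, rec_res);                      /* decoder-side reconstruction */
        for (int i = 0; i < 64; i++)                /* kept for future prediction */
            reconstructed[i] = prediction[i] + rec_res[i];
    }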
APA, Harvard, Vancouver, ISO, and other styles
41

Yi-Shin, Tung, and 童怡新. "The Design and Implementation of an MPEG-4 Based Universal Scalable Video Codec in Layered Path-Tree Structure." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/32774070331487808695.

Full text
Abstract:
博士
國立臺灣大學
資訊工程學研究所
90
Video streaming is now in widespread use, and ubiquitous access to the same video content under different surroundings is becoming more and more important and practical. This trend creates a demand for modern codec designs to provide a new functionality: scalability. Codec designers now need to care about efficiency and scalability at the same time. Scalability enables a single video stream to be scaled up or down so that it can serve different application scenarios. In this thesis, we first present an MPEG-4 universal scalable coder based on a novel "path-tree" layered enhancement structure, in which the following scalabilities can be achieved without losing much coding efficiency: 1) rate scalability, which enables coded streams to serve clients with varying bandwidths; 2) spatial and temporal scalabilities, which allow content to be represented at different resolutions and frame rates; 3) SNR scalability, which provides the capability of progressive display; and 4) computational scalability, which allows video to play well on machines with different computation power or in multitasking environments. Moreover, by properly integrating the fine granularity scalability (FGS) specified by MPEG-4 into the leaf nodes of the designed path-tree, fine granularity rate adjustment (FGR) can be achieved. In short, by taking advantage of hybrid source data reduction and quality control in the layered coding, the proposed universal scalable system can produce a highly rate-scalable and hybrid-functional stream offline. The generated scalable streams can serve all types of clients and are especially suitable for the bandwidth-varying Internet and multitasking PCs. A scalable system inevitably presents the content at distinct quality levels. A well-performing scalable stream, however, requires that any enhancement (whether from additional bandwidth, computation, or the release of any other constraint) be used to improve the video quality as much as possible, where quality means the perceptual quality assessed by human viewers. Thus, we next introduce human-perception-based strategies to guide the scalable encoding process. Depending on the motion activity of the content, video smoothness and individual frame quality affect the presentation quality to different degrees. In addition, the masking effect can conceal some distortions, and different region characteristics imply different precision requirements in the coding process. Based on these two observations, the spatial-temporal quality tradeoff and region sensitivity are taken into consideration when generating the universal scalable streams, so the overall presentation becomes more adaptive to the content characteristics. The experimental results show that the proposed coder provides multiple scalability functionalities while maintaining good coding efficiency. Moreover, its content-adaptive scalable compression enables the proposed coder to provide better quality of service for various applications.
APA, Harvard, Vancouver, ISO, and other styles
42

Ku, Chun-wei, and 古君偉. "Design of H.264/MPEG-4 AVC Intra Codec for High Definition Size Still Image and Video Applications." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/70837881410308129559.

Full text
Abstract:
碩士
國立交通大學
電子工程系所
94
Over recent decades, digital video technology has become widely used and a necessary part of our daily lives. With the development of digital signal processing and the demand for better coding performance, H.264/AVC is regarded as the next-generation international video coding standard. The new standard achieves significant bitrate reduction compared to earlier standards while maintaining video quality through its powerful coding techniques. Among these techniques, spatial intra coding is a newly introduced coding tool with high coding efficiency. This high coding efficiency makes intra coding suitable not only for single-picture video coding but also for still image compression, and even competitive with the latest image coding standards such as JPEG2000. However, due to the complicated coding techniques, the computational complexity of intra coding is much higher than in previous standards as well. Thus, how to reduce the complexity and design a highly efficient intra coder or decoder without much performance degradation is an important issue. In this thesis, we contribute two hardware implementations to address this question: an intra frame codec and a fast intra frame encoder. We first propose a baseline intra frame codec architecture with both algorithm-level and system-level optimization. To reduce hardware cost and increase processing speed while providing nearly the same video quality, the hardware-oriented algorithm removes the area-costly plane prediction and enhances the mode decision process with a more accurate cost function. In the architecture design, in addition to fast module implementation, the processing is arranged in a macroblock-level pipeline together with three scheduling techniques to avoid idle cycles and improve data throughput. The complete codec design supports real-time encoding of high-definition 1280x720 video at 30 fps when clocked at 117 MHz, and high-definition 1920x1080 decoding at 58 MHz. The other work is a baseline intra frame encoder targeted at low-power operation, with techniques such as a fast mode decision algorithm and variable-pixel parallelism. The mode decision process is shortened by the proposed modified three-step algorithm. In addition, the variable-pixel parallel datapath effectively saves almost half of the processing cycles and leads to a lower frequency requirement. With the interlaced scheduling technique and three low-power design strategies, the new design has a smaller chip area than previous designs and supports real-time encoding of high-definition 1280x720 video at 30 fps at only 61 MHz. In brief, our contributions to H.264/AVC intra coding can be divided into two parts: the intra frame codec, which integrates both encoding and decoding processes at modest hardware cost with improved processing speed, and the fast intra frame encoder, which reduces computational complexity, lowers the frequency requirement, and applies low-power design strategies.
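To make the intra mode decision concrete, a reduced C sketch over three of the nine H.264 4x4 prediction modes is given below; it shows only the basic predict-and-cost structure and is not the thesis's modified three-step algorithm or its hardware cost function.

    #include <stdlib.h>
    #include <limits.h>

    /* Illustrative intra 4x4 mode decision over vertical, horizontal and
     * DC prediction, using SAD as the cost metric. */
    enum { MODE_VERTICAL, MODE_HORIZONTAL, MODE_DC };

    static void predict4x4(int mode, const unsigned char top[4],
                           const unsigned char left[4], unsigned char pred[16])
    {
        int dc = 0;
        for (int i = 0; i < 4; i++) dc += top[i] + left[i];
        dc = (dc + 4) >> 3;                       /* rounded average */
        for (int y = 0; y < 4; y++)
            for (int x = 0; x < 4; x++)
                pred[y * 4 + x] = (mode == MODE_VERTICAL)   ? top[x]
                                : (mode == MODE_HORIZONTAL) ? left[y]
                                : (unsigned char)dc;
    }

    int choose_intra4x4_mode(const unsigned char block[16],
                             const unsigned char top[4],
                             const unsigned char left[4])
    {
        int best_mode = MODE_DC, best_cost = INT_MAX;
        for (int mode = 0; mode < 3; mode++) {
            unsigned char pred[16];
            int cost = 0;
            predict4x4(mode, top, left, pred);
            for (int i = 0; i < 16; i++)
                cost += abs(block[i] - pred[i]);  /* SAD cost */
            if (cost < best_cost) { best_cost = cost; best_mode = mode; }
        }
        return best_mode;
    }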
APA, Harvard, Vancouver, ISO, and other styles
43

Dickey, Brian. "Hardware Implementation of a High Speed Deblocking Filter for the H.264 Video Codec." Thesis, 2012. http://hdl.handle.net/10012/6645.

Full text
Abstract:
H.264/MPEG-4 Part 10, or Advanced Video Coding (AVC), is a standard for video compression and is currently one of the most widely used formats for recording, compressing, and distributing high-definition video. One feature of the AVC codec is the inclusion of an in-loop deblocking filter, whose goal is to remove blocking artifacts at macroblock boundaries. However, due to the complexity of the deblocking algorithm, the filter can easily account for one-third of the computational complexity of a decoder. In this thesis, a modification to the deblocking algorithm given in the AVC standard is presented. This modification allows the filtering of a macroblock to finish twenty clock cycles faster than previous single-filter designs. This thesis also presents a hardware architecture of the H.264 deblocking filter for use in an H.264 decoder. The developed architecture allows the filtering of video streams using 4:2:2 chroma subsampling and 10-bit pixel precision in real time. The filter was described in VHDL and synthesized for a Spartan-6 FPGA device. Timing analysis showed that it was capable of filtering a macroblock with 4:2:0 chroma subsampling in 124 clock cycles and with 4:2:2 chroma subsampling in 162 clock cycles. The filter can also provide real-time deblocking of HDTV video (1920x1080) at up to 988 frames per second.
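For context, the boundary-strength (bS) decision that drives the deblocking filter can be sketched in C as follows; this is a simplified software rendering of the standard's rules (bi-prediction and field coding ignored), not the thesis's hardware design.

    #include <stdlib.h>

    /* Simplified derivation of the H.264 boundary strength for a luma edge
     * between blocks p and q: intra blocks on a macroblock edge give the
     * strongest filtering, while purely inter blocks with identical
     * references, similar motion and no coefficients are not filtered. */
    typedef struct {
        int is_intra;
        int has_coeffs;     /* nonzero residual coefficients present */
        int ref_idx;        /* reference picture index               */
        int mv_x, mv_y;     /* motion vector in quarter-pel units    */
    } Block4x4;

    int boundary_strength(const Block4x4 *p, const Block4x4 *q, int is_mb_edge)
    {
        if (p->is_intra || q->is_intra)
            return is_mb_edge ? 4 : 3;
        if (p->has_coeffs || q->has_coeffs)
            return 2;
        if (p->ref_idx != q->ref_idx ||
            abs(p->mv_x - q->mv_x) >= 4 ||      /* >= one integer pel */
            abs(p->mv_y - q->mv_y) >= 4)
            return 1;
        return 0;                               /* edge is not filtered */
    }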
APA, Harvard, Vancouver, ISO, and other styles
44

"TV interativa baseada na inclusão de informações hipermidia em videos no padrão MPEG." Tese, Biblioteca Digital da Unicamp, 2005. http://libdigi.unicamp.br/document/?code=vtls000347907.

Full text
APA, Harvard, Vancouver, ISO, and other styles