Scientific literature on the topic "Video quality prediction"

Create a correct reference in APA, MLA, Chicago, Harvard, and several other styles

Choose a source:

Browse the thematic lists of journal articles, books, theses, conference reports, and other academic sources on the topic "Video quality prediction".

Next to every source in the list of references there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication as a PDF and read its abstract online when this information is included in the metadata.

Journal articles on the topic "Video quality prediction"

1

Liu, Yu-xin, Ragip Kurceren, and Udit Budhia. "Video classification for video quality prediction." Journal of Zhejiang University-SCIENCE A 7, no. 5 (May 2006): 919–26. http://dx.doi.org/10.1631/jzus.2006.a0919.

2

Saad, Michele A., Alan C. Bovik, and Christophe Charrier. "Blind Prediction of Natural Video Quality." IEEE Transactions on Image Processing 23, no. 3 (March 2014): 1352–65. http://dx.doi.org/10.1109/tip.2014.2299154.

3

Anegekuh, Louis, Lingfen Sun, and Emmanuel Ifeachor. "Encoding and video content based HEVC video quality prediction." Multimedia Tools and Applications 74, no. 11 (December 22, 2013): 3715–38. http://dx.doi.org/10.1007/s11042-013-1795-z.

4

Barkowsky, Marcus, Iñigo Sedano, Kjell Brunnström, Mikołaj Leszczuk, and Nicolas Staelens. "Hybrid video quality prediction: reviewing video quality measurement for widening application scope." Multimedia Tools and Applications 74, no. 2 (April 24, 2014): 323–43. http://dx.doi.org/10.1007/s11042-014-1978-2.

5

Hewage, C. T. E. R., S. T. Worrall, S. Dogan, and A. M. Kondoz. "Prediction of stereoscopic video quality using objective quality models of 2-D video." Electronics Letters 44, no. 16 (2008): 963. http://dx.doi.org/10.1049/el:20081562.

6

Chen, Li-Heng, Christos G. Bampis, Zhi Li, Joel Sole, and Alan C. Bovik. "Perceptual Video Quality Prediction Emphasizing Chroma Distortions." IEEE Transactions on Image Processing 30 (2021): 1408–22. http://dx.doi.org/10.1109/tip.2020.3043127.

7

Khan, Asiya, Lingfen Sun, Emmanuel Ifeachor, Jose-Oscar Fajardo, Fidel Liberal, and Harilaos Koumaras. "Video Quality Prediction Models Based on Video Content Dynamics for H.264 Video over UMTS Networks." International Journal of Digital Multimedia Broadcasting 2010 (2010): 1–17. http://dx.doi.org/10.1155/2010/608138.

Abstract:
The aim of this paper is to present video quality prediction models for objective, non-intrusive prediction of H.264 encoded video for all content types, combining parameters at both the physical and application layers over Universal Mobile Telecommunications System (UMTS) networks. In order to characterize the Quality of Service (QoS) level, a learning model based on an Adaptive Neural Fuzzy Inference System (ANFIS) and a second model based on non-linear regression analysis are proposed to predict video quality in terms of the Mean Opinion Score (MOS). The objective of the paper is two-fold: first, to find the impact of QoS parameters on end-to-end video quality for H.264 encoded video; second, to develop learning models based on ANFIS and non-linear regression analysis to predict video quality over UMTS networks while considering the impact of radio link loss models. The loss models considered are 2-state Markov models. Both models are trained with a combination of physical and application layer parameters and validated on an unseen dataset. Preliminary results show that good prediction accuracy was obtained from both models. The work should help in the development of a reference-free video prediction model and QoS control methods for video over UMTS networks.
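As a concrete illustration of the kind of model this abstract describes, the Python sketch below fits a hypothetical non-linear regression from application/physical-layer QoS parameters to MOS on synthetic data. The model form, parameter ranges, coefficients, and the use of NumPy/SciPy are illustrative assumptions; they are not taken from the paper, which additionally trains an ANFIS model.

```python
# Illustrative sketch only: fits a hypothetical non-linear MOS model to synthetic data.
# The model form, parameter ranges and coefficients are invented; they are not the
# ANFIS or regression models published by the authors.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Synthetic application/physical-layer QoS samples (assumed ranges, not from the paper):
sbr = rng.uniform(50, 500, 200)     # sender bitrate, kbps
fr = rng.uniform(5, 30, 200)        # frame rate, fps
bler = rng.uniform(0.0, 0.2, 200)   # block error rate, fraction

def mos_model(X, a, b, c, d):
    """Hypothetical non-linear mapping from QoS parameters to MOS."""
    sbr, fr, bler = X
    return a + b * np.log(sbr) - c * np.sqrt(fr) - d * 100.0 * bler

# Fake "subjective" MOS labels, generated from the same model plus noise.
mos = mos_model((sbr, fr, bler), 1.0, 0.9, 0.35, 0.08) + rng.normal(0, 0.2, 200)

params, _ = curve_fit(mos_model, (sbr, fr, bler), mos, p0=[1.0, 1.0, 0.1, 0.01])
predicted = mos_model((sbr, fr, bler), *params)
rmse = float(np.sqrt(np.mean((predicted - mos) ** 2)))
print("fitted coefficients:", np.round(params, 3), "RMSE:", round(rmse, 3))
```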
8

Koumaras, Harilaos, C. H. Lin, C. K. Shieh, and Anastasios Kourtis. "A framework for end-to-end video quality prediction of MPEG video." Journal of Visual Communication and Image Representation 21, no. 2 (February 2010): 139–54. http://dx.doi.org/10.1016/j.jvcir.2009.07.005.

9

Anegekuh, Louis, Lingfen Sun, Emmanuel Jammeh, Is-Haka Mkwawa, and Emmanuel Ifeachor. "Content-Based Video Quality Prediction for HEVC Encoded Videos Streamed Over Packet Networks." IEEE Transactions on Multimedia 17, no. 8 (August 2015): 1323–34. http://dx.doi.org/10.1109/tmm.2015.2444098.

10

Stojanović, Nenad, Boban Bondžulić, Boban Pavlović, Marko Novčić, and Dimitrije Bujaković. "Improving the Prediction Accuracy of Objective Video Quality Evaluation." Acta Polytechnica Hungarica 17, no. 7 (2020): 219–32. http://dx.doi.org/10.12700/aph.17.7.2020.7.12.

More sources

Theses on the topic "Video quality prediction"

1

Khan, Asiya. "Video quality prediction for video over wireless access networks (UMTS and WLAN)." Thesis, University of Plymouth, 2011. http://hdl.handle.net/10026.1/893.

Abstract:
Transmission of video content over wireless access networks (in particular, Wireless Local Area Networks (WLAN) and Third Generation Universal Mobile Telecommunication System (3G UMTS)) is growing exponentially and gaining popularity, and is predicted to expose new revenue streams for mobile network operators. However, the success of these video applications over wireless access networks depends very much on meeting the user's Quality of Service (QoS) requirements. Thus, it is highly desirable to be able to predict and, if appropriate, to control video quality to meet users' QoS requirements. Video quality is affected by distortions caused by the encoder and the wireless access network. The impact of these distortions is content dependent, but this feature has not been widely used in existing video quality prediction models. The main aim of the project is the development of novel and efficient models for video quality prediction in a non-intrusive way for low-bitrate and low-resolution videos, and to demonstrate their application in QoS-driven adaptation schemes for mobile video streaming applications. This led to the five main contributions of the thesis, as follows: (1) A thorough understanding of the relationships between video quality, wireless access network (UMTS and WLAN) parameters (e.g. packet/block loss, mean burst length and link bandwidth), encoder parameters (e.g. sender bitrate, frame rate) and content type is provided. An understanding of the relationships and interactions between them and their impact on video quality is important, as it provides a basis for the development of non-intrusive video quality prediction models. (2) A new content classification method is proposed based on statistical tools, as content type was found to be the most important parameter. (3) Efficient regression-based and artificial neural network-based learning models are developed for video quality prediction over WLAN and UMTS access networks. The models are lightweight (they can be implemented in real-time monitoring) and provide a measure of user-perceived quality without time-consuming subjective tests. The models have potential applications in several other areas, including QoS control and optimization in network planning and content provisioning for network/service providers. (4) The applications of the proposed regression-based models are investigated in (i) the optimization of content provisioning and network resource utilization and (ii) a new fuzzy sender-bitrate adaptation scheme at the sender side over WLAN and UMTS access networks. (5) Finally, Internet-based subjective tests that capture distortions caused by the encoder and the wireless access network for different types of content were designed. The database of subjective results has been made available to the research community, as there is a lack of subjective video quality assessment databases.
2

Jung, Agata. "Comparison of Video Quality Assessment Methods." Thesis, Blekinge Tekniska Högskola, Institutionen för tillämpad signalbehandling, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-15062.

Abstract:
Context: The newest video coding standard, High Efficiency Video Coding (HEVC), should have an appropriate coder to fully use its potential. There are many video quality assessment methods, and these methods are necessary to establish the quality of a video. Objectives: This thesis compares video quality assessment methods; the objective is to find out which objective method is the most similar to the subjective method. The videos used in the tests are encoded with the H.265/HEVC standard. Methods: The MSE, PSNR, and SSIM methods were tested with purpose-built MATLAB software, while the VQM method was tested with downloaded software. Results and conclusions: For videos watched on a mobile device, PSNR is the most similar to the subjective metric; however, for videos watched on a television screen, VQM is the most similar to the subjective metric. Keywords: Video Quality Assessment, Video Quality Prediction, Video Compression, Video Quality Metrics
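To make the kind of comparison described above concrete, here is a minimal NumPy/SciPy sketch of a PSNR computation plus the Pearson and Spearman correlations that are typically used to measure how well an objective metric agrees with subjective MOS. The thesis itself used MATLAB and the VQM tool; all frames and scores below are placeholders.

```python
# Minimal NumPy sketch of MSE/PSNR and of checking how well an objective metric
# tracks subjective scores. The thesis used MATLAB and the VQM tool; this only
# illustrates the same ideas, and all data here are made up.
import numpy as np
from scipy.stats import pearsonr, spearmanr

def mse(ref, dist):
    ref = ref.astype(np.float64)
    dist = dist.astype(np.float64)
    return np.mean((ref - dist) ** 2)

def psnr(ref, dist, peak=255.0):
    m = mse(ref, dist)
    return np.inf if m == 0 else 10.0 * np.log10(peak ** 2 / m)

# Toy example: a random 8-bit luma plane and a noisy copy of it.
rng = np.random.default_rng(1)
frame = rng.integers(0, 256, size=(144, 176), dtype=np.uint8)
noisy = np.clip(frame + rng.normal(0, 5, frame.shape), 0, 255).astype(np.uint8)
print("PSNR of noisy frame: %.2f dB" % psnr(frame, noisy))

# Agreement between an objective metric and subjective MOS is usually reported as
# Pearson (linear) and Spearman (rank) correlation over a set of test sequences.
psnr_scores = np.array([28.1, 31.4, 33.0, 35.2, 38.7, 41.3])   # made-up values
mos_scores = np.array([2.1, 2.8, 3.1, 3.6, 4.2, 4.5])          # made-up values
print("Pearson:", round(pearsonr(psnr_scores, mos_scores)[0], 3))
print("Spearman:", round(spearmanr(psnr_scores, mos_scores)[0], 3))
```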
3

Anegekuh, Louis. "Video content-based QoE prediction for HEVC encoded videos delivered over IP networks." Thesis, University of Plymouth, 2015. http://hdl.handle.net/10026.1/3377.

Abstract:
The recently released High Efficiency Video Coding (HEVC) standard, which halves the transmission bandwidth requirement of encoded video for almost the same quality when compared to H.264/AVC, and the availability of increased network bandwidth (e.g. from 2 Mbps for 3G networks to almost 100 Mbps for 4G/LTE) have led to the proliferation of video streaming services. Based on these major innovations, the prevalence and diversity of video application are set to increase over the coming years. However, the popularity and success of current and future video applications will depend on the perceived quality of experience (QoE) of end users. How to measure or predict the QoE of delivered services becomes an important and inevitable task for both service and network providers. Video quality can be measured either subjectively or objectively. Subjective quality measurement is the most reliable method of determining the quality of multimedia applications because of its direct link to users’ experience. However, this approach is time consuming and expensive and hence the need for an objective method that can produce results that are comparable with those of subjective testing. In general, video quality is impacted by impairments caused by the encoder and the transmission network. However, videos encoded and transmitted over an error-prone network have different quality measurements even under the same encoder setting and network quality of service (NQoS). This indicates that, in addition to encoder settings and network impairment, there may be other key parameters that impact video quality. In this project, it is hypothesised that video content type is one of the key parameters that may impact the quality of streamed videos. Based on this assertion, parameters related to video content type are extracted and used to develop a single metric that quantifies the content type of different video sequences. The proposed content type metric is then used together with encoding parameter settings and NQoS to develop content-based video quality models that estimate the quality of different video sequences delivered over IP-based network. This project led to the following main contributions: (1) A new metric for quantifying video content type based on the spatiotemporal features extracted from the encoded bitstream. (2) The development of novel subjective test approach for video streaming services. (3) New content-based video quality prediction models for predicting the QoE of video sequences delivered over IP-based networks. The models have been evaluated using subjective and objective methods.
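The thesis derives its content-type metric from spatio-temporal features of the encoded bitstream; those features are not reproduced here. As a generic, pixel-domain illustration of spatio-temporal content features, the sketch below computes the classical spatial information (SI) and temporal information (TI) measures in the spirit of ITU-T P.910, which are commonly used to characterise video content type. NumPy/SciPy and the toy frames are assumptions of the example.

```python
# Generic sketch of spatial information (SI) and temporal information (TI) features,
# in the spirit of ITU-T P.910, often used to characterise video content type.
# This is NOT the bitstream-level metric developed in the thesis; it only
# illustrates the kind of spatio-temporal features involved.
import numpy as np
from scipy.ndimage import sobel

def spatial_information(frame):
    """SI of one luma frame: std. dev. of the Sobel gradient magnitude."""
    f = frame.astype(np.float64)
    gx, gy = sobel(f, axis=1), sobel(f, axis=0)
    return np.hypot(gx, gy).std()

def temporal_information(prev_frame, frame):
    """TI between consecutive luma frames: std. dev. of the frame difference."""
    return (frame.astype(np.float64) - prev_frame.astype(np.float64)).std()

def si_ti(frames):
    """Return (max SI, max TI) over a sequence of luma frames, as in P.910."""
    si = max(spatial_information(f) for f in frames)
    ti = max(temporal_information(a, b) for a, b in zip(frames, frames[1:]))
    return si, ti

# Toy sequence of random frames, just to show the call pattern.
rng = np.random.default_rng(0)
frames = [rng.integers(0, 256, (72, 88)).astype(np.uint8) for _ in range(10)]
print("SI=%.1f  TI=%.1f" % si_ti(frames))
```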
4

Alreshoodi, Mohammed A. M. "Prediction of quality of experience for video streaming using raw QoS parameters." Thesis, University of Essex, 2016. http://repository.essex.ac.uk/16566/.

Abstract:
Along with the rapid growth in consumer adoption of modern portable devices, video streaming is expected to dominate a large share of global Internet traffic in the near future. Today, user experience is becoming a reliable indicator for video service providers and telecommunication operators to convey overall end-to-end system functioning. Towards this, there is a profound need for efficient Quality of Experience (QoE) monitoring and prediction. QoE is a subjective metric which deals with user perception and can vary due to user expectation and context. However, available QoE measurement techniques that adopt a full-reference method are impractical in real-time transmission since they require the original video sequence to be available at the receiver's end. QoE prediction, however, requires a firm understanding of those Quality of Service (QoS) factors that are the most influential on QoE. The main aim of this thesis work is the development of novel and efficient models for video quality prediction in a non-intrusive way and the demonstration of their application in QoE-enabled optimisation schemes for video delivery. In this thesis, the correlation between QoS and QoE is utilized to objectively estimate the QoE. For this, both objective and subjective methods were used to create datasets that represent the correlation between QoS parameters and measured QoE. Firstly, the impact of selected QoS parameters from both the encoding and network levels on video QoE is investigated; the obtained QoS/QoE correlation is backed by thorough statistical analysis. Secondly, two novel hybrid non-reference models for predicting video quality are developed using fuzzy logic inference systems (FIS) as a learning-based technique. Finally, two applications of the developed FIS prediction model are demonstrated to show how QoE can be used to optimise video delivery.
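For readers unfamiliar with fuzzy inference, the toy sketch below shows the general shape of a rule-based QoS-to-MOS mapping. The membership functions, rules, output constants, and parameter ranges are invented for illustration; they are not the hybrid FIS models developed in the thesis.

```python
# Toy zero-order Sugeno-style fuzzy inference sketch mapping two QoS inputs to a MOS
# estimate. Membership functions, rules and constants are invented for illustration;
# they are not the FIS models developed in the thesis.
def tri(x, a, b, c):
    """Triangular membership function with support (a, c) and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def predict_mos(packet_loss_pct, bitrate_kbps):
    loss_low, loss_high = tri(packet_loss_pct, -1, 0, 3), tri(packet_loss_pct, 1, 5, 20)
    rate_low, rate_high = tri(bitrate_kbps, 0, 100, 400), tri(bitrate_kbps, 200, 600, 2000)

    # Each rule: (firing strength, consequent MOS constant).
    rules = [
        (min(loss_low, rate_high), 4.5),   # low loss, high bitrate  -> excellent
        (min(loss_low, rate_low), 3.2),    # low loss, low bitrate   -> fair
        (min(loss_high, rate_high), 2.5),  # high loss, high bitrate -> poor
        (min(loss_high, rate_low), 1.5),   # high loss, low bitrate  -> bad
    ]
    total = sum(w for w, _ in rules)
    return 3.0 if total == 0 else sum(w * m for w, m in rules) / total

print(predict_mos(0.5, 800))   # light loss, generous bitrate
print(predict_mos(8.0, 150))   # heavy loss, starved bitrate
```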
5

Wu, Robin. "Utility Maximization of Machine Learning for Bandwidth Prediction over DASH." University of Cincinnati / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1613749658784292.

6

Davies, Sam J. C. "Prediction of human gaze patterns for variable quality video coding and its application to open sign language." Thesis, University of Bristol, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.503925.

Abstract:
Technological advances in telecommunications and computing power have driven a massive increase in the availability of digital video, both in the traditional broadcast environment and in on-demand scenarios such as the internet and IPTV. In order to support this increase, video compression technologies have developed both to reduce bitrates and to utilise the additional computing power available. The next step for video compression is commonly thought to be the exploitation of the Human Visual System (HVS) in perceptual coding, although this continues to suffer from the difficulty of evaluating the quality of compressed video. This thesis proposes a perceptual video compression framework, from quality estimation, through gaze estimation, to variable quality coding.
7

Roitzsch, Michael. "Slice-Level Trading of Quality and Performance in Decoding H.264 Video." Master's thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2010. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-26472.

Abstract:
When a demanding video decoding task requires more CPU resources than are available, playback degrades ungracefully today: the decoder skips frames selected arbitrarily or by simple heuristics, which the viewer notices as jerky motion in the good case or as images completely breaking up in the bad case. The latter can happen due to missing reference frames. This thesis provides a way to schedule individual decoding tasks based on a cost-versus-performance trade-off. Therefore, I will present a way to preprocess a video, generating estimates of the cost in terms of execution time and the performance in terms of perceived visual quality. The granularity of the scheduling decision is a single slice, which leads to a much more fine-grained approach than dealing with entire frames. Together with an actual scheduler implementation that uses the generated estimates, this work allows for higher perceived-quality video playback in case of CPU overload.
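The core idea, spending a limited CPU budget on the slices that buy the most perceived quality, can be illustrated with a simple greedy benefit-per-cost heuristic. The sketch below is not the scheduler implemented in the thesis (which also has to respect reference dependencies between frames); the slice names and per-slice estimates are invented.

```python
# Greedy sketch of the underlying idea: given per-slice estimates of decoding cost
# (execution time) and benefit (perceived-quality contribution), pick the slices to
# decode under a CPU-time budget. This is a simple benefit-per-cost heuristic, not
# the scheduler implemented in the thesis.
from dataclasses import dataclass

@dataclass
class Slice:
    name: str
    cost_ms: float      # estimated decoding time
    benefit: float      # estimated perceived-quality contribution

def schedule(slices, budget_ms):
    chosen, spent = [], 0.0
    # Highest benefit per millisecond first.
    for s in sorted(slices, key=lambda s: s.benefit / s.cost_ms, reverse=True):
        if spent + s.cost_ms <= budget_ms:
            chosen.append(s.name)
            spent += s.cost_ms
    return chosen, spent

slices = [
    Slice("I0/slice0", 6.0, 10.0),   # reference data: high benefit
    Slice("P1/slice0", 3.0, 4.0),
    Slice("B2/slice0", 2.5, 1.5),    # non-reference B slice: cheap to drop
    Slice("B3/slice0", 2.5, 1.2),
]
print(schedule(slices, budget_ms=10.0))
```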
8

Javadtalab, Abbas. "An End-to-End Solution for High Definition Video Conferencing over Best-Effort Networks." Thesis, Université d'Ottawa / University of Ottawa, 2015. http://hdl.handle.net/10393/31954.

Abstract:
Video streaming applications over best-effort networks, such as the Internet, have become very popular among Internet users. Watching live sports and news, renting movies, watching clips online, making video calls, and participating in videoconferences are typical video applications that millions of people use daily. One of the most challenging aspects of video communication is the proper transmission of video in various network bandwidth conditions. Currently, various devices with different processing powers and various connection speeds (2G, 3G, Wi-Fi, and LTE) are used to access video over the Internet, which offers best-effort services only. Skype, ooVoo, Yahoo Messenger, and Zoom are some well-known applications employed on a daily basis by people throughout the world; however, best-effort networks are characterized by dynamic and unpredictable changes in the available bandwidth, which adversely affect the quality of the video. For the average consumer, there is no guarantee of receiving an exact amount of bandwidth for sending or receiving video data. Therefore, the video delivery system must use a bandwidth adaptation mechanism to deliver video content properly. Otherwise, bandwidth variations will lead to degradation in video quality or, in the worst case, disrupt the entire service. This is especially problematic for videoconferencing (VC) because of the bulkiness of the video, the stringent bandwidth demands, and the delay constraints. Furthermore, for business grade VC, which uses high definition videoconferencing (HDVC), user expectations regarding video quality are much higher than they are for ordinary VC. To manage network fluctuations and handle the video traffic, two major components in the system should be improved: the video encoder and the congestion control. The video encoder is responsible for compressing raw video captured by a camera and generating a bitstream. In addition to the efficiency of the encoder and compression speed, its output flow is also important. Though the nature of video content may make it impossible to generate a constant bitstream for a long period of time, the encoder must generate a flow around the given bitrate. While the encoder generates the video traffic around the given bitrate, congestion management plays a key role in determining the current available bandwidth. This can be done by analyzing the statistics of the sent/received packets, applying mathematical models, updating parameters, and informing the encoder. The performance of the whole system is related to the in-line collaboration of the encoder and the congestion management, in which the congestion control system detects and calculates the available bandwidth for a specific period of time, preferably per incoming packet, and informs rate control (RC) to adapt its bitrate in a reasonable time frame, so that the network oscillations do not affect the perceived quality on the decoder side and do not impose adverse effects on the video session. To address these problems, this thesis proposes a collaborative management architecture that monitors the network situation and manages the encoded video rate. The goal of this architecture is twofold: First, it aims to monitor the available network bandwidth, to predict network behavior and to pass that information to the encoder. So encoder can encode a suitable video bitrate. Second, by using a smart rate controller, it aims for an optimal adaptation of the encoder output bitrate to the bitrate determined by congestion control. 
Merging RC operations and network congestion management to provide a reliable infrastructure for HDVC over the Internet represents a unique approach. The primary motivation behind this project is that, by applying videoconference features, which are explained in the rate controller and congestion management chapter, the HDVC application becomes feasible and reliable for business-grade use even over best-effort networks such as the Internet.
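The collaboration this abstract describes, a congestion monitor estimating available bandwidth and a rate controller re-targeting the encoder, can be sketched as a simple control loop. The smoothing factor, headroom, step sizes, and throughput samples below are illustrative choices, not values or code from the thesis.

```python
# Simplified sketch of encoder/congestion-control collaboration: a monitor keeps a
# smoothed estimate of available bandwidth and the rate controller re-targets the
# encoder with some safety headroom, backing off quickly and ramping up slowly.
# All constants here are illustrative, not values from the thesis.
class BandwidthMonitor:
    def __init__(self, alpha=0.3):
        self.alpha = alpha          # EWMA smoothing factor
        self.estimate_kbps = None

    def update(self, measured_kbps):
        if self.estimate_kbps is None:
            self.estimate_kbps = measured_kbps
        else:
            self.estimate_kbps += self.alpha * (measured_kbps - self.estimate_kbps)
        return self.estimate_kbps

class RateController:
    def __init__(self, target_kbps, headroom=0.85, up_step=0.10):
        self.target_kbps = target_kbps
        self.headroom = headroom    # keep the encoder below the estimated bandwidth
        self.up_step = up_step      # cautious ramp-up per update

    def update(self, bandwidth_kbps):
        safe = self.headroom * bandwidth_kbps
        if safe < self.target_kbps:                 # congestion: back off immediately
            self.target_kbps = safe
        else:                                       # spare capacity: ramp up slowly
            self.target_kbps = min(safe, self.target_kbps * (1 + self.up_step))
        return self.target_kbps

monitor, rc = BandwidthMonitor(), RateController(target_kbps=4000)
for measured in [5000, 4800, 2500, 2600, 3500, 5200]:   # per-interval throughput samples
    bw = monitor.update(measured)
    print("bandwidth~%.0f kbps -> encoder target %.0f kbps" % (bw, rc.update(bw)))
```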
9

Rius-Vilarrasa, Elisenda. "Evaluation of a Video Image Analysis system for the prediction of carcass and meat quality in genetic improvement programmes." Thesis, University of Edinburgh, 2009. http://hdl.handle.net/1842/4394.

Abstract:
Video Image Analysis (VIA) is a digital camera based technology that extracts relevant information from images using purpose-tailored image processing software. In the present work, the VSS2000 image analysis system from E+V Technology GmbH has been used in a large lamb abattoir to determine the value of carcasses in an objective, consistent and automated way. In this thesis, results are reported of several experiments conducted within the framework of two UK-funded projects. The aims of the research were (i) the calibration and validation of the VIA technique for the evaluation of lamb carcasses under UK abattoir conditions, with the view to scientifically examine the accuracy and precision of information from the VIA systems as the basis for a value-based marketing system, (ii) to investigate the use of VIA measurements (weights of primal meat yields and carcass dimensional measurements) in sheep breeding programmes to improve carcass and meat quality, and (iii) to evaluate the potential of this technology to reward increased carcass quality associated with the use of breeding strategies based on the inclusion of a quantitative trait locus (QTL) for improved muscularity. The accuracy, precision and consistency of the Meat and Livestock Commission (MLC) carcass classification scheme, currently used in UK abattoirs to evaluate carcass quality, were compared against the VIA system in the prediction of various primal joint weights. The results highlighted the advantage of the VIA system, which was on average 2% more accurate (measured as the coefficient of determination, R2) and 12% more precise (measured as the root mean squared error, RMSE) than the MLC carcass classification scheme in predicting the weight of primal meat yields (leg, chump, loin, breast and shoulder) of the lamb carcasses. The genetic analysis of VIA-based predicted primal joint weights showed substantial additive genetic variance, suggesting that their use in sheep breeding programmes could improve carcass quality either by an improvement of conformation or by an increased weight of the most valuable primal cuts, without an increase in fatness. Favourable associations between VIA primal weights and performance traits indicate that selection based on VIA traits is possible without a negative effect on average daily gain, live weight and cold carcass weight. Although computer tomography (CT) and dissection in related studies found significant effects of a Texel muscling QTL (TM-QTL) for increased muscularity in the loin region, in the present study these effects could not be identified by either the current industry carcass evaluation system for conformation and fatness or the VIA system. A calibration of the VIA system against CT measurements resulted in improved VIA prediction equations for primal meat yields and also showed a moderate potential to estimate loin muscle traits measured by CT and to partially detect the effect of the TM-QTL on these traits. The results of the research demonstrated that VIA is a consistent method to measure carcass composition and that it improved the prediction (accuracy and precision) of primal meat yields compared to the present MLC scoring system. The estimated genetic parameters for VIA primal meat yields suggested that selection for increased lean meat yield from lamb carcasses measured using VIA can contribute to genetic improvement of carcass quality without increasing carcass fatness.
The results suggest that VIA technology installed in abattoirs could provide the means for the development of a value-based marketing system by paying for weights of the most valuable primal cuts measured using VIA.
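For reference, the two figures of merit quoted above, the coefficient of determination (R2, reported as "accuracy") and the root mean squared error (RMSE, reported as "precision"), can be computed as in the short sketch below; the weights in the example are invented.

```python
# Minimal sketch of R2 and RMSE between predicted and measured values.
# The numbers are invented placeholders, not data from the thesis.
import numpy as np

def r_squared(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

actual = np.array([2.10, 2.45, 1.95, 2.60, 2.30])      # e.g. dissected joint weight, kg
predicted = np.array([2.05, 2.50, 2.00, 2.55, 2.20])   # e.g. VIA-predicted weight, kg
print("R2 = %.3f, RMSE = %.3f kg" % (r_squared(actual, predicted), rmse(actual, predicted)))
```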
10

Sun, Jiong. "Football on mobile phones: algorithms, architectures and quality of experience in streaming video." Doctoral thesis, Umeå: Department of Applied Physics and Electronics, Umeå University, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-831.

More sources

Book chapters on the topic "Video quality prediction"

1

Lau, Chun Pong, Xiangliang Zhang, and Basem Shihada. "Video Quality Prediction over Wireless 4G." In Advances in Knowledge Discovery and Data Mining, 414–25. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-37456-2_35.

2

Csizmar Dalal, Amy, Emily Kawaler, and Sam Tucker. "Towards Real-Time Stream Quality Prediction: Predicting Video Stream Quality from Partial Stream Information." In Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, 20–33. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-10625-5_2.

3

Zhu, Kanghua, Yongfang Wang, Jian Wu, Yun Zhu, and Wei Zhang. "Content Oriented Video Quality Prediction for HEVC Encoded Stream." In Communications in Computer and Information Science, 338–48. Singapore: Springer Singapore, 2017. http://dx.doi.org/10.1007/978-981-10-4211-9_33.

4

Wang, Ren-Jie, Yan-Ting Jiang, Jiunn-Tsair Fang, and Pao-Chi Chang. "Quality Estimation for H.264/SVC Inter-layer Residual Prediction in Spatial Scalability." In Advances in Image and Video Technology, 252–61. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-25346-1_23.

5

Moorthy, Anush K., and Alan C. Bovik. "Automatic Prediction of Perceptual Video Quality: Recent Trends and Research Directions." In Signals and Communication Technology, 3–23. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-12802-8_1.

6

Cheung, Hoi-Kok, and Wan-Chi Siu. "Replacing Conventional Motion Estimation with Affine Motion Prediction for High-Quality Video Coding." In The Era of Interactive Media, 145–64. New York, NY: Springer New York, 2012. http://dx.doi.org/10.1007/978-1-4614-3501-3_13.

7

Yonis, K., and R. M. Dansereau. "Video Quality Prediction Using a 3D Dual-Tree Complex Wavelet Structural Similarity Index." In Lecture Notes in Computer Science, 359–67. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-13681-8_42.

8

Zhou, Wei, Zhibo Chen, and Weiping Li. "Stereoscopic Video Quality Prediction Based on End-to-End Dual Stream Deep Neural Networks." In Advances in Multimedia Information Processing – PCM 2018, 482–92. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-00764-5_44.

9

Yu, Yue, Yu Liu, and Yumei Wang. "Quality of Experience Prediction of HTTP Video Streaming in Mobile Network with Random Forest." In Communications and Networking, 82–91. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-06161-6_8.

10

Pitas, Charalampos N., Apostolos G. Fertis, Athanasios D. Panagopoulos, and Philip Constantinou. "Robust Optimization in Non-Linear Regression for Speech and Video Quality Prediction in Mobile Multimedia Networks." In Operations Research Proceedings, 381–86. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-29210-1_61.


Conference papers on the topic "Video quality prediction"

1

Alizadeh, M., and M. Sharifkhani. "Subjective video quality prediction based on objective video quality metrics." In 2018 4th Iranian Conference on Signal Processing and Intelligent Systems (ICSPIS). IEEE, 2018. http://dx.doi.org/10.1109/icspis.2018.8700561.

2

Davis, Andrew G., Damien Bayart, and David S. Hands. "Hybrid no-reference video quality prediction." In 2009 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB). IEEE, 2009. http://dx.doi.org/10.1109/isbmsb.2009.5133783.

3

Tu, Zhengzhong, Chia-Ju Chen, Yilin Wang, Neil Birkbeck, Balu Adsumilli, and Alan C. Bovik. "Efficient User-Generated Video Quality Prediction." In 2021 Picture Coding Symposium (PCS). IEEE, 2021. http://dx.doi.org/10.1109/pcs50896.2021.9477483.

4

Izima, Obinna, Ruairi de Frein, and Mark Davis. "Video Quality Prediction Under Time-Varying Loads." In 2018 IEEE International Conference on Cloud Computing Technology and Science (CloudCom). IEEE, 2018. http://dx.doi.org/10.1109/cloudcom2018.2018.00035.

5

Duanmu, Zhengfang, Abdul Rehman, Kai Zeng, and Zhou Wang. "Quality-of-experience prediction for streaming video." In 2016 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2016. http://dx.doi.org/10.1109/icme.2016.7552859.

6

Wang, Beibei, Dekun Zou, and Ran Ding. "Support Vector Regression Based Video Quality Prediction." In 2011 IEEE International Symposium on Multimedia (ISM). IEEE, 2011. http://dx.doi.org/10.1109/ism.2011.84.

7

Mittal, Anish, Michele A. Saad, and Alan C. Bovik. "Zero shot prediction of video quality using intrinsic video statistics." In IS&T/SPIE Electronic Imaging, edited by Bernice E. Rogowitz, Thrasyvoulos N. Pappas, and Huib de Ridder. SPIE, 2014. http://dx.doi.org/10.1117/12.2036162.

8

Schiffner, Falk, and Sebastian Moller. "Direct Scaling & Quality Prediction for perceptual Video Quality Dimensions." In 2018 Tenth International Conference on Quality of Multimedia Experience (QoMEX). IEEE, 2018. http://dx.doi.org/10.1109/qomex.2018.8463431.

9

Da Silva, Renato, Luiz Brito, Marcelo Albertini, Marcelo Do Nascimento, and André Backes. "Using CNNs for Quality Assessment of No-Reference and Full-Reference Compressed-Video Frames." In Workshop de Visão Computacional. Sociedade Brasileira de Computação - SBC, 2020. http://dx.doi.org/10.5753/wvc.2020.13484.

Abstract:
For videos to be streamed, they have to be coded and sent to users as signals that are decoded back to be reproduced. This coding-decoding process may result in distortion that can introduce differences in the perceived quality of the content, consequently influencing user experience. The approach proposed by Bosse et al. [1] suggests an Image Quality Assessment (IQA) method using an automated process. They use image datasets prelabeled with quality scores to train a Convolutional Neural Network (CNN). Then, based on the CNN models, they are able to predict image quality using both Full-Reference (FR) and No-Reference (NR) evaluation. In this paper, we explore these methods by exposing the CNN quality prediction to images extracted from actual videos. Various compression quality levels and two different video codecs were applied to them. We also evaluated how their models perform when predicting human visual perception of quality in scenarios where there is no human pre-evaluation, observing their behavior alongside metrics such as SSIM and PSNR. We observe that the FR model is better able to infer human perception of quality for compressed videos. In contrast, the NR model does not show the same behaviour for most of the evaluated videos.
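As an illustration of the general no-reference setup this line of work builds on (image patches in, a single quality score out), here is a minimal patch-based CNN regression skeleton, assuming PyTorch. It does not reproduce the architecture or training protocol of Bosse et al. or of this paper; the patches and scores are random placeholders.

```python
# Minimal patch-based CNN quality-regression skeleton in PyTorch. The architecture,
# sizes and data below are illustrative placeholders, not the networks used in the
# paper being summarised.
import torch
import torch.nn as nn

class PatchQualityCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                  # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                          # global pooling
        )
        self.regressor = nn.Sequential(
            nn.Flatten(), nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, patches):                               # patches: (N, 3, 32, 32)
        return self.regressor(self.features(patches)).squeeze(1)

model = PatchQualityCNN()
patches = torch.rand(8, 3, 32, 32)          # 8 random 32x32 RGB patches (placeholder data)
mos = torch.rand(8) * 4 + 1                 # fake subjective scores in [1, 5]

optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = nn.functional.mse_loss(model(patches), mos)
loss.backward()
optimiser.step()
print("toy training loss:", float(loss))
```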
10

Keimel, Christian, Tobias Oelbaum, and Klaus Diepold. "Improving the prediction accuracy of video quality metrics." In 2010 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2010. http://dx.doi.org/10.1109/icassp.2010.5496299.
