Journal articles on the topic 'Video compression algorithms'

Consult the top 50 journal articles for your research on the topic 'Video compression algorithms.'

1

P, Srividya. "Optimization of Lossless Compression Algorithms using Multithreading." Journal of Information Technology and Sciences 9, no. 1 (2023): 36–42. http://dx.doi.org/10.46610/joits.2022.v09i01.005.

Full text
Abstract:
Compression is the process of reducing the number of bits required to represent data. Its advantages include shorter data-transfer times and lower costs for storage space and network bandwidth. Compression algorithms fall into two types, lossy and lossless: lossy algorithms are typically used for audio and video signals, whereas lossless algorithms are used for text. The advent of the internet and its worldwide usage has increased not only the use but also the storage of text, audio, and video files, and these multimedia files demand more storage space than traditional files, giving rise to the need for efficient compression algorithms. Multi-core processors have considerably improved machine computing performance, yet compression algorithms typically do not exploit this multi-core architecture. This paper presents multithreaded implementations of three lossless compression algorithms: the Lempel-Ziv-Markov algorithm, BZip2, and ZLIB. The results show that the ZLIB algorithm is the most efficient in terms of the time taken to compress and decompress text, and the comparison covers compression both with and without multithreading.
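
For readers who want to reproduce this kind of comparison, here is a minimal sketch (not the paper's code) that chunks an input file and times Python's built-in zlib, bz2, and lzma codecs with and without a thread pool; the 1 MiB chunk size and the corpus file name are illustrative assumptions.

```python
# Benchmark sketch: chunked compression, single-threaded vs. thread pool.
import bz2
import lzma
import time
import zlib
from concurrent.futures import ThreadPoolExecutor

CODECS = {"zlib": zlib.compress, "bz2": bz2.compress, "lzma": lzma.compress}
CHUNK = 1 << 20  # 1 MiB chunks; a tuning assumption, not from the paper

def compress_chunks(data: bytes, compress, workers: int) -> list[bytes]:
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    if workers == 1:
        return [compress(c) for c in chunks]
    # CPython's zlib/bz2/lzma release the GIL while compressing large
    # buffers, so threads can overlap work on a multi-core CPU.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(compress, chunks))

def benchmark(data: bytes, workers: int) -> None:
    for name, fn in CODECS.items():
        t0 = time.perf_counter()
        out = compress_chunks(data, fn, workers)
        dt = time.perf_counter() - t0
        size = sum(len(c) for c in out)
        print(f"{name:5s} workers={workers} {dt:.3f}s ratio={len(data)/size:.2f}")

if __name__ == "__main__":
    text = open("corpus.txt", "rb").read()  # hypothetical input file
    benchmark(text, workers=1)
    benchmark(text, workers=4)
```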
2

P, Srividya. "Optimization of Lossless Compression Algorithms using Multithreading." Journal of Information Technology and Sciences 9, no. 1 (2023): 36–42. http://dx.doi.org/10.46610/joits.2023.v09i01.005.

Full text
Abstract:
Compression is the process of reducing the number of bits required to represent data. Its advantages include shorter data-transfer times and lower costs for storage space and network bandwidth. Compression algorithms fall into two types, lossy and lossless: lossy algorithms are typically used for audio and video signals, whereas lossless algorithms are used for text. The advent of the internet and its worldwide usage has increased not only the use but also the storage of text, audio, and video files, and these multimedia files demand more storage space than traditional files, giving rise to the need for efficient compression algorithms. Multi-core processors have considerably improved machine computing performance, yet compression algorithms typically do not exploit this multi-core architecture. This paper presents multithreaded implementations of three lossless compression algorithms: the Lempel-Ziv-Markov algorithm, BZip2, and ZLIB. The results show that the ZLIB algorithm is the most efficient in terms of the time taken to compress and decompress text, and the comparison covers compression both with and without multithreading.
3

Rajasekhar, H., and B. Prabhakara Rao. "An Efficient Video Compression Technique Using Watershed Algorithm and JPEG-LS Encoding." Journal of Computational and Theoretical Nanoscience 13, no. 10 (2016): 6671–79. http://dx.doi.org/10.1166/jctn.2016.5613.

Full text
Abstract:
In a previous video compression method, videos were segmented using a novel motion estimation algorithm aided by the watershed method, but the compression ratio (CR) achieved with that algorithm was inadequate, and its performance in the encoding and decoding stages needed improvement. Most video compression methods rely on encoding techniques such as JPEG, run-length coding, Huffman coding, and LSK encoding, so improving the encoding stage improves the overall compression result. To overcome these drawbacks, we propose a new video compression method with a well-established encoding technique. In the proposed method, the motion vectors of the input video frames are estimated using the watershed and ARS-ST (Adaptive Rood Search with Spatio-Temporal) algorithms. The vector blocks with high difference values are then encoded with the JPEG-LS encoder. JPEG-LS offers excellent coding and computational efficiency and outperforms JPEG2000 and many other image compression methods; it has relatively low complexity and storage requirements, and its compression capability is sufficient. To obtain the compressed video, the encoded blocks are subsequently decoded by JPEG-LS. Implementation results show the effectiveness of the proposed method in compressing a large number of videos. Its performance is evaluated against existing video compression techniques, and the comparison shows that the proposed method achieves higher compression ratios and PSNR on the test videos than the existing techniques.
4

Fitrya, Soraya Ainun. "Perbandingan Algoritma Elias Omega Code Dan Elias Delta Code Dalam Mengkompresi File Video (Mp4)." Bulletin of Information System Research 1, no. 3 (2023): 110–19. https://doi.org/10.62866/bios.v1i3.29.

Full text
Abstract:
Video files are quite large: the better the quality and the longer the duration, the more storage the file requires, and file size strongly affects how long a video takes to transmit. The solution to this problem is to compress the video file. Many algorithms exist for video file compression, for example the Elias omega code, the Elias delta code, the stout code, the punctured Elias code, and many others. With so many algorithms available, it is necessary to test and compare several of them; the comparison aims to determine which algorithm is more accurate in carrying out the data compression process. The algorithms compared here are the Elias omega code and the Elias delta code, and the comparison parameters are the compression ratio and space saving.
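
As a concrete illustration of the codes and the two comparison parameters named in this abstract, here is a minimal sketch: Elias gamma and delta encoders plus the ratio and space-saving metrics. Elias omega (recursive) is omitted for brevity, and the toy data and 8-bit-symbol assumption are the editor's, not the paper's.

```python
# Elias coding sketch with the two comparison metrics from the abstract.

def elias_gamma(n: int) -> str:
    if n < 1:
        raise ValueError("Elias codes are defined for integers >= 1")
    b = bin(n)[2:]                 # binary representation of n
    return "0" * (len(b) - 1) + b  # (length-1) zeros, then the bits

def elias_delta(n: int) -> str:
    b = bin(n)[2:]
    # gamma-code the bit length, then append the bits after the leading 1
    return elias_gamma(len(b)) + b[1:]

def compression_ratio(original_bits: int, compressed_bits: int) -> float:
    return original_bits / compressed_bits

def space_saving(original_bits: int, compressed_bits: int) -> float:
    return 1.0 - compressed_bits / original_bits

if __name__ == "__main__":
    symbols = [9, 1, 4, 4, 130]        # toy data
    coded = "".join(elias_delta(s) for s in symbols)
    raw_bits = len(symbols) * 8        # assume 8-bit symbols originally
    print(coded)
    print(f"ratio={compression_ratio(raw_bits, len(coded)):.2f} "
          f"saving={space_saving(raw_bits, len(coded)):.1%}")
```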
5

Kadhim, Amal Abbas, Azal Minshed Abid, and Zuhair Hussein Ali. "Subject Review: Video Compression Algorithms." International Journal of Engineering Research and Advanced Technology 06, no. 11 (2020): 21–25. http://dx.doi.org/10.31695/ijerat.2020.3668.

Full text
6

Zhu, Ye. "An investigation of machine learning-based video compression techniques." Applied and Computational Engineering 47, no. 1 (2024): 23–27. http://dx.doi.org/10.54254/2755-2721/47/20241113.

Full text
Abstract:
As video technology continues to weave itself seamlessly into the fabric of daily life, there is a growing need for enhanced storage and efficient video transmission, and this surge in demand has raised expectations and standards for video compression technology. Machine learning, as an up-and-coming technology, can bring its advantages to the field of video compression. This article reviews the current state of research on combining video compression techniques with machine learning. It provides an overview of various research avenues for enhancement, spanning from conventional video compression algorithms to the fusion of traditional compression frameworks with machine learning methods, and on to the development of novel end-to-end compression algorithms. In addition, the article explores possible application scenarios for machine learning-based video compression algorithms, given the characteristics of such non-standard and computationally demanding algorithms. Finally, the article speculates on the future of video compression algorithms based on the studies reviewed.
7

Mochurad, Lesia. "A Comparison of Machine Learning-Based and Conventional Technologies for Video Compression." Technologies 12, no. 4 (2024): 52. http://dx.doi.org/10.3390/technologies12040052.

Full text
Abstract:
The growing demand for high-quality video transmission over bandwidth-constrained networks and the increasing availability of video content have led to the need for efficient storage and distribution of large video files. To that end, this article compares six video compression methods without loss of quality: H.265, VP9, AV1, a convolutional neural network (CNN), a recurrent neural network (RNN), and a deep autoencoder (DAE). A dataset of high-quality videos is used to implement and compare the performance of classical compression algorithms and algorithms based on machine learning. Compression efficiency and the quality of the reconstructed images were evaluated using two metrics, PSNR and SSIM. The comparison revealed the strengths and weaknesses of each approach and provided insights into how machine learning algorithms can be optimized in future research, contributing to the development of more efficient and effective video compression algorithms useful for a wide range of applications.
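
The two quality metrics used in this comparison are standard; a minimal sketch follows. PSNR is computed directly with numpy, and SSIM is delegated to scikit-image, which is assumed to be installed.

```python
# Frame-quality metrics sketch: PSNR in dB, SSIM via scikit-image.
import numpy as np
from skimage.metrics import structural_similarity as ssim

def psnr(reference: np.ndarray, test: np.ndarray, peak: float = 255.0) -> float:
    """PSNR = 10*log10(peak^2 / MSE) between two same-shaped frames."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)

# Usage on two decoded frames (H x W x 3 uint8 arrays):
# print(psnr(orig, decoded), ssim(orig, decoded, channel_axis=2))
```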
8

Megala, G., et al. "State-of-the-Art in Video Processing: Compression, Optimization and Retrieval." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 5 (2021): 1256–72. http://dx.doi.org/10.17762/turcomat.v12i5.1793.

Full text
Abstract:
Video compression plays a vital role in modern social media networking, with its plethora of multimedia applications. It enables transmission media to transfer videos competently and resources to store them efficiently. Nowadays, high-resolution video data are transferred through high-bit-rate communication channels in order to send multiple compressed videos. There have been many advances in transmission capability and in efficient storage of compressed video, where compression is the primary task involved in multimedia services. This paper summarizes the compression standards and describes the main concepts involved in video coding. Video compression converts a large raw video bit sequence into a compact one, achieving a high compression ratio with good perceptual video quality; removing redundant information is the main task in video sequence compression. A survey of various block matching algorithms, quantization, and entropy coding is presented. It is found that many of the methods have computational complexities that need improvement through optimization.
9

Pandit, Shraddha, Piyush Kumar Shukla, Akhilesh Tiwari, Prashant Kumar Shukla, Manish Maheshwari, and Rachana Dubey. "Review of video compression techniques based on fractal transform function and swarm intelligence." International Journal of Modern Physics B 34, no. 08 (2020): 2050061. http://dx.doi.org/10.1142/s0217979220500617.

Full text
Abstract:
Data processing across multiple domains is an important concept on any platform; it deals with multimedia as well as textual information. Whereas textual data processing focuses on structured or unstructured data that can be computed quickly without compression, multimedia data processing requires algorithms in which compression is essential: videos and their frames must be processed and compressed into compact forms so that both storage and access can be performed quickly. There are different ways of performing compression, such as fractal compression, wavelet transform, compressive sensing, and contractive transformation; one approach works with the high-frequency components of multimedia data. One of the most recent topics is fractal transformation, which exploits block symmetry and achieves a high compression ratio. Yet there are limitations in speed and cost when performing proper encoding and decoding with fractal compression; swarm optimization and related algorithms make it usable alongside the fractal compression function. In this paper, we review multiple algorithms in the field of fractal-based video compression and swarm intelligence for optimization problems.
10

Prajapati, Y. N., and M. K. Srivastava. "Novel algorithms for protective digital privacy." IAES International Journal of Robotics and Automation (IJRA) 8, no. 3 (2019): 184–88. https://doi.org/10.11591/ijra.v8i3.pp184-188.

Full text
Abstract:
Video is the recording, reproducing, or broadcasting of moving visual images: a visual multimedia source that combines a sequence of images to form a moving picture, transmits a signal to a screen, and determines the order in which the captures are shown, usually with audio components that correspond to the pictures on screen. Video compression technologies reduce and remove redundant video data so that a digital video file can be sent effectively over a network and stored on computer disks. With efficient compression techniques, a significant reduction in file size can be achieved with little or no adverse effect on visual quality, though quality can suffer if the file size is lowered further by raising the compression level of a given technique. Security is about the protection of assets; in information technology (IT), it is the defense of digital information and IT assets against internal and external, malicious and accidental threats, including detection, prevention, and response to threats through security policies, software tools, and IT services. Security refers to protective digital privacy measures applied to prevent unauthorized access to computers, databases, and websites. Cryptography is closely related to the disciplines of cryptology and cryptanalysis and includes techniques such as microdots, merging words with images, and other ways to hide information in storage or transit; in today's computer-centric world, however, it is most often associated with scrambling plaintext (ordinary text, sometimes referred to as clear text) into ciphertext (a process called encryption) and back again (decryption). Cryptography protects users by providing functionality for encrypting data and authenticating other users. Compression is the process of reducing the number of bits or bytes needed to represent a given set of data, allowing more data to be stored. This project aims to implement a security algorithm for data security: the data are first encrypted and then compressed, and if encryption and compression are done at the same time, the process takes less time and achieves greater speed.
11

Bolbakov, R. G., V. A. Mordvinov, and A. D. Makarevich. "Comparative analysis of compression algorithms for four-dimensional light fields." Russian Technological Journal 10, no. 4 (2022): 7–17. http://dx.doi.org/10.32362/2500-316x-2022-10-4-7-17.

Full text
Abstract:
Objectives. The widespread use of systems for capturing light fields is due to the high quality of the reproduced image. This type of capture, although qualitatively superior to traditional methods of capturing volumetric images, generates a huge amount of data needed to reconstruct the original captured 4D light field. The purpose of this work is to consider traditional image compression algorithms and those extended to four dimensions, perform a comparative analysis, and determine the most suitable. Methods. Mathematical methods of signal processing and methods of statistical analysis are used. Results. Algorithms are compared and analyzed for the compression of four-dimensional light fields using the PSNR metric. The chosen evaluation criterion is affected not only by the dimension of the compression algorithm but also by the baseline distance of the capture setup, since the difference between images increases with the distance between the optical centers of the camera matrices. Thus, for installations consisting of an array of machine-vision cameras mounted on racks in a room, the obvious choice is conventional image compression methods. Among the video compression methods assessed, the XVC algorithm remains undervalued even though its results are higher, with AV1 next in importance. The latest compression algorithms show higher performance than their predecessors, and with a small distance between the optical centers of the captured images, video compression algorithms are preferable to image compression algorithms, since they show better results in both the three-dimensional and four-dimensional variants. Conclusions. The comparison shows the need to use algorithms from the video compression family (XVC, AV1) on installations with a long baseline (mounted on camera stands). When working with integrated light-field cameras (Lytro) and capture setups with a short baseline, image compression algorithms (JPEG) are recommended. In general, video compression algorithms are recommended, in particular XVC, since on average it shows an acceptable PSNR level for both short and long installation baselines.
12

Gill, Harsimranjit Singh, Tarandip Singh, Baldeep Kaur, Gurjot Singh Gaba, Mehedi Masud, and Mohammed Baz. "A Metaheuristic Approach to Secure Multimedia Big Data for IoT-Based Smart City Applications." Wireless Communications and Mobile Computing 2021 (October 4, 2021): 1–10. http://dx.doi.org/10.1155/2021/7147940.

Full text
Abstract:
Media streaming falls into the category of Big Data: regardless of duration, an enormous amount of information is encoded in videos according to standardized algorithms. In video transmission, the intended recipient receives a copy of the broadcast video, but an adversary also has access to it, which poses a serious concern for data confidentiality and availability. In this paper, a cryptographic algorithm, the Advanced Encryption Standard (AES), is used to conceal the information from malicious intruders; to use fewer system resources, the video information is compressed before encryption. Various compression algorithms, such as the Discrete Cosine Transform, Integer Wavelet Transform, and Huffman coding, are employed to reduce the enormous size of videos. MPEG (Moving Picture Experts Group) is a standard employed in video broadcasting that comprises different frame types, namely I, B, and P frames; the latter two frame types carry information similar to the foremost type. Even the I frame is processed and compressed with the abovementioned schemes to discard redundant information. However, since the I frame holds the bulk of the new information, encrypting this frame alone is sufficient to safeguard the whole video, although introducing the various compression algorithms can further increase the encryption time of one frame. Performance parameters such as PSNR and compression ratio are examined to analyze the proposed model's effectiveness. The presented approach outperforms other schemes when the speed of encryption and data processing are taken into consideration, and after reversing the complete system, we observed no major impact on the quality of the deciphered video. Simulation results confirm that the presented architecture is an efficient method for enciphering video information.
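
The compress-then-encrypt order this abstract argues for can be sketched in a few lines. The abstract does not specify the AES configuration, so AES-CTR from the `cryptography` package is used here as a stand-in, and zlib stands in for the paper's DCT/IWT/Huffman pipeline; treat this as an editor's illustration, not the paper's implementation.

```python
# Compress-then-encrypt sketch for a single (I-)frame payload.
import os
import zlib
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def protect_iframe(iframe: bytes, key: bytes) -> tuple[bytes, bytes]:
    """key must be 16/24/32 bytes; returns (nonce, ciphertext)."""
    compressed = zlib.compress(iframe)  # stand-in for DCT/IWT + Huffman
    nonce = os.urandom(16)
    enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    return nonce, enc.update(compressed) + enc.finalize()

def recover_iframe(nonce: bytes, blob: bytes, key: bytes) -> bytes:
    dec = Cipher(algorithms.AES(key), modes.CTR(nonce)).decryptor()
    return zlib.decompress(dec.update(blob) + dec.finalize())
```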
13

Hrytsko, T. L., D. Lenskiy, and V. S. Hlukhov. "REVIEW OF THE CAPABILITIES OF THE JPEG-LS ALGORITHM FOR ITS USE WITH EARTH SURFACE SCANNERS." Computer systems and network 6, no. 2 (2024): 14–24. https://doi.org/10.23939/csn2024.02.014.

Full text
Abstract:
The article explores the possibilities of implementing the JPEG-LS image compression algorithm on Field Programmable Gate Arrays (FPGA) for processing monochrome video streams from Earth surface scanners. A comparison of software implementations of the algorithms, their compression ratio, and execution time is conducted. Methods for improving FPGA performance are considered, using parallel data processing and optimized data structures to accelerate compression and decompression processes. Test results of the software implementation of the algorithm show an average processing speed of 179.2 Mbit/s during compression and 169.6 Mbit/s during decompression. A compression ratio from 1.2 to 7.4 can be achieved depending on the complexity of the image. Key words: FPGA, JPEG-LS, Field-programmable gate arrays, Image compression, Image processing, Video compression, Video stream processing.
14

Hrytsko, T. L., D. Lenskiy, and V. S. Hlukhov. "REVIEW OF THE CAPABILITIES OF THE JPEG-LS ALGORITHM FOR ITS USE WITH EARTH SURFACE SCANNERS." Computer systems and network 6, no. 2 (2024): 15–25. https://doi.org/10.23939/csn2024.02.015.

Full text
Abstract:
The article explores the possibilities of implementing the JPEG-LS image compression algorithm on Field Programmable Gate Arrays (FPGA) for processing monochrome video streams from Earth surface scanners. A comparison of software implementations of the algorithms, their compression ratio, and execution time is conducted. Methods for improving FPGA performance are considered, using parallel data processing and optimized data structures to accelerate compression and decompression processes. Test results of the software implementation of the algorithm show an average processing speed of 179.2 Mbit/s during compression and 169.6 Mbit/s during decompression. A compression ratio from 1.2 to 7.4 can be achieved depending on the complexity of the image. Key words: FPGA, JPEG-LS, Field-programmable gate arrays, Image compression, Image processing, Video compression, Video stream processing.
15

Mantri, Arjun, Satish Kathiriya, and Purshotam S. Yadav. "Optimizing Video Encoding and Streaming Quality on Social Media Platforms." Journal of Scientific and Engineering Research 9, no. 12 (2022): 177–81. https://doi.org/10.5281/zenodo.13348093.

Full text
Abstract:
In the era of digital communication, social media platforms have become essential for sharing video content. To meet user expectations of high-quality video without buffering or lag, these platforms employ sophisticated video encoding and streaming techniques. This paper examines methods for optimizing video encoding and streaming quality, focusing on adaptive bitrate streaming (ABR) and advanced video compression algorithms. ABR adjusts video quality in real time based on network conditions, while compression algorithms like H.264, H.265 (HEVC), and VP9 reduce file sizes without compromising quality. These technologies ensure a seamless viewing experience across various devices and network environments. By dynamically adapting to changing conditions and efficiently compressing video data, social media platforms can enhance user satisfaction and operational efficiency. Future directions include addressing latency issues, ensuring compatibility, and integrating artificial intelligence to further optimize video streaming.
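
The ABR decision this abstract describes reduces, at its core, to picking a rung of a bitrate ladder that fits the measured throughput. A toy sketch follows; the ladder values and safety margin are illustrative assumptions, not platform data.

```python
# Toy ABR rendition selector: highest ladder rung under a throughput budget.
LADDER_KBPS = [235, 750, 1750, 4300, 8000]  # hypothetical renditions

def choose_rendition(throughput_kbps: float, margin: float = 0.8) -> int:
    budget = throughput_kbps * margin        # keep headroom for jitter
    fitting = [b for b in LADDER_KBPS if b <= budget]
    return fitting[-1] if fitting else LADDER_KBPS[0]

# 5000 kbps measured * 0.8 margin = 4000 budget -> 1750 fits, 4300 does not
assert choose_rendition(5000) == 1750
```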
16

Hu, Yu. "Some Technologies about Video Compression." Advanced Materials Research 393-395 (November 2011): 284–87. http://dx.doi.org/10.4028/www.scientific.net/amr.393-395.284.

Full text
Abstract:
Many strategies could be developed into mature algorithms that compress video more efficiently than today's standardized codecs. Future video compression algorithms may employ more adaptivity and more refined temporal and spatial prediction models with better distortion metrics. The cost to users is a significant increase in implementation complexity at both the encoder and the decoder. Fortunately, bitrates appear to double more slowly than computing power, so the disadvantage of increasing implementation complexity may one day be balanced by much-improved processor capabilities. The following paper analyzes development trends and perspectives of video compression, highlighting problems and research directions.
17

Prajapati, Y. N., and M. K. Srivastava. "Novel algorithms for protective digital privacy." IAES International Journal of Robotics and Automation (IJRA) 8, no. 3 (2019): 184. http://dx.doi.org/10.11591/ijra.v8i3.pp184-188.

Full text
Abstract:
Video is the recording, reproducing, or broadcasting of moving visual images: a visual multimedia source that combines a sequence of images to form a moving picture, transmits a signal to a screen, and determines the order in which the captures are shown, usually with audio components that correspond to the pictures on screen. Video compression technologies reduce and remove redundant video data so that a digital video file can be sent effectively over a network and stored on computer disks. With efficient compression techniques, a significant reduction in file size can be achieved with little or no adverse effect on visual quality, though quality can suffer if the file size is lowered further by raising the compression level of a given technique. Security is about the protection of assets; in information technology (IT), it is the defense of digital information and IT assets against internal and external, malicious and accidental threats, including detection, prevention, and response to threats through security policies, software tools, and IT services. Security refers to protective digital privacy measures applied to prevent unauthorized access to computers, databases, and websites. Cryptography is closely related to the disciplines of cryptology and cryptanalysis and includes techniques such as microdots, merging words with images, and other ways to hide information in storage or transit; in today's computer-centric world, however, it is most often associated with scrambling plaintext (ordinary text, sometimes referred to as clear text) into ciphertext (a process called encryption) and back again (decryption). Cryptography protects users by providing functionality for encrypting data and authenticating other users. Compression is the process of reducing the number of bits or bytes needed to represent a given set of data, allowing more data to be stored. This project aims to implement a security algorithm for data security: the data are first encrypted and then compressed, and if encryption and compression are done at the same time, the process takes less time and achieves greater speed.
18

Tuithung, T., S. K. Ghosh, and Jayanta Mukherjee. "Motion Compensated JPEG2000 based video compression algorithms." International Journal of Signal and Imaging Systems Engineering 1, no. 3/4 (2008): 197. http://dx.doi.org/10.1504/ijsise.2008.026791.

Full text
19

Olstad, Bjoern. "Adaptive temporal decimation for video compression algorithms." Journal of Electronic Imaging 2, no. 1 (1993): 5. http://dx.doi.org/10.1117/12.130194.

Full text
20

Sikora, T., and E. Viscito. "Compression algorithms for software coding of video." Signal Processing: Image Communication 8, no. 1 (1996): 1–2. http://dx.doi.org/10.1016/0923-5965(96)82990-0.

Full text
21

Deepti, K. "Video Compression with Diverse Contexts Using JPEG." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 09, no. 04 (2025): 1–9. https://doi.org/10.55041/ijsrem45542.

Full text
Abstract:
In the rapidly evolving digital landscape, efficient video compression is paramount for enhancing storage capacity and streaming performance without sacrificing quality. This project introduces an approach to video compression that applies the well-established JPEG compression algorithm, through a streamlined Python implementation, to compress video while maintaining visual fidelity. The standout feature of the Video Compression with Diverse Contexts (VC-DC) technique is its ability to significantly improve bitrate savings, from 23.3% to 49.03%. This not only optimizes storage and transmission efficiency but also outperforms traditional video codecs, including advanced neural-network-based methods such as SOTA-HEM and DCVC-DC; despite their complexity, these state-of-the-art techniques fall short of the simplicity and effectiveness of VC-DC. The project underscores the potential of combining classical compression algorithms with modern implementation strategies to push the boundaries of video compression, offering a compelling alternative to more resource-intensive solutions. Index Terms: Video Compression, Diverse Contexts, Bitrate Saving, Neural Networks
22

González Fernández, Edgar, Ana Lucila Sandoval Orozco, and Luis Javier García Villalba. "Digital Video Manipulation Detection Technique Based on Compression Algorithms." IEEE Transactions on Intelligent Transportation Systems 23, no. 3 (2021): 2596–605. https://doi.org/10.1109/TITS.2021.3132227.

Full text
Abstract:
Digital images and videos play a very important role in everyday life. Nowadays, people have access to affordable mobile devices equipped with advanced integrated cameras and powerful image processing applications. Technological development facilitates not only the generation of multimedia content but also its intentional modification, for either recreational or malicious purposes. This is where forensic techniques to detect manipulation of images and videos become essential. This paper proposes a forensic technique based on analysing the compression algorithms used by H.264 coding. Recompression is detected using macroblock information, a characteristic of the H.264/MPEG-4 standard, together with motion vectors. A Support Vector Machine is used to build a model that accurately detects whether a video has been recompressed.
23

Kwan, Chiman, Jude Larkin, Bence Budavari, Eric Shang, and Trac D. Tran. "Perceptually Lossless Compression with Error Concealment for Periscope and Sonar Videos." Signal & Image Processing: An International Journal (SIPIJ) 10, February (2019): 1–14. https://doi.org/10.5281/zenodo.3187693.

Full text
Abstract:
We present a video compression framework with two key features. First, we aim to achieve perceptually lossless compression for low-frame-rate videos (6 fps); four well-known video codecs from the literature were evaluated, and their performance was assessed using four well-known performance metrics. Second, we investigated the impact of error concealment algorithms for handling pixels corrupted by transmission errors in communication channels. Extensive experiments using actual videos demonstrate the proposed framework.
24

Smirnov, Kirill, and Anastasia Mozhaeva. "ANALYSIS OF VIDEO DATA COMPRESSION ALGORITHMS AND OPTIMIZED FOR USE IN REMOTELY CONTROLLED DRONES." SYNCHROINFO JOURNAL 9, no. 2 (2023): 9–16. http://dx.doi.org/10.36724/2664-066x-2023-9-2-9-16.

Full text
Abstract:
Due to the widespread use of unmanned aerial vehicles (UAVs) in the civil sphere, there is a need to improve video image quality and transmission technologies and to change the coding of video streams so as to reduce their size and improve quality. Video streaming technologies are moving toward increasingly complex transmission systems, which helps improve the quality of the received video data. The higher the quality of the transmitted video signal, the wider the frequency band required; this fact motivates effective video compression algorithms that combine high image quality with a narrow bandwidth. This paper studies currently existing coding algorithms together with their efficiency and computational complexity, considering only algorithms whose effectiveness has been proven by application in modern practice. Based on this research, conclusions are drawn regarding the feasibility of using each encoding algorithm for transmitting video data in real time.
25

Basha, Sardar N., and A. Rajesh. "Scalable Video Coding Using Accordion Discrete Wavelet Transform and Tucker Decomposition for Multimedia Applications." Journal of Computational and Theoretical Nanoscience 16, no. 2 (2019): 601–8. http://dx.doi.org/10.1166/jctn.2019.7777.

Full text
Abstract:
The digital world demands the transmission and storage of high-quality video for streaming and broadcasting applications, but network bandwidth and device memory constrain the various multimedia and scientific applications, and video contains spatial and temporal redundancies. The objective of any video compression algorithm is to eliminate redundant information from the video signal during compression for effective transmission and storage; the correlation between successive frames has not been exploited enough by current compression algorithms. In this paper, a novel method for video compression is presented. The proposed model applies the transformation to sets of groups of pictures (GOPs). High spatial correlation is obtained from the spatial and temporal redundancy of the GOP through the Accordion representation, which bypasses the computationally demanding motion compensation step. The core idea is to apply Tucker Decomposition (TD) to the Discrete Wavelet Transform (DWT) coefficients of the Accordion model of the GOP: DWT separates the video into different sub-images, and TD efficiently compacts the energy of the sub-images. Blocking artifacts are considerably reduced because the block size is large. The proposed method reduces the spatial and temporal redundancies of the video signal to improve the compression ratio, computation time, and PSNR. Experimental results show that the proposed method is efficient, especially at high bit rates and with slow-motion videos.
26

Fahmi, S. Sh, A. G. Davidchuk, and E. V. Kostikova. "New Lossless Compression Algorithms for Transport Images." INFORMACIONNYE TEHNOLOGII 27, no. 6 (2021): 299–305. http://dx.doi.org/10.17587/it.27.299-305.

Full text
Abstract:
The article considers the relevance of developing lossless image compression and transmission algorithms and their application to transport video surveillance systems. A brief overview of lossless transport image compression methods is provided. We propose a method for compressing transport scenes based on the pyramid-recursive method of splitting the source image into polygons of various shapes and sizes, and we consider two fundamentally different algorithms implementing it: one with a transition to the spectral domain and one without, both ensuring lossless compression. The results of testing various well-known lossless compression algorithms (run-length, Huffman, and arithmetic coding) are analyzed and compared with the proposed algorithms. The proposed algorithms are shown to be 2-3 times more efficient in compression ratio than the known ones, while their computational complexity increases by roughly 3-4 times.
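
Of the baseline lossless coders this article benchmarks against, run-length encoding is the simplest; a minimal sketch (editor's illustration, not the article's code) makes the idea concrete.

```python
# Minimal run-length coder: collapse each run of equal bytes to (byte, count).
from itertools import groupby

def rle_encode(data: bytes) -> list[tuple[int, int]]:
    return [(byte, len(list(run))) for byte, run in groupby(data)]

def rle_decode(pairs: list[tuple[int, int]]) -> bytes:
    return b"".join(bytes([byte]) * count for byte, count in pairs)

row = bytes([255] * 12 + [0] * 4)  # a flat scanline compresses well
assert rle_encode(row) == [(255, 12), (0, 4)]
assert rle_decode(rle_encode(row)) == row
```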
27

Nithya, P., T. Vengattaraman, and M. Sathya. "Survey On Parameters of Data Compression." REST Journal on Data Analytics and Artificial Intelligence 2, no. 1 (2023): 1–7. http://dx.doi.org/10.46632/jdaai/2/1/1.

Full text
Abstract:
Rapid development in hardware and software gives rise to data growth, which has numerous impacts, including the need for larger storage capacity for storing and transmitting data. Data compression is needed in today's world because it helps minimize the amount of storage space required to store and transmit data. Performance measures in data compression are used to evaluate the efficiency and effectiveness of compression algorithms, and in recent times numerous algorithms have been developed to reduce data storage and increase transmission speed in this internet era. To analyze how data compression performance is measured for text, image, audio, and video compression, this survey discusses the important data compression parameters for each data type.
28

SANKARAGOMATHI, B., L. GANESAN, and S. ARUMUGAM. "ENCODING VIDEO SEQUENCES IN FRACTAL-BASED COMPRESSION." Fractals 15, no. 04 (2007): 365–78. http://dx.doi.org/10.1142/s0218348x0700371x.

Full text
Abstract:
With the rapid increase in the use of computers and the Internet, the demand for higher transmission and better storage is increasing as well. This paper describes the different techniques for data (image-video) compression in general and, in particular, the compression technique called fractal image compression. Fractal image compression is based on self-similarity, where one part of an image is similar to another part of the same image. Low-bit-rate color image sequence coding is very important for video transmission and storage applications. The most significant aspect of this work is the fractal-based compression of color images, since little work has been done previously in this area. The results obtained show that fractal-based compression works as well for color images as for gray-scale images; nevertheless, encoding color images takes more time than encoding gray-scale images. Color images are usually compressed in a luminance-chrominance coordinate space, with the compression performed independently for each coordinate by applying monochrome image processing techniques. For image sequence compression, the design of an accurate and efficient algorithm for computing motion to exploit temporal redundancy has been one of the most active research areas in computer vision and image compression. Pixel-based motion estimation algorithms address pixel correspondence directly by identifying a set of local features and computing matches between these features across frames; these direct techniques share the common pitfall of high computational complexity resulting from the dense vector fields they produce. For block matching motion estimation algorithms, the quad-tree data structure is frequently used in image coding to recursively decompose an image plane into four non-overlapping rectangular blocks.
29

Goncalves, Paulo, Candido Moraes, Marcelo Porto, and Guilherme Correa. "Complexity-Aware TZS Algorithm for Mobile Video Encoders." Journal of Integrated Circuits and Systems 14, no. 3 (2019): 1–9. http://dx.doi.org/10.29292/jics.v14i3.60.

Full text
Abstract:
Video applications have grown significantly in recent years, especially in embedded/mobile systems. Modern video compression algorithms and standards, such as High Efficiency Video Coding (HEVC), achieve high compression efficiency, but this efficiency comes at the cost of increased encoding complexity, a serious problem for mobile systems with restricted processing power and energy budgets. This paper presents an enhanced Test Zone Search (TZS) algorithm aimed at reducing the complexity of the Motion Estimation (ME) process in the HEVC standard and at enabling efficient hardware design for mobile encoders. The proposed algorithm combines two strategies: an early termination scheme for TZS, called e-TZS, and the Octagonal-Axis Raster Search Pattern (OARP). When combined and implemented in the HEVC reference encoder, the strategies allowed an average complexity reduction of 75.16% in TZS, with a negligible BD-rate increase of only 0.1242% compared to the original algorithm. The approach also reduces block matching operations by 80% on average, allowing hardware simplification and reduced memory access.
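
The block-matching core that TZS accelerates can be sketched as a full-search SAD scan over a search window. The sketch below (editor's illustration) reduces e-TZS-style early termination to a simple cost threshold; the paper's actual heuristics and the OARP pattern are more elaborate, and the block size, range, and threshold here are assumptions.

```python
# Full-search SAD block matching with a naive early-termination stand-in.
import numpy as np

def best_match(ref, cur, bx, by, B=16, R=8, early_stop=64):
    """Find the motion vector for the BxB block of `cur` at (bx, by)."""
    block = cur[by:by + B, bx:bx + B].astype(np.int32)
    best = (0, 0, np.inf)                      # (dx, dy, SAD)
    for dy in range(-R, R + 1):
        for dx in range(-R, R + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + B > ref.shape[0] or x + B > ref.shape[1]:
                continue                       # candidate outside the frame
            sad = np.abs(ref[y:y + B, x:x + B].astype(np.int32) - block).sum()
            if sad < best[2]:
                best = (dx, dy, sad)
                if sad < early_stop:           # cheap early termination
                    return best
    return best
```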
30

Seliverstov, Ya. A., N. Yu. Pyshkina, Sh. S. Fahmi, and Ya. A. Khasan. "Systematization of algorithms for spectral processing of marine images." Morskie Intellektual'nye Tehnologii, no. 1(55) (March 3, 2022): 215–20. http://dx.doi.org/10.37220/mit.2022.55.1.029.

Full text
Abstract:
Image compression is currently essential for applications such as transmission and storage in the design of marine surveillance and control systems. The processes of spectral image compression and the types of information redundancy are analyzed. This article addresses the problem of systematizing and selecting the optimal image compression algorithm depending on the specified requirements for the quality of the transmitted video information and the bandwidth of the communication channel. A systematization of algorithms for compressing and restoring marine images is proposed, based on spectral transformations of the original signal from the spatial domain to the frequency domain: the discrete wavelet transform, the discrete cosine transform, and the pyramidal-recursive transform. A method for selecting an effective compression algorithm is developed, taking into account the statistical characteristics of the image signal. The advantages and disadvantages of various algorithms for compressing gray-scale marine images are considered and shown, an experimental comparison of compression algorithms on various 256×256 and 512×512 marine scenes is given, and information quality indicators of video processing systems are obtained.
31

P, Madhavee Latha, and Annis Fathima A. "REVIEW ON IMAGE AND VIDEO COMPRESSION STANDARDS." Asian Journal of Pharmaceutical and Clinical Research 10, no. 13 (2017): 373. http://dx.doi.org/10.22159/ajpcr.2017.v10s1.19760.

Full text
Abstract:
Nowadays, the number of photos taken each day on phones is growing exponentially, and the number of photos uploaded to the Internet is also increasing rapidly. This explosion of photos on the Internet and on personal devices such as phones poses a challenge for effective storage and transmission. Multimedia files contain text, images, audio, video, and animations; they are large and require a lot of disk space, so they take more time to move from one place to another over the Internet. Image compression is an effective way to reduce storage space and speed up transmission. Data compression is used everywhere on the Internet: in online videos, images, and music. Even though many different image compression schemes exist, current needs and applications require fast compression algorithms that produce images or video of acceptable quality with minimum size. In this paper, image and video compression standards are discussed.
32

Padmanabhan, S. Anantha, and Krishna Kumar. "An Efficient Video Compression Encoder Based on Wavelet Lifting Scheme in LSK." Journal of Computational and Theoretical Nanoscience 13, no. 10 (2016): 7581–91. http://dx.doi.org/10.1166/jctn.2016.5756.

Full text
Abstract:
This paper presents a video compression system using the wavelet lifting scheme. Video compression algorithms ("codecs") manipulate video signals to dramatically reduce the required storage and bandwidth while maximizing perceived video quality. There are four common methods for compression: the discrete cosine transform (DCT), vector quantization (VQ), fractal compression, and the discrete wavelet transform (DWT). A gradient-based motion estimation algorithm based on shape-motion prediction is used, which exploits the correlation between neighboring Binary Alpha Blocks (BABs) to match the MPEG-4 shape coding case and speed up the estimation process. A non-redundant wavelet transform is then implemented as iterated filter banks with downsampling operations. LSK operates without lists and is suitable for a fast, simple hardware implementation. Here an improved variant of the Set Partitioned Embedded bloCK (SPECK) image coder, called Improved Listless SPECK (ILSPECK), is used; ILSPECK codes several insignificant subbands with a single zero, which reduces the length of the output bit string as well as the encoding/decoding time.
33

Khanov, Alexander, Anastasija Shulzhenko, Anzhelika Voroshilova, Alexander Zubarev, Timur Karimov, and Shakeeb Fahmi. "Determining Thresholds for Optimal Adaptive Discrete Cosine Transformation." Algorithms 17, no. 8 (2024): 366. http://dx.doi.org/10.3390/a17080366.

Full text
Abstract:
The discrete cosine transform (DCT) is widely used for image and video compression; lossy algorithms such as JPEG, WebP, BPG, and many others are based on it. Multiple modifications of the DCT have been developed to improve its performance. One of them is the adaptive DCT (ADCT), designed to deal with heterogeneous image structure and found, for example, in the HEVC video codec. Adaptivity means that the image is divided into an uneven grid of squares: smaller ones retain information about details better, while larger squares are efficient for homogeneous backgrounds. The practical use of adaptive DCT algorithms is complicated by the lack of optimal threshold-search algorithms for the image partitioning procedure. In this paper, we propose a novel method for optimal threshold search in ADCT using a metric based on tonal distribution. We define two thresholds: p_m, the threshold defining solid mean coloring, and p_s, defining quadtree fragment splitting. In our algorithm, the values of these thresholds are calculated via polynomial functions of the tonal distribution of a particular image or fragment, with the polynomial coefficients determined by a dedicated optimization procedure on a dataset of images from the target domain, urban road scenes in our case. In the experimental part of the study, we show that ADCT allows a higher compression ratio than non-adaptive DCT at the same level of quality loss, up to 66% for acceptable quality. The proposed algorithm may be used directly for image compression or as the core of a video compression framework in traffic-demanding applications such as urban video surveillance systems.
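
The quadtree decision behind ADCT can be sketched as a three-way choice per square: code it as a single mean color, DCT it whole, or split it into quadrants. In the sketch below (an editor's illustration, not the paper's method), fixed thresholds on the fragment's standard deviation stand in for the paper's polynomial fits of the tonal distribution; the threshold values and minimum block size are assumptions.

```python
# Quadtree ADCT encoding sketch: mean / DCT / recursive split per square.
import numpy as np
from scipy.fft import dctn  # 2-D DCT for the fragments that are kept whole

def encode_quadtree(img, x, y, size, p_m=4.0, p_s=18.0, min_size=8):
    tile = img[y:y + size, x:x + size].astype(np.float64)
    spread = tile.std()
    if spread < p_m:                      # nearly flat: one mean value
        return ("mean", float(tile.mean()))
    if spread < p_s or size <= min_size:  # homogeneous enough: DCT the tile
        return ("dct", dctn(tile, norm="ortho"))
    h = size // 2                         # heterogeneous: recurse into quads
    return ("split", [encode_quadtree(img, x + dx, y + dy, h, p_m, p_s, min_size)
                      for dy in (0, h) for dx in (0, h)])
```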
34

Peng, Hong, Meiqing Wang, and Choi-Hong Lai. "Design of parallel algorithms for fractal video compression." International Journal of Computer Mathematics 84, no. 2 (2007): 193–202. http://dx.doi.org/10.1080/00207160601168456.

Full text
35

Zanbouri, Ali Hossein, Yahia Hasan, and Fatameh Hossein. "Quality of Video Streaming: Taxonomy." WSEAS TRANSACTIONS ON INFORMATION SCIENCE AND APPLICATIONS 22 (February 3, 2025): 215–33. https://doi.org/10.37394/23209.2025.22.19.

Full text
Abstract:
Real-time and live video streaming are very important topics in today's wired and wireless networks, so it is important to study the behavior, advantages, and disadvantages of the different techniques and algorithms involved. This article presents a comprehensive overview for researchers planning to work on the quality of video streaming. It covers video compression standards; error correction algorithms for improving streaming quality; forward error correction codes with feedback and forward error correction algorithms with unequal loss (or error) protection; layered video streaming and layered coding compression techniques; single-path and multi-path video streaming; video streaming over wireless networks; the problem of erasure packets and packet erasure networks/channels; error detecting and correcting algorithms; Unequal Error Protection (UEP) techniques and schemes; and recent research based on hybrid solutions over 3G, 4G, 5G, WiMAX, and Wi-Fi wireless networks.
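
The simplest forward-error-correction idea in this survey's scope can be shown in a few lines: one XOR parity packet per group lets the receiver rebuild any single lost packet without retransmission. This toy sketch (editor's illustration; real systems use Reed-Solomon or fountain codes) assumes equal-length packets.

```python
# XOR-parity FEC toy: repairs at most one erasure per packet group.
from functools import reduce

def xor_parity(packets: list[bytes]) -> bytes:
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

def recover(received: list, parity: bytes) -> list:
    lost = [i for i, p in enumerate(received) if p is None]
    assert len(lost) <= 1, "XOR parity repairs at most one erasure per group"
    if lost:
        survivors = [p for p in received if p is not None] + [parity]
        received[lost[0]] = xor_parity(survivors)  # XOR of the rest
    return received

group = [b"pktA", b"pktB", b"pktC"]
parity = xor_parity(group)
assert recover([b"pktA", None, b"pktC"], parity) == group
```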
36

Noor, Noor, and Qusay Abboodi Ali. "A New Method for Intelligent Multimedia Compression Based on Discrete Hartley Matrix." Fusion: Practice and Applications 16, no. 2 (2024): 108–17. http://dx.doi.org/10.54216/fpa.160207.

Full text
Abstract:
Multimedia data (video, audio, images) require storage space and transmission bandwidth when sent through social media networks. Despite rapid advances in the capabilities of digital communication systems, data sizes and the required transfer bandwidth continue to exceed the capabilities of available technology, especially among social media users. The recent growth of multimedia-based web applications such as WhatsApp, Telegram, and Messenger has created a need for more efficient ways to compress media data: network transmission of multimedia is relatively slow, email and social networks impose file-size limits, high-definition multimedia can reach gigabyte sizes, and modern smart cameras with high imaging resolution increase the bit rate of video, audio, and image files. The goal of data compression is therefore to represent media (video, audio, images, etc.) as accurately as possible with the minimum number of bits. Traditional compression methods are complex for users and require high processing power, and most existing algorithms lose data during compression and decompression while producing high bit rates for media data. This work therefore describes a new method for media compression based on a discrete Hartley matrix (128) that achieves high speed and a low bit rate for compressing multimedia data. The results show that the proposed algorithm achieves high speed with a low bit rate without losing any part of the data (video, sound, and image), and the majority of surveyed social media users are satisfied with the interactive compression system, which makes it easier to send video, audio, and image files via social media networks.
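
The transform at the core of this method, the discrete Hartley transform (DHT), is real-valued and can be obtained from an FFT as Re(F) - Im(F); it is its own inverse up to a 1/N factor. A minimal numpy sketch (editor's illustration; the 128-point size mirrors the paper's matrix but the code is not the paper's):

```python
# Discrete Hartley transform via FFT; DHT uses cas(x) = cos(x) + sin(x).
import numpy as np

def dht(x: np.ndarray) -> np.ndarray:
    F = np.fft.fft(x)
    return F.real - F.imag   # sum x*cos + sum x*sin, since Im(F) = -sum x*sin

def idht(X: np.ndarray) -> np.ndarray:
    return dht(X) / len(X)   # the DHT is an involution up to scaling by N

x = np.random.rand(128)      # 128-point block, matching the paper's matrix size
assert np.allclose(idht(dht(x)), x)
```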
37

Hu, Yongjian, Chang-Tsun Li, Yufei Wang, and Bei-bei Liu. "An Improved Fingerprinting Algorithm for Detection of Video Frame Duplication Forgery." International Journal of Digital Crime and Forensics 4, no. 3 (2012): 20–32. http://dx.doi.org/10.4018/jdcf.2012070102.

Full text
Abstract:
Frame duplication is a common form of digital video forgery, and state-of-the-art duplication detection approaches usually suffer from a heavy computational load. In this paper, the authors propose a new algorithm to detect duplicated frames based on video sub-sequence fingerprints. The fingerprints are extracted from the DCT coefficients of the temporally informative representative images (TIRIs) of the sub-sequences. Compared with similar algorithms, this study focuses on improving the fingerprints that represent video sub-sequences and on introducing a simple metric for matching them. Experimental results show that the proposed algorithm overall outperforms three related duplication forgery detection algorithms in computational efficiency, detection accuracy, and robustness against common video operations such as compression and brightness change.
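
A compact stand-in for this kind of pipeline: reduce each frame to a small low-frequency DCT signature and flag pairs whose signatures nearly repeat. The sketch below is an editor's simplification; the paper's TIRI construction and matching metric are more elaborate, and the 32x32 downsampling, coefficient count, and tolerance are assumptions.

```python
# DCT-fingerprint duplicate detection sketch (simplified, per-frame).
import numpy as np
from scipy.fft import dctn

def fingerprint(frame: np.ndarray, keep: int = 8) -> np.ndarray:
    # Crude downsample to ~32x32, then keep only low-frequency DCT terms.
    small = frame[::max(1, frame.shape[0] // 32),
                  ::max(1, frame.shape[1] // 32)][:32, :32]
    coeffs = dctn(small.astype(np.float64), norm="ortho")
    return coeffs[:keep, :keep].ravel()

def find_duplicates(frames, tol: float = 1e-3):
    prints = [fingerprint(f) for f in frames]
    return [(i, j)
            for i in range(len(prints))
            for j in range(i + 1, len(prints))
            if np.linalg.norm(prints[i] - prints[j])
               < tol * np.linalg.norm(prints[i])]
```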
38

Tang, Ning, Jin Cai, and Yuan Li. "An Enhanced Resolution Three-Dimensional Transformation Method Based on Discrete Wavelet Transform." Applied Mechanics and Materials 159 (March 2012): 41–45. http://dx.doi.org/10.4028/www.scientific.net/amm.159.41.

Full text
Abstract:
With the development of interactive multimedia technologies, image and video compression algorithms require better performance and functionality. Wavelet-transform-based embedded image coding is the basis of JPEG2000. Lossy image compression algorithms sacrifice perfect image reconstruction in favor of decreased storage requirements. The JPEG2000 algorithm was developed on the basis of discrete wavelet transform (DWT) techniques, which show how results achieved in different areas of information technology can be applied to enhance performance.
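
The lossy-DWT idea behind JPEG2000-style coders can be sketched in a few lines: transform, discard small detail coefficients, reconstruct. The sketch below assumes PyWavelets (`pywt`) is installed and uses the biorthogonal "bior4.4" wavelet as a JPEG2000-like choice; JPEG2000 itself adds quantization and EBCOT entropy coding on top of this step.

```python
# DWT threshold-compression sketch: keep only the largest detail coefficients.
import numpy as np
import pywt

def dwt_compress(img: np.ndarray, wavelet: str = "bior4.4", level: int = 2,
                 keep: float = 0.05) -> np.ndarray:
    coeffs = pywt.wavedec2(img.astype(np.float64), wavelet, level=level)
    # Pool all detail-band magnitudes to find a global keep-threshold.
    flat = np.concatenate([np.abs(c).ravel()
                           for triple in coeffs[1:] for c in triple])
    cut = np.quantile(flat, 1.0 - keep)   # keep the largest `keep` fraction
    thresholded = [coeffs[0]] + [
        tuple(np.where(np.abs(c) >= cut, c, 0.0) for c in triple)
        for triple in coeffs[1:]]
    return pywt.waverec2(thresholded, wavelet)
```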
APA, Harvard, Vancouver, ISO, and other styles
39

Tang, Jun Fang. "Research on Information Applied Technology with Video Compression Algorithms Based on the Optimal Multi-Band Haar Wavelet Transform." Advanced Materials Research 886 (January 2014): 633–36. http://dx.doi.org/10.4028/www.scientific.net/amr.886.633.

Full text
Abstract:
Video playback has become one of the most important forms of online communication. With the spread of stereoscopic video, large amounts of video data must be stored and transmitted while preserving the fluency and clarity of video-on-demand systems, so efficient compression coding of stereoscopic video data has become a hot topic. To address this problem, this paper proposes a video-on-demand compression algorithm based on the optimal multi-band Haar wavelet transform: the wavelet-transform model is first studied and then strengthened, extending the binary (two-band) wavelet theory to an octal (eight-band) wavelet system to obtain better compression capability. Simulation experiments show that the proposed algorithm achieves good compression performance not only under medium and high bit-rate conditions but also matches H.263 under low bit-rate conditions.
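The "binary to octal" extension can be pictured as a full wavelet-packet split: cascading three 2-band Haar stages yields eight uniform bands. The sketch below illustrates this idea; the recursion depth and normalization are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def haar_step(x):
    """One 2-band Haar split into (averages, differences), orthonormal scaling."""
    x = np.asarray(x, dtype=np.float64)
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def haar_packet(x, levels=3):
    """Full wavelet-packet split: 2**levels uniform bands (8 for levels=3)."""
    bands = [np.asarray(x, dtype=np.float64)]
    for _ in range(levels):
        nxt = []
        for b in bands:
            lo, hi = haar_step(b)
            nxt.extend([lo, hi])
        bands = nxt
    return bands

bands = haar_packet(np.random.rand(64))  # 8 bands of length 8 each
```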
APA, Harvard, Vancouver, ISO, and other styles
40

Silva, Giovane Gomes, Ícaro Gonçalves Siqueira, Mateus Grellert, and Claudio Machado Diniz. "Approximate Hardware Architecture for Interpolation Filter of Versatile Video Coding." Journal of Integrated Circuits and Systems 16, no. 2 (2021): 1–8. http://dx.doi.org/10.29292/jics.v16i2.327.

Full text
Abstract:
The new Versatile Video Coding (VVC) standard was recently developed to improve on the compression efficiency of previous video coding standards and to support new applications. This was achieved at the cost of an increase in the computational complexity of the encoder algorithms, which creates the need to develop hardware accelerators and to apply approximate computing techniques to achieve the performance and power dissipation required by systems that encode video. This work proposes an approximate hardware architecture for the interpolation filters defined in the VVC standard, targeting real-time processing of high-resolution video. The architecture can process video of up to 2560x1600 pixels at 30 fps with a power dissipation of 23.9 mW when operating at 522 MHz, with an average compression-efficiency degradation of only 0.41% compared to the default VVC encoder software configuration.
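The filters being accelerated are separable FIR interpolators. As a representative software analogue of the structure such hardware implements, the sketch below applies an 8-tap half-sample luma filter to one row; the taps are the half-pel coefficients defined in HEVC and carried over into VVC's default filter set, while the padding and clipping conventions are simplified here.

```python
import numpy as np

# Half-pel luma taps defined in HEVC and kept in VVC's default filter set
TAPS = np.array([-1, 4, -11, 40, 40, -11, 4, -1], dtype=np.int64)

def interp_half_pel_row(row):
    """Half-sample values between the integer samples of one padded 8-bit row."""
    row = np.asarray(row, dtype=np.int64)
    acc = np.convolve(row, TAPS, mode="valid")  # symmetric kernel, plain FIR
    # Normalize (the taps sum to 64) with rounding, then clip to the sample range
    return np.clip((acc + 32) >> 6, 0, 255)

row = np.arange(16) * 16                 # toy padded row of samples
half = interp_half_pel_row(row)
```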
APA, Harvard, Vancouver, ISO, and other styles
41

Suman, Srishty, Utkarsh Rastogi, and Rajat Tiwari. "Image Stitching Algorithms - A Review." Circulation in Computer Science 1, no. 2 (2016): 14–18. http://dx.doi.org/10.22632/ccs-2016-251-39.

Full text
Abstract:
Image stitching is the process of combining two or more images of the same scene into a single larger image. It is needed in many applications, such as video stabilization, video summarization, video compression, and panorama creation. The effectiveness of image stitching depends on overlap removal, matching of image intensities, and the technique used to blend the images. This paper reviews the various techniques devised for image stitching and their respective applications.
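A typical feature-based stitching pipeline of the kind surveyed (detect, match, estimate a homography, warp and blend) can be sketched with standard OpenCV calls; the feature count, match pruning, and RANSAC threshold below are illustrative choices.

```python
import cv2
import numpy as np

def stitch_pair(img1, img2):
    """Stitch img2 onto img1 via ORB features and a RANSAC homography."""
    orb = cv2.ORB_create(nfeatures=2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:100]
    src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    # Warp img2 into img1's frame, then paste img1 over the overlap region
    h, w = img1.shape[:2]
    pano = cv2.warpPerspective(img2, H, (w * 2, h))
    pano[0:h, 0:w] = img1
    return pano
```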
APA, Harvard, Vancouver, ISO, and other styles
42

Lyanguzov, A. A., and A. V. Korobeynikov. "Video Compression Performance Evaluation Method in Transmission via a Low-Speed Channel." Vestnik IzhGTU imeni M.T. Kalashnikova 25, no. 3 (2022): 74–81. http://dx.doi.org/10.22213/2413-1172-2022-3-74-81.

Full text
Abstract:
The paper presents an overview of work devoted to video compression algorithms based on artificial intelligence methods. Two main directions for the development of such algorithms are identified: the development of post-processing modules for the output of classical algorithms, and the development of algorithms that completely replace existing video codecs. The problem of choosing performance-evaluation criteria for video compression algorithms is considered. It was found that criteria based on standard deviation, such as PSNR, cannot be used to assess the quality of video compression algorithms operating under radio interference, because artifacts that inevitably arise from such interference strongly distort the evaluation result. An alternative quality-assessment method based on semantic frame analysis is proposed, which can be used to assess the noise immunity of video transmission algorithms. Like PSNR, the method compares the original and reconstructed video sequences frame by frame; unlike PSNR, it ignores artifacts caused by radio interference, because the semantic analysis searches for objects in each image using artificial-intelligence-based approaches. Objects found in the original and reconstructed frames are compared using an estimate based on their geometric positions in the image, and this indicator is then averaged over all frames of the sequence to obtain the resulting similarity score between the restored video and the original.
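To contrast the two kinds of criteria, the sketch below computes classical PSNR alongside a toy object-position similarity score; the object detector is abstracted away, and the normalization of the distance-based score is an assumption, not the paper's exact metric.

```python
import numpy as np

def psnr(orig, recon, peak=255.0):
    """Classical pixel-level fidelity measure."""
    mse = np.mean((orig.astype(np.float64) - recon.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def semantic_similarity(centers_orig, centers_recon, diag):
    """Score object-center displacement, ignoring pixel-level artifacts.
    centers_*: (N, 2) arrays of matched object centers; diag: frame diagonal."""
    if len(centers_orig) == 0:
        return 1.0
    d = np.linalg.norm(np.asarray(centers_orig) - np.asarray(centers_recon), axis=1)
    return float(np.mean(np.clip(1.0 - d / diag, 0.0, 1.0)))

# Per the paper, the per-frame scores are then averaged over the whole sequence.
```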
APA, Harvard, Vancouver, ISO, and other styles
43

U., R. Padma, and Jayachitra J. "SELF-EMBEDDING VIDEO WATERMARKING USING DUAL ORTHOGONAL COMPLEX CONTOURLET TRANSFORM WITH AUTOCORRELATION SYSTEM." International Journal of Research - GRANTHAALAYAH 3, no. 4 (2017): 89–98. https://doi.org/10.5281/zenodo.883606.

Full text
Abstract:
This paper presents a novel non-blind watermarking algorithm using the dual orthogonal complex contourlet transform. This transform is preferred for watermarking because its ability to capture directional edges and contours is superior to that of other transforms such as the cosine transform and the wavelet transform. Digital images and video in their raw form require an enormous amount of storage capacity, and such large data sets also contain a lot of redundant information; compression likewise increases the effective capacity of the communication channel. Image compression is performed with the Set Partitioning in Hierarchical Trees (SPIHT) algorithm based on Huffman coding; SPIHT is used here as a lossless compression algorithm that reduces file size with no loss in image quality, and the final results are compared in terms of bit error rate, PSNR, and MSE.
APA, Harvard, Vancouver, ISO, and other styles
44

Akimov, V. A. "Remote technologies in Education. Compression algorithms and data formats for transmission of text, sound and video information." Izvestiya MGTU MAMI 7, no. 4-2 (2013): 352–55. http://dx.doi.org/10.17816/2074-0530-68284.

Full text
Abstract:
The article describes features of applying information compression algorithms in multimedia learning technologies. Examples are given of lossy and lossless compression algorithms and of the basic compression formats for text, graphics, and audiovisual information. The effectiveness of various algorithms is analyzed, and recommendations are presented for their use with different types of information. An example of Huffman coding is given.
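Since the article closes with a Huffman-coding example, a compact sketch of the construction is given below: the two least frequent subtrees are merged greedily, so frequent symbols receive short codewords. The sample string is arbitrary.

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Greedy Huffman construction over symbol frequencies."""
    heap = [[freq, i, {sym: ""}] for i, (sym, freq) in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    tie = len(heap)  # tie-breaker so the code dicts are never compared
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, [f1 + f2, tie, merged])
        tie += 1
    return heap[0][2]

codes = huffman_codes("abracadabra")
encoded = "".join(codes[ch] for ch in "abracadabra")  # frequent 'a' gets the shortest codeword
```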
APA, Harvard, Vancouver, ISO, and other styles
45

ZHONG, JUNMEI, C. H. LEUNG, and Y. Y. TANG. "AN IMPROVED EMBEDDED ZEROTREE WAVELET IMAGE CODING METHOD BASED ON COEFFICIENT PARTITIONING USING MORPHOLOGICAL OPERATION." International Journal of Pattern Recognition and Artificial Intelligence 14, no. 06 (2000): 795–807. http://dx.doi.org/10.1142/s0218001400000490.

Full text
Abstract:
In recent years, wavelets have attracted great attention in both still image compression and video coding, and several novel wavelet-based image compression algorithms have been developed, one of which is Shapiro's embedded zerotree wavelet (EZW) algorithm. However, this algorithm still has some deficiencies. In this paper, after analyzing the deficiencies of EZW, a new algorithm based on quantized-coefficient partitioning using morphological operations is proposed. Instead of encoding the coefficients in each subband line by line, regions in which most of the quantized coefficients are significant are extracted by morphological dilation and encoded first. Zerotrees are then used to encode the remaining space, which contains mostly zeros. Experimental results show that the proposed algorithm is not only superior to EZW but also compares favorably with the most efficient wavelet-based image compression algorithms reported so far.
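The key step, extracting clustered significant coefficients with morphological dilation before zerotree coding, can be sketched in a few lines; the structuring element, iteration count, and threshold below are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def significant_regions(subband, threshold):
    """Mask of regions where quantized coefficients cluster as significant."""
    sig = np.abs(subband) >= threshold                 # significance map
    region = binary_dilation(sig, structure=np.ones((3, 3)), iterations=2)
    return region  # coefficients inside are coded first; zerotrees cover the rest
```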
APA, Harvard, Vancouver, ISO, and other styles
46

Yang, Ming, Chih-Cheng Hung, and Edward Jung. "Secure Information Delivery through High Bitrate Data Embedding within Digital Video and its Application to Audio/Video Synchronization." International Journal of Information Security and Privacy 6, no. 4 (2012): 71–93. http://dx.doi.org/10.4018/jisp.2012100104.

Full text
Abstract:
Secure communication has traditionally been ensured with data encryption, which has become easier to break than before due to the advancement of computing power. For this reason, information hiding techniques have emerged as an alternative for achieving secure communication. In this research, a novel information hiding methodology is proposed to deliver secure information within the transmission/broadcasting of digital video: secure data are embedded within the video frames through vector quantization, and at the receiver end the embedded information can be extracted without the presence of the original video content. The major performance goals of the system are visual transparency, a high bitrate, and robustness to lossy compression. Based on the proposed methodology, the authors have developed a novel synchronization scheme that ensures audio/video synchronization through speech-in-video techniques. Compared to existing algorithms, the main contributions of the proposed methodology are: (1) it achieves both a high bitrate and robustness against lossy compression; and (2) it investigates the impact of the embedded information on video compression performance, which has not been addressed in previous research. The proposed algorithm is very useful in practical applications such as secure communication, captioning, speech-in-video, and video-in-video.
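One common way to embed bits via vector quantization, offered here only as a hedged illustration of the general idea (the paper's exact codebook design is not given in the abstract), is to partition the codebook in two and quantize each block using the half selected by the payload bit; extraction is then blind, as the abstract requires.

```python
import numpy as np

def embed_bit(block, codebook, bit):
    """block: flattened pixel vector; codebook: (K, len(block)) array.
    Quantize using only the codewords whose index parity equals the bit."""
    half = codebook[bit::2]                      # even rows encode 0, odd rows 1
    idx = np.argmin(np.linalg.norm(half - block, axis=1))
    return half[idx]                             # replace the block with the codeword

def extract_bit(block, codebook):
    """Blind extraction: nearest codeword's index parity recovers the bit."""
    idx = np.argmin(np.linalg.norm(codebook - block, axis=1))
    return int(idx % 2)
```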
APA, Harvard, Vancouver, ISO, and other styles
47

Kulkarni, Nishad P., Aditya A. Patil, and Dr M. A. Gangarde. "Time-Domain Video Watermarking Algorithm for Data Security and Content Authentication." International Journal for Research in Applied Science and Engineering Technology 11, no. 5 (2023): 7485–95. http://dx.doi.org/10.22214/ijraset.2023.53489.

Full text
Abstract:
Video watermarking is an essential method for invisibly and reliably embedding information into digital videos. Its primary purposes include copyright protection, content authentication, and ownership verification. With the increasing prevalence of digital media and the ease of video sharing online, safeguarding intellectual property rights and ensuring video content integrity have become crucial. Robustness is a key requirement, as watermarked videos may be subjected to various attacks or modifications; robust video watermarking techniques employ error-correction codes, spread-spectrum modulation, or cryptographic algorithms to ensure the resilience of watermarks against attacks such as compression, filtering, and geometric transformations. Various aspects and performance characteristics must be taken into account while developing such an algorithm, and performance indicators such as NCC, PSNR, BER, WER, SSIM, VIFP, and VMAF must be evaluated.
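Two of the listed indicators are simple to state; the sketch below computes NCC and BER for a recovered watermark, with flattened-array inputs assumed for illustration.

```python
import numpy as np

def ncc(original_wm, extracted_wm):
    """Normalized cross-correlation between two watermark signals."""
    a = original_wm.astype(np.float64).ravel()
    b = extracted_wm.astype(np.float64).ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def ber(original_bits, extracted_bits):
    """Bit error rate of a recovered binary watermark."""
    a = np.asarray(original_bits).ravel()
    b = np.asarray(extracted_bits).ravel()
    return float(np.count_nonzero(a != b) / a.size)
```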
APA, Harvard, Vancouver, ISO, and other styles
48

Oujezdský, Aleš. "Optimizing Video Clips in Educational Materials." International Journal of Information and Communication Technologies in Education 1, no. 2 (2012): 68–79. http://dx.doi.org/10.1515/ijicte-2012-0006.

Full text
Abstract:
The use of videos from digital camcorders has become standard in education in recent years: the curriculum is easily accessible and appeals to a wider audience, and lessons use videos of various physical processes and chemical experiments. However, this format can cause problems, because video quality is often degraded in the final stage, when the video is prepared for placement in educational materials such as web pages, e-learning courses, or Flash multimedia objects. The final product of editing video from a digital camcorder is a DVD video; if we want to transfer it to the Web or other educational material, it is necessary to remove non-square pixels, deinterlace the video, and choose an appropriate compression. For these operations there are many interpolation algorithms (nearest neighbour, bilinear interpolation, bicubic interpolation), deinterlacing filters (weave, bob, blend), and compression tools. By selecting appropriate settings for these parameters, the video material can be optimized while maintaining the highest possible image quality. The final step before publishing the video is its conversion into one of the codecs in use; the codec's settings largely determine the final quality and size of the video clip.
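For concreteness, the two simplest deinterlacing filters named above can be sketched as follows; frames are assumed to be grayscale (H, W) arrays, and the field parity is an illustrative choice.

```python
import numpy as np

def deinterlace_bob(frame):
    """Bob: keep the even field, rebuild the odd lines by averaging neighbours."""
    out = frame.astype(np.float64).copy()
    out[1:-1:2] = (out[0:-2:2] + out[2::2]) / 2.0
    return out

def deinterlace_blend(frame):
    """Blend: average each line with the next, blurring the two fields together."""
    out = frame.astype(np.float64).copy()
    out[:-1] = (out[:-1] + out[1:]) / 2.0
    return out
```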
APA, Harvard, Vancouver, ISO, and other styles
49

Gonzalez Fernandez, Edgar, Ana Lucila Sandoval Orozco, and Luis Javier Garcia Villalba. "Digital Video Manipulation Detection Technique Based on Compression Algorithms." IEEE Transactions on Intelligent Transportation Systems 23, no. 3 (2022): 2596–605. http://dx.doi.org/10.1109/tits.2021.3132227.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Cai, Xun, and Jae S. Lim. "Algorithms for Transform Selection in Multiple-Transform Video Compression." IEEE Transactions on Image Processing 22, no. 12 (2013): 5395–407. http://dx.doi.org/10.1109/tip.2013.2284073.

Full text
APA, Harvard, Vancouver, ISO, and other styles