Academic literature on the topic 'JPEG algorithm'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'JPEG algorithm.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "JPEG algorithm"

1

Zhu, Yongjun, Wenbo Liu, Qian Shen, Yin Wu, and Han Bao. "JPEG Lifting Algorithm Based on Adaptive Block Compressed Sensing." Mathematical Problems in Engineering 2020 (July 11, 2020): 1–17. http://dx.doi.org/10.1155/2020/2873830.

Full text
Abstract:
This paper proposes a JPEG lifting algorithm based on adaptive block compressed sensing (ABCS), which resolves the fusion of the ABCS algorithm for 1-dimensional vector data processing with the JPEG compression algorithm for 2-dimensional image data processing, and improves the compression rate for images of the same quality in comparison with existing JPEG-like image compression algorithms. Specifically, mean information entropy and multifeature saliency indexes provide a basis for adaptive blocking and observation, respectively; a joint model and curve fitting are adopted for bit rate control; and a noise analysis model is introduced to improve the noise resistance of the current JPEG decoding algorithm. Experimental results show that the proposed method achieves good fidelity and noise resistance, especially at medium compression ratios.
APA, Harvard, Vancouver, ISO, and other styles
2

Zhan, Honghui. "Image compression and reconstruction based on GUI and Huffman coding." Journal of Physics: Conference Series 2580, no. 1 (2023): 012025. http://dx.doi.org/10.1088/1742-6596/2580/1/012025.

Full text
Abstract:
Huffman coding is an important part of image compression technology; the image compression platform here is based on a GUI, in which Huffman coding is widely used. This paper introduces the basic principle of the Huffman algorithm, compares it with arithmetic coding (AC) and run-length encoding (RLE), and expounds on the application of these three algorithms in JPEG compression. In the given example, the AC algorithm combined block-based fine-texture models with adaptive arithmetic coding. The RLE algorithm used automatic thresholding, direction judgment, and selective value counts to improve its compression efficiency. The JPEG algorithm adopted an adaptive quantization table to reduce the distortion rate of image compression. By demonstrating improved examples of the basic image compression algorithms, this paper shows that a hybrid compression algorithm can achieve better compression and distortion rates. In the future, the improved basic algorithms can be combined on top of the original JPEG algorithm, and different algorithms can be integrated in the GUI to suit different use environments.
APA, Harvard, Vancouver, ISO, and other styles
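Since the abstract above centers on Huffman coding, a minimal Python sketch of Huffman code construction may be useful; this is an illustration of the general technique only, not code from the cited paper:

```python
import heapq
from collections import Counter

def huffman_codes(data: bytes) -> dict:
    """Build a Huffman code table: frequent symbols get shorter codes."""
    freq = Counter(data)
    if len(freq) == 1:  # degenerate case: a single distinct symbol
        return {next(iter(freq)): "0"}
    # Heap entries: (frequency, tie-breaker, {symbol: code_so_far}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        # Merge the two rarest subtrees, prefixing their codes with 0 and 1.
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

codes = huffman_codes(b"aaaabbc")
# 'a' (most frequent) never receives a longer code than 'c' (least frequent)
```

The resulting table is prefix-free, which is what makes the bitstream decodable without separators.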
3

Brysina, Iryna Victorivna, and Victor Olexandrovych Makarichev. "DISCRETE ATOMIC COMPRESSION OF DIGITAL IMAGES." RADIOELECTRONIC AND COMPUTER SYSTEMS, no. 4 (December 20, 2018): 17–33. http://dx.doi.org/10.32620/reks.2018.4.02.

Full text
Abstract:
The subject matter of this paper is the discrete atomic compression (DAC) of digital images, a lossy compression process based on the discrete atomic transform (DAT). The goal is to investigate the efficiency of the DAC algorithm. We solve the following tasks: to develop a general compression scheme using the discrete atomic transform and to compare the results of the DAC and JPEG algorithms. In this article, we use the methods of digital image processing, atomic function theory, and approximation theory. To compare the efficiency of DAC with the JPEG compression algorithm, we use sets of classic test images and classic aerial images. We analyze compression ratio (CR) and loss of quality using uniform (U), root mean square (RMS), and peak signal-to-noise ratio (PSNR) metrics. DAC is an algorithm with flexible parameters. In this paper, we use the “Optimal” and “Allowable” modes of this algorithm and compare them with the corresponding modes of JPEG. We obtain the following results: 1) DAC is much better than JPEG by the U-criterion of quality loss; 2) there are no significant differences between DAC and JPEG by the RMS and PSNR criteria; 3) the compression ratio of DAC is much higher than that of JPEG. In other words, the DAC algorithm saves more memory than the JPEG compression algorithm with no worse quality. These results are due to fundamental properties of atomic functions such as good approximation properties, a high order of smoothness, and the existence of a locally supported basis in the spaces of atomic functions. Since generalized Fup-functions have the same convenient properties, such compression results can also be achieved by applying a generalized discrete atomic transform based on these functions. We also discuss the obtained results in terms of approximation theory and function theory.
Conclusions: 1) it is possible to achieve better results with DAC than with JPEG; 2) application of DAC to image compression is preferable to JPEG when recognition algorithms are to be used; 3) further development and investigation of the DAC algorithm are promising.
APA, Harvard, Vancouver, ISO, and other styles
4

Bouza, M. K. "Analysis and modification of graphic data compression algorithms." Artificial Intelligence 25, no. 4 (2020): 32–40. http://dx.doi.org/10.15407/jai2020.04.032.

Full text
Abstract:
The article examines the JPEG and JPEG-2000 compression algorithms on various graphic images. The main steps of both algorithms are given, and their advantages and disadvantages are noted. The main differences between JPEG and JPEG-2000 are analyzed. It is noted that the JPEG-2000 algorithm allows removing visually unpleasant effects, which makes it possible to highlight important areas of the image and improve the quality of their compression. The features of each step of the algorithms are considered and the difficulties of their implementation are compared. The effectiveness of each algorithm is demonstrated on the example of a full-color image of the BSU emblem. The compression ratios obtained with both algorithms are shown in the corresponding tables, for a wide range of quality values from 1 to 10. We studied various types of images: black and white, business graphics, indexed, and full color. A modified LZW (Lempel-Ziv-Welch) algorithm is presented, applicable to compressing a variety of information from text to images. The modification is based on limiting the graphic file to 256 colors, which makes it possible to index a color with one byte instead of three. The efficiency of this modification grows with increasing image size. The modified LZW algorithm can be adapted to any image, from single-color to full-color. The prepared test images were indexed to the required number of colors using the FastStone Image Viewer program. For each image, seven copies were obtained, containing 4, 8, 16, 32, 64, 128, and 256 colors, respectively. Testing showed that the modified version of the LZW algorithm achieves, on average, twice the compression ratio. However, on the class of full-color images, both algorithms showed the same results.
The developed modification of the LZW algorithm can be successfully applied in the field of site design, especially in the case of so-called flat design. The comparative characteristics of the basic and modified methods are presented.
APA, Harvard, Vancouver, ISO, and other styles
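The one-byte-per-color indexing described in the abstract above can be sketched in a few lines; this is my own illustration of the palette idea, not the paper's implementation:

```python
def index_colors(pixels):
    """Map RGB triples to 1-byte palette indices (assumes <= 256 distinct colors)."""
    palette = []   # index -> (r, g, b)
    lookup = {}    # (r, g, b) -> index
    indexed = bytearray()
    for rgb in pixels:
        if rgb not in lookup:
            if len(palette) >= 256:
                raise ValueError("more than 256 distinct colors")
            lookup[rgb] = len(palette)
            palette.append(rgb)
        indexed.append(lookup[rgb])
    return palette, bytes(indexed)

pixels = [(255, 0, 0), (255, 0, 0), (0, 0, 255)]
palette, indexed = index_colors(pixels)
# 3 bytes of indices replace 9 bytes of raw RGB, plus a small palette
```

Each pixel now costs one byte instead of three, which is the source of the compression gain the article reports for images with few colors.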
5

Marcelo, Alvin, Paul Fontelo, Miguel Farolan, and Hernani Cualing. "Effect of Image Compression on Telepathology." Archives of Pathology & Laboratory Medicine 124, no. 11 (2000): 1653–56. http://dx.doi.org/10.5858/2000-124-1653-eoicot.

Full text
Abstract:
Abstract Context.—For practitioners deploying store-and-forward telepathology systems, optimization methods such as image compression need to be studied. Objective.—To determine if Joint Photographic Expert Group (JPG or JPEG) compression, a lossy image compression algorithm, negatively affects the accuracy of diagnosis in telepathology. Design.—Double-blind, randomized, controlled trial. Setting.—University-based pathology departments. Participants.—Resident and staff pathologists at the University of Illinois, Chicago, and University of Cincinnati, Cincinnati, Ohio. Intervention.—Compression of raw images using the JPEG algorithm. Main Outcome Measures.—Image acceptability, accuracy of diagnosis, confidence level of pathologist, image quality. Results.—There was no statistically significant difference in the diagnostic accuracy between noncompressed (bit map) and compressed (JPG) images. There were also no differences in the acceptability, confidence level, and perception of image quality. Additionally, rater experience did not significantly correlate with degree of accuracy. Conclusions.—For providers practicing telepathology, JPG image compression does not negatively affect the accuracy and confidence level of diagnosis. The acceptability and quality of images were also not affected.
APA, Harvard, Vancouver, ISO, and other styles
6

Leger, Alain M. "JPEG still picture compression algorithm." Optical Engineering 30, no. 7 (1991): 947. http://dx.doi.org/10.1117/12.55896.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Song, Hong Mei, Hai Wei Mu, and Dong Yan Zhao. "Study on Nearly Lossless Compression with Progressive Decoding." Advanced Materials Research 926-930 (May 2014): 1751–54. http://dx.doi.org/10.4028/www.scientific.net/amr.926-930.1751.

Full text
Abstract:
A nearly lossless compression algorithm with progressive transmission and decoding is proposed. The image data are grouped by frequency based on the DCT transform; the JPEG-LS core algorithms (texture prediction and Golomb coding) are then applied to each group of data in order to achieve progressive image transmission and decoding. Experiments on standard test images comparing this algorithm with JPEG-LS show that its compression ratio is very similar to that of JPEG-LS; the algorithm loses a little image information but gains the ability of progressive transmission and decoding.
APA, Harvard, Vancouver, ISO, and other styles
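Grouping DCT coefficients by frequency, as the abstract describes, is commonly done in zigzag order; a small sketch (my illustration, with arbitrary band cut points, not the authors' code) splits an 8x8 coefficient block into low-, mid-, and high-frequency groups for successive scans:

```python
def zigzag_indices(n=8):
    """Traversal order of an n x n block from low to high frequency."""
    return sorted(((u, v) for u in range(n) for v in range(n)),
                  key=lambda p: (p[0] + p[1],
                                 p[0] if (p[0] + p[1]) % 2 else -p[0]))

def split_bands(block, cuts=(6, 28)):
    """Partition coefficients into groups sent in successive scans."""
    order = zigzag_indices()
    flat = [block[u][v] for u, v in order]
    low, mid, high = flat[:cuts[0]], flat[cuts[0]:cuts[1]], flat[cuts[1]:]
    return low, mid, high
```

Sending `low` first yields a coarse image immediately; `mid` and `high` refine it, which is the essence of progressive decoding.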
8

Беляев, Н. Н., О. А. Бебенина, and В. Е. Бородкина. "DEVELOPEMENT AN ALGORITM FOR RECOGNIZING TEXT DATA IN DIGITAL GRAPHIC IMAGES." СИСТЕМЫ УПРАВЛЕНИЯ И ИНФОРМАЦИОННЫЕ ТЕХНОЛОГИИ, no. 2(84) (March 1, 2021): 75–78. http://dx.doi.org/10.36622/vstu.2021.84.2.016.

Full text
Abstract:
The article presents an approach to developing an algorithm for recognizing text data within JPEG-format digital graphic images. A hypothesis is considered about the influence of text data content in JPEG digital graphic images on the distribution of the values of the discrete cosine transform coefficients in the frequency domain of such images. Statistical classifier models are determined that solve the problem of recognizing text data in JPEG images based on analysis of their frequency domain. A recognition algorithm is proposed that implements the following procedures: training the selected classifiers and recognizing text data, taking into account the statistical characteristics of the distribution of frequency-domain coefficients in JPEG-format images.
APA, Harvard, Vancouver, ISO, and other styles
9

Saptariani, Trini, Sarifudin Madenda, Ernastuti Ernastuti, and Widya Silfianti. "Accelerating Compression Time of the standard JPEG by Employing The Quantized YCbCr Color Space Algorithm." International Journal of Electrical and Computer Engineering (IJECE) 8, no. 6 (2018): 4343. http://dx.doi.org/10.11591/ijece.v8i6.pp4343-4351.

Full text
Abstract:
In this paper, we propose a quantized YCbCr color space (QYCbCr) technique employed in standard JPEG. The objective of this work is to accelerate the computational time of the standard JPEG image compression algorithm. This development of standard JPEG, named the QYCbCr algorithm, merges two processes, YCbCr color space conversion and quantization, which in standard JPEG are performed separately. The merger forms a new single integrated color conversion process applied prior to the DCT step, subsequently eliminating the separate quantization process. The equation of the QYCbCr color conversion is built on the chrominance and luminance properties of the human visual system, derived from the quantization matrices. Experimental results on images of different sizes show that the QYCbCr algorithm runs 4 to 8 times faster than standard JPEG, and also provides a higher compression ratio and better image quality.
APA, Harvard, Vancouver, ISO, and other styles
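The merging described above can be illustrated in a simplified form. This sketch is mine, with an assumed single scalar quantization step per channel rather than the paper's full quantization matrices, and with the usual ±128 chroma offsets omitted: scaling the rows of the RGB→YCbCr conversion matrix by 1/q performs conversion and quantization in one pass.

```python
# Standard full-range RGB -> YCbCr conversion matrix (ITU-R BT.601 weights).
RGB2YCBCR = [
    [ 0.299,     0.587,     0.114   ],
    [-0.168736, -0.331264,  0.5     ],
    [ 0.5,      -0.418688, -0.081312],
]

def fold_quantization(q):
    """Fold per-channel quantization steps q = (qY, qCb, qCr) into the
    conversion matrix, so conversion + quantization is a single matmul."""
    return [[c / q[i] for c in RGB2YCBCR[i]] for i in range(3)]

def apply(matrix, pixel):
    return [sum(m * c for m, c in zip(row, pixel)) for row in matrix]

pixel = (200, 100, 50)
q = (2.0, 4.0, 4.0)
one_pass = apply(fold_quantization(q), pixel)                    # merged step
two_pass = [v / qi for v, qi in zip(apply(RGB2YCBCR, pixel), q)]  # separate steps
# one_pass and two_pass agree up to floating-point error
```

Folding the scale factors into the matrix removes one full pass over the image, which is the kind of saving the paper's integrated process exploits.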
More sources

Dissertations / Theses on the topic "JPEG algorithm"

1

Gondlyala, Siddharth Rao. "Enhancing the JPEG Ghost Algorithm using Machine Learning." Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-20692.

Full text
Abstract:
Background: With the boom in the internet space and social media platforms, a large number of images are being shared. With this rise and advancements in technology, many image editing tools have become available, giving rise to digital image manipulation. Being able to identify a forged image is vital to avoid misinformation or misrepresentation. This study focuses on splicing image forgery and localizes the forged region in the tampered image. Objectives: The main purpose of the thesis is to extend the capability of the JPEG Ghost model by localizing the tampering in the image. This is done by analyzing the difference curves formed by compressions in the tampered image, and thereafter comparing the performance of the models. Methods: The study is carried out by two research methods: a Literature Review, whose main goal is gaining insight into the existing studies in terms of the approaches and techniques followed; and an Experiment, whose main goal is to improve the JPEG ghost algorithm by localizing the forged area in a tampered image and to compare three machine learning models on performance metrics. The machine learning models compared are Random Forest, XGBoost, and Support Vector Machine. Results: The performance of the above-mentioned models has been compared on the same dataset. Results from the experiment showed that XGBoost had the best overall performance, with a Jaccard Index of 79.8%. Conclusions: The research revolves around localization of the forged region in a tampered image using the concept of JPEG ghosts. We have concluded that the performance of the XGBoost model is the best, followed by Random Forest and then Support Vector Machine.
APA, Harvard, Vancouver, ISO, and other styles
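The JPEG-ghost idea this thesis extends rests on a simple observation: re-quantizing data at its original quantization step introduces no new error, so the error-versus-quality curve dips at the prior compression quality. A toy illustration with scalar quantization standing in for a full JPEG codec (the values and steps are my own):

```python
def quantize(values, step):
    """Round each value to the nearest multiple of the quantization step."""
    return [round(v / step) * step for v in values]

# Simulate a region previously "compressed" with step 7.
original = [23, 91, 140, 58, 202, 17, 76, 133]
once = quantize(original, 7)

# Re-quantize at a range of candidate steps and record the squared error.
errors = {q: sum((a - b) ** 2 for a, b in zip(once, quantize(once, q)))
          for q in range(2, 13)}
# errors[7] == 0: the "ghost" of the original quantization step
```

In the real algorithm the same dip is sought per image region across JPEG qualities, which is what localizes the spliced area.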
2

Chennupati, Om sai teja. "A structured approach to JPEG tampering detection using enhanced fusion algorithm." Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-21195.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Akdag, Sadik Bahaettin. "An Image Encryption Algorithm Robust To Post-encryption Bitrate Conversion." Master's thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/12607710/index.pdf.

Full text
Abstract:
In this study, a new method is proposed to protect JPEG still images through encryption by employing integer-to-integer transforms and frequency domain scrambling in DCT channels. Different from existing methods in the literature, the encrypted image can be further compressed, i.e. transcoded, after the encryption. The method provides selective encryption/security level with the adjustment of its parameters. The encryption method is tested with various images and compared with the methods in the literature in terms of scrambling performance, bandwidth expansion, key size and security. Furthermore this method is applied to the H.263 video sequences for the encryption of I-frames.
APA, Harvard, Vancouver, ISO, and other styles
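Frequency-domain scrambling of the kind this thesis applies to DCT channels can be illustrated by a key-seeded, invertible permutation of coefficients; this is a generic sketch of the principle, not the thesis's actual scheme:

```python
import random

def scramble(coeffs, key):
    """Permute coefficients with a key-seeded shuffle (invertible)."""
    perm = list(range(len(coeffs)))
    random.Random(key).shuffle(perm)
    return [coeffs[i] for i in perm]

def unscramble(scrambled, key):
    """Rebuild the same permutation from the key and invert it."""
    perm = list(range(len(scrambled)))
    random.Random(key).shuffle(perm)
    out = [0] * len(scrambled)
    for pos, i in enumerate(perm):
        out[i] = scrambled[pos]
    return out

coeffs = [12, -3, 0, 5, 7, -1, 2, 9]
enc = scramble(coeffs, key=42)
assert unscramble(enc, key=42) == coeffs
```

Because a permutation only reorders values, the scrambled data remains valid input for further compression, which is the transcodability property the abstract emphasizes.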
4

Kadri, Imen. "Controlled estimation algorithms of disparity map using a compensation compression scheme for stereoscopic image coding." Thesis, Paris 13, 2020. http://www.theses.fr/2020PA131002.

Full text
Abstract:
In recent years, many applications using 3D technology have appeared, such as 3D television screens, auto-stereoscopic displays, and stereoscopic videoconferencing. These applications require techniques well suited to efficiently compressing the large volume of data to be transmitted or stored, since a stereoscopic pair results from generating two views of the same scene and thus doubles the information needed compared to a 2D image. This thesis concerns stereoscopic image coding and focuses in particular on improving disparity map estimation in a Disparity Compensated Compression (DCC) scheme. Classically, a block-matching algorithm estimates the disparity map between the left and right views by minimizing the mean squared error between the original view and its reconstruction without disparity compensation. The difference between the original right view and its prediction, the residual error, is then encoded and decoded and injected to reconstruct the right view by compensation (i.e., refinement). Our first algorithm takes this refinement into account when estimating the disparity map, giving a proof of concept that selecting the disparity according to the compensated image instead of the predicted one is more efficient: simulations show that it not only reduces inter-view redundancy but also improves the quality of the reconstructed, compensated view compared with the usual disparity-compensated coding method. This comes, however, at the expense of increased numerical complexity. To deal with this shortcoming, a simplified model is proposed of how the JPEG coder used for the residual error, through the quantization of the DCT components, affects the compensation; it reduces the computational complexity and also improves the quality of the decoded stereoscopic image in a DCC context. In the last part, a metric jointly minimizing bitrate and distortion, based on the bitrate needed to encode the disparity map and the distortion of the predicted view, is proposed to select the disparity map by combining two existing stereoscopic image coding algorithms in a DCC scheme.
APA, Harvard, Vancouver, ISO, and other styles
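The block-matching step described in the abstract above can be sketched in a 1-D toy version (my illustration, not the thesis's algorithm): for each block of the right view, search nearby horizontal shifts in the left view for the one minimizing the sum of squared differences.

```python
def block_match(left_row, right_row, block=4, max_disp=8):
    """For each block of the right row, find the horizontal shift into the
    left row minimizing the sum of squared differences (1-D toy version)."""
    disparities = []
    for start in range(0, len(right_row) - block + 1, block):
        target = right_row[start:start + block]
        best_d, best_err = 0, float("inf")
        for d in range(0, max_disp + 1):
            if start + d + block > len(left_row):
                break
            cand = left_row[start + d:start + d + block]
            err = sum((a - b) ** 2 for a, b in zip(target, cand))
            if err < best_err:
                best_d, best_err = d, err
        disparities.append(best_d)
    return disparities

# Right view is the left view shifted by 3 pixels: recovered disparity is 3.
left = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120, 130, 140, 150, 160]
right = left[3:] + [0, 0, 0]
```

The thesis's contribution is to score candidate disparities against the compensated view rather than this raw prediction error, at the cost of extra computation.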
5

Fawcett, Roger James. "Efficient practical image compression." Thesis, University of Oxford, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.365711.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Thom, Gary A., and Alan R. Deutermann. "A COMPARISON OF VIDEO COMPRESSION ALGORITHMS." International Foundation for Telemetering, 2000. http://hdl.handle.net/10150/608290.

Full text
Abstract:
International Telemetering Conference Proceedings / October 23-26, 2000 / Town & Country Hotel and Conference Center, San Diego, California<br>Compressed video is necessary for a variety of telemetry requirements. A large number of competing video compression algorithms exist. This paper compares the ability of these algorithms to meet criteria which are of interest for telemetry applications. Included are: quality, compression, noise susceptibility, motion performance and latency. The algorithms are divided into those which employ inter-frame compression and those which employ intra-frame compression. A video tape presentation will also be presented to illustrate the performance of the video compression algorithms.
APA, Harvard, Vancouver, ISO, and other styles
7

Grecos, Christos. "Low cost algorithms for image/video coding and rate control." Thesis, University of South Wales, 2001. https://pure.southwales.ac.uk/en/studentthesis/low-cost-algorithms-for-imagevideo-coding-and-rate-control(40ae7449-3372-4f21-aaec-91ad339907e9).html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Samuel, Sindhu. "Digital rights management (DRM) : watermark encoding scheme for JPEG images." Pretoria : [s.n.], 2007. http://upetd.up.ac.za/thesis/available/etd-09122008-182920/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Brunet, Dominique. "Métriques perceptuelles pour la compression d'images. Étude et comparaison des algorithmes JPEG et JPEG2000." Thesis, Université Laval, 2007. http://www.theses.ulaval.ca/2007/25159/25159.pdf.

Full text
Abstract:
The JPEG and JPEG2000 image compression algorithms are presented and then compared using a perceptual metric. The JPEG algorithm decomposes an image with the discrete cosine transform, approximates the transformed coefficients by uniform quantization, and encodes the result with Huffman coding. The JPEG2000 algorithm instead uses a wavelet transform that decomposes an image into several resolutions. We describe and justify the construction of orthogonal or biorthogonal wavelets having as many of the following properties as possible: real values, compact support, several vanishing moments, regularity, and symmetry. We then briefly explain how JPEG2000 works and show that the RMSE metric is a poor measure of perceptual error. We therefore present some ideas for constructing a perceptual metric based on a model of the human visual system, describing in particular the SSIM index, and suggest it as a tool for evaluating image quality.
Finally, using the SSIM metric, we conclude that JPEG2000 outperforms JPEG.
APA, Harvard, Vancouver, ISO, and other styles
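The JPEG pipeline the abstract describes (DCT, uniform quantization, then entropy coding) can be sketched for a single 8x8 block; this is a simplified illustration in which the block values and the flat quantization step are my own and the entropy coding stage is omitted:

```python
import math

def dct2(block):
    """Naive 2-D DCT-II of an 8x8 block, with JPEG's normalization."""
    n = 8
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            cu = 1 / math.sqrt(2) if u == 0 else 1.0
            cv = 1 / math.sqrt(2) if v == 0 else 1.0
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / 16)
                    * math.cos((2 * y + 1) * v * math.pi / 16)
                    for x in range(n) for y in range(n))
            out[u][v] = 0.25 * cu * cv * s
    return out

def idct2(coeffs):
    """Inverse of dct2."""
    n = 8
    out = [[0.0] * n for _ in range(n)]
    for x in range(n):
        for y in range(n):
            s = 0.0
            for u in range(n):
                for v in range(n):
                    cu = 1 / math.sqrt(2) if u == 0 else 1.0
                    cv = 1 / math.sqrt(2) if v == 0 else 1.0
                    s += (cu * cv * coeffs[u][v]
                          * math.cos((2 * x + 1) * u * math.pi / 16)
                          * math.cos((2 * y + 1) * v * math.pi / 16))
            out[x][y] = 0.25 * s
    return out

def quantize(coeffs, step=16):
    """Uniform quantization: most high-frequency terms collapse to 0."""
    return [[round(c / step) for c in row] for row in coeffs]

# Level-shifted sample block (values in -128..127, a horizontal gradient).
block = [[col * 16 - 64 for col in range(8)] for _ in range(8)]
quantized = quantize(dct2(block))
restored = idct2([[q * 16 for q in row] for row in quantized])
```

The sparse `quantized` array of small integers is what the entropy coder (Huffman in baseline JPEG) then compresses; the loss lives entirely in the quantization step.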
More sources

Books on the topic "JPEG algorithm"

1

Mitchell, Joan L., ed. JPEG still image data compression standard. Van Nostrand Reinhold, 1992.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Acharya, Tinku. JPEG2000 standard for image compression: Concepts, algorithms and VLSI architectures. Wiley-Interscience, 2005.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Acharya, Tinku, and Ping-Sing Tsai. JPEG2000 Standard for Image Compression: Concepts, Algorithms and VLSI Architectures. Wiley-Interscience, 2004.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Acharya, Tinku, and Ping-Sing Tsai. JPEG2000 Standard for Image Compression: Concepts, Algorithms and VLSI Architectures. Wiley & Sons, Incorporated, John, 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "JPEG algorithm"

1

Liu, Hongmei, Huiying Fu, and Jiwu Huang. "A Watermarking Algorithm for JPEG File." In Advances in Multimedia Information Processing - PCM 2006. Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11922162_38.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Fridrich, Jessica, Miroslav Goljan, and Dorin Hogea. "Steganalysis of JPEG Images: Breaking the F5 Algorithm." In Information Hiding. Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-36415-3_20.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Zhang, Bin, Yang Xin, Xinxin Niu, Kaiguo Yuan, and Zhang Bin. "An Anti-JPEG Compression Image Perceptual Hashing Algorithm." In Communications in Computer and Information Science. Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-23220-6_51.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Furht, Borko, Stephen W. Smoliar, and HongJiang Zhang. "JPEG Algorithm for Full-Color Still Image Compression." In Video and Image Processing in Multimedia Systems. Springer US, 1995. http://dx.doi.org/10.1007/978-1-4615-2277-5_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Li, Jian, Huanhuan Zhao, Bin Ma, et al. "High-Quality PRNU Anonymous Algorithm for JPEG Images." In Digital Forensics and Watermarking. Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-97-2585-4_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Tuba, Eva, Milan Tuba, Dana Simian, and Raka Jovanovic. "JPEG Quantization Table Optimization by Guided Fireworks Algorithm." In Lecture Notes in Computer Science. Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-59108-7_23.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Bevinakoppa, Savitri. "Implementation of the JPEG Algorithm on Three Parallel Computers." In Still Image Compression on Parallel Computer Architectures. Springer US, 1999. http://dx.doi.org/10.1007/978-1-4615-4967-3_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Rong, Ma, Yao Gaohua, and Guo Hui. "Color Image Fast Encryption Algorithm Based on JPEG Encoding." In Machine Learning and Intelligent Communications. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04409-0_28.

9. Shang, Yuanyuan, Huizhuo Niu, Sen Ma, Xuefeng Hou, and Chuan Chen. "Design and Implementation for JPEG-LS Algorithm Based on FPGA." In Computing and Intelligent Systems. Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-24091-1_48.

10. Kakollu, Vanitha, G. Narsimha, and P. Chandrasekhar Reddy. "Fuzzy C-Means-Based JPEG Algorithm for Still Image Compression." In Smart Intelligent Computing and Applications. Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-1921-1_44.

Conference papers on the topic "JPEG algorithm"

1. Siam, Abdullah Al, Md Maruf Hassan, and Touhid Bhuiyan. "Secure Medical Imaging: A DICOM to JPEG 2000 Conversion Algorithm with Integrated Encryption." In 2025 IEEE 4th International Conference on AI in Cybersecurity (ICAIC). IEEE, 2025. https://doi.org/10.1109/icaic63015.2025.10848861.

2. Viraktamath, S. V., and G. V. Attimarad. "Performance analysis of JPEG algorithm." In 2011 International Conference on Signal Processing, Communication, Computing and Networking Technologies (ICSCCN). IEEE, 2011. http://dx.doi.org/10.1109/icsccn.2011.6024627.

3. Bagbaba, Ahmet Cagri, Berna Ors, Osman Semih Kayhan, and Ahmet Turan Erozan. "JPEG image Encryption via TEA algorithm." In 2015 23rd Signal Processing and Communications Applications Conference (SIU). IEEE, 2015. http://dx.doi.org/10.1109/siu.2015.7130282.

4. Amashi, Radhika, Vishwanath P. Baligar, and Priyadarshini Kalwad. "Experimental study on JPEG-LS algorithm." In 2017 IEEE International Conference on Power, Control, Signals and Instrumentation Engineering (ICPCSI). IEEE, 2017. http://dx.doi.org/10.1109/icpcsi.2017.8391965.

5. Sun, Baosheng, Daofu Gong, and Fenlin Liu. "A Robust Watermarking Algorithm For JPEG Images." In 2017 2nd Joint International Information Technology, Mechanical and Electronic Engineering Conference (JIMEC 2017). Atlantis Press, 2017. http://dx.doi.org/10.2991/jimec-17.2017.11.

6. Shi, Xiaowei, Fenlin Liu, Daofu Gong, and Jing Jing. "An Authentication Watermark Algorithm for JPEG images." In 2009 International Conference on Availability, Reliability and Security. IEEE, 2009. http://dx.doi.org/10.1109/ares.2009.8.

7. Kim, Taekon, Hyun M. Kim, Ping-sing Tsai, and Tinku Acharya. "Rate-distortion optimization algorithm for JPEG 2000." In International Symposium on Optical Science and Technology, edited by Mark S. Schmalz. SPIE, 2003. http://dx.doi.org/10.1117/12.451258.

8. Liu, Yan-Qing, and Gui-Lian Su. "A Steganalytic Algorithm Aiming at JPEG Image." In 2009 Second International Conference on Information and Computing Science. IEEE, 2009. http://dx.doi.org/10.1109/icic.2009.163.

9. Aravind, R., G. L. Cash, and J. P. Worth. "On Implementing The Jpeg Still-Picture Compression Algorithm." In 1989 Symposium on Visual Communications, Image Processing, and Intelligent Robotics Systems, edited by William A. Pearlman. SPIE, 1989. http://dx.doi.org/10.1117/12.970090.

10. Yunxiang, Long, Hao Huang, and Jian Zheng. "Research on a JPEG Digital Image Encryption Algorithm." In 2020 IEEE 6th International Conference on Computer and Communications (ICCC). IEEE, 2020. http://dx.doi.org/10.1109/iccc51575.2020.9345233.

Reports on the topic "JPEG algorithm"

1. Allen, Christopher I. Demonstration of Inexact Computing Implemented in the JPEG Compression Algorithm using Probabilistic Boolean Logic applied to CMOS Components. Defense Technical Information Center, 2015. http://dx.doi.org/10.21236/ad1002538.

2. Libert, John M., Shahram Orandi, and John D. Grantham. Comparison of the WSQ and JPEG 2000 image compression algorithms on 500 ppi fingerprint imagery. National Institute of Standards and Technology, 2012. http://dx.doi.org/10.6028/nist.ir.7781.
