Journal articles on the topic 'JPEG 2000 Compression'

Consult the top journal articles below for your research on the topic 'JPEG 2000 Compression.' Each entry gives the full bibliographic reference, followed by the publication's abstract whenever one is available in the metadata.

1. Skog, Kasper, Tomáš Kohout, Tomáš Kašpárek, Antti Penttilä, Monika Wolfmayr, and Jaan Praks. "Lossless Hyperspectral Image Compression in Comet Interceptor and Hera Missions with Restricted Bandwidth." Remote Sensing 17, no. 5 (2025): 899. https://doi.org/10.3390/rs17050899.
Abstract:
Lossless image compression is vital for missions with limited data transmission bandwidth. Reducing file sizes enables faster transmission and increased scientific gains from transient events. This study compares two wavelet-based image compression algorithms, CCSDS 122.0 and JPEG 2000, used in the European Space Agency Comet Interceptor and Hera missions, respectively, in varying scenarios. The JPEG 2000 implementation is sourced from the JasPer library, whereas a custom implementation was written for CCSDS 122.0. The performance analysis for both algorithms consists of compressing simulated asteroid images in the visible and near-infrared spectral ranges. In addition, all test images were noise-filtered to study the effect of the amount of noise on both compression ratio and speed. The study finds that JPEG 2000 achieves consistently higher compression ratios and benefits from decreased noise more than CCSDS 122.0. However, CCSDS 122.0 produces comparable results faster than JPEG 2000 and is substantially less computationally complex. On the other hand, JPEG 2000 allows dynamic (entropy-permitting) reduction of the bit depth of internal data structures to 8 bits, halving the memory allocation, while CCSDS 122.0 always works in 16-bit mode. These results contribute valuable knowledge about the behavioral characteristics of both algorithms and provide insight for entities planning to use either algorithm on board planetary missions.
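The compression-ratio measurement described above can be reproduced in miniature with any JPEG 2000 codec. A minimal sketch using Pillow, assuming it is built with OpenJPEG support; the file name is a placeholder, and the paper's own pipeline uses JasPer, not Pillow:

```python
# Compress an image with reversible (lossless) JPEG 2000 and report the
# compression ratio against the raw pixel size.
import os
from PIL import Image

img = Image.open("asteroid_frame.png").convert("L")   # 8-bit grayscale
raw_bytes = img.width * img.height                    # uncompressed size

# irreversible=False selects the reversible 5/3 wavelet, i.e. lossless mode
img.save("asteroid_frame.jp2", "JPEG2000", irreversible=False)

ratio = raw_bytes / os.path.getsize("asteroid_frame.jp2")
print(f"lossless JPEG 2000 compression ratio: {ratio:.2f}:1")
```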

2. Zabala, Alaitz, Raffaele Vitulli, and Xavier Pons. "Impact of CCSDS-IDC and JPEG 2000 Compression on Image Quality and Classification." Journal of Electrical and Computer Engineering 2012 (2012): 1–13. http://dx.doi.org/10.1155/2012/761067.
Abstract:
This study measures the impact of both on-board and user-side lossy image compression (CCSDS-IDC and JPEG 2000) on image quality and classification. The Sentinel-2 Image Performance Simulator was modified to include these compression algorithms in order to produce Sentinel-2 simulated images with on-board lossy compression. A multitemporal set of Landsat images was used for the user-side compression scenario in order to study a crop area. The performance of several compressors was evaluated by computing the Signal-to-Noise Ratio (SNR) of the compressed images. The overall accuracy of land-cover classifications of these images was also evaluated. The results show that on-board CCSDS performs better than JPEG 2000 in terms of compression fidelity, especially at lower compression ratios (from CR 2:1 up to CR 4:1, i.e., 8 to 4 bpppb). The effect of compression on land cover classification follows the same trends, but compression fidelity may not be enough to assess the impact of compression on end-user applications. If compression is applied by end-users, the results show that 3D-JPEG 2000 obtains higher compression fidelity than CCSDS and JPEG 2000 with other parameterizations. This is due to the high dynamic range of the images (representing reflectances*10000), which JPEG 2000 is able to exploit better.

3. Oh, Tick Hui, and Rosli Besar. "Medical Image Compression Using JPEG-2000 and JPEG: A Comparison Study." Journal of Mechanics in Medicine and Biology 2, no. 03n04 (2002): 313–28. http://dx.doi.org/10.1142/s021951940200054x.
Abstract:
Due to constrained bandwidth and storage capacity, medical images must be compressed before transmission and storage. However, compression reduces image fidelity, especially when the image is compressed at a lower bit rate, which cannot be tolerated in the medical field. In this paper, the compression performance of the new JPEG-2000 and the more conventional JPEG is studied. The parameters used for comparison include the compression efficiency, peak signal-to-noise ratio (PSNR), picture quality scale (PQS), and mean opinion score (MOS). Three types of medical images are used: X-ray, magnetic resonance imaging (MRI) and ultrasound. Overall, the study shows that JPEG-2000 compression is more acceptable than, and superior to, JPEG for lossy compression.

4. Bernstein, Herbert J., Alexei Soares, Kimberly Horvat, and Jean Jakoncic. "Massive Compression for High Data Rate Macromolecular Crystallography (HDRMX): Impact on Diffraction Data and Subsequent Structural Analysis." Structural Dynamics 12, no. 2_Supplement (2025): A147. https://doi.org/10.1063/4.0000456.
Abstract:
New higher-count-rate, integrating, large area X-ray detectors with framing rates as high as 17,400 images per second are beginning to be available. Data from these detectors are always compressed losslessly, but systems may not keep up with these data rates, and the files may still be larger than seems necessary. We propose that such MX experiments will require lossy compression algorithms to keep up with data throughput and capacity for long-term storage, but note that some information may be lost. Indeed, one might employ dramatic lossy compression only for archiving of data after structures are solved and published. Can we minimize this loss with acceptable impact on structural information? To explore this question, we have considered several approaches: summing short sequences of images to reverse fine phi-slicing, binning to create the effect of larger pixels, use of JPEG-2000 lossy wavelet-based compression from the movie industry, and the use of Hcompress which is a Haar-wavelet-based lossy compression from astronomy. In each of these last two methods one can specify approximately how much one wants the result to be compressed from the starting-file size. We have also explored the effect of combinations of summing and binning with Hcompress or JPEG-2000. These provide particularly effective lossy compressions that retain essential information for structure solution from coherent Bragg reflections. There are many lossy compressions to consider. For this work we have experimented with lossy compression of less than 10 to over 50,000. Caution is needed to avoid over-compression that loses significant structural data and damages the quality of peak integrations. The combined use of modest degrees of binning and summing actually strengthens the weak reflections, which helps to protect them from the impact of the massive compressions provided by Hcompress or JPEG-2000. See the figures below for the impact of JPEG-2000 and Hcompress by themselves and in combination with binning and summing by two. The bottom half of the first figure shows the impact of JPEG-2000 compressions on weak reflections. J2k200 keeps the weak peaks findable but distorts the shoulders. The higher JPEG-2000 compressions do serious damage, losing the peak entirely for j2k1000. The top half of the first figure shows that Hcompress does better but also loses the peak entirely for high compressions. In the second figure, the weak peak has been protected by modest binning and summing allowing useful overall compression ratios well over 500 to 1 to be achieved in many cases and over 1000 to 1 in some cases. The lesson we have learned is that with care to protect weak peaks with modest binning and summing, and with care to avoid over-compression, in many cases, massive lossy compression can be applied successfully while retaining the information necessary to find and integrate Bragg reflections and to solve structures with good statistics.
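The two protective steps the authors combine with lossy compression, summing consecutive frames and binning pixels, are simple array reductions. A sketch under the assumption that the frames sit in a NumPy array of shape (n_frames, rows, cols); this illustrates the idea, not the authors' pipeline:

```python
import numpy as np

def sum_pairs(frames):
    """Sum consecutive pairs of frames (coarsens fine phi-slicing by 2)."""
    n = frames.shape[0] // 2 * 2
    return frames[:n:2] + frames[1:n:2]

def bin2x2(frame):
    """Sum 2x2 pixel neighbourhoods, emulating a detector with larger pixels."""
    r, c = frame.shape[0] // 2 * 2, frame.shape[1] // 2 * 2
    f = frame[:r, :c]
    return f[0::2, 0::2] + f[0::2, 1::2] + f[1::2, 0::2] + f[1::2, 1::2]

frames = np.random.poisson(5, size=(4, 512, 512)).astype(np.uint32)
protected = np.stack([bin2x2(f) for f in sum_pairs(frames)])
print(frames.shape, "->", protected.shape)   # (4, 512, 512) -> (2, 256, 256)
```

Both steps add counts rather than averaging them, which is why weak reflections gain signal before the lossy codec sees the data.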

5. Marcelo, Alvin, Paul Fontelo, Miguel Farolan, and Hernani Cualing. "Effect of Image Compression on Telepathology." Archives of Pathology & Laboratory Medicine 124, no. 11 (2000): 1653–56. http://dx.doi.org/10.5858/2000-124-1653-eoicot.
Abstract:
Context.—For practitioners deploying store-and-forward telepathology systems, optimization methods such as image compression need to be studied.
Objective.—To determine if Joint Photographic Expert Group (JPG or JPEG) compression, a lossy image compression algorithm, negatively affects the accuracy of diagnosis in telepathology.
Design.—Double-blind, randomized, controlled trial.
Setting.—University-based pathology departments.
Participants.—Resident and staff pathologists at the University of Illinois, Chicago, and University of Cincinnati, Cincinnati, Ohio.
Intervention.—Compression of raw images using the JPEG algorithm.
Main Outcome Measures.—Image acceptability, accuracy of diagnosis, confidence level of pathologist, image quality.
Results.—There was no statistically significant difference in the diagnostic accuracy between noncompressed (bit map) and compressed (JPG) images. There were also no differences in the acceptability, confidence level, and perception of image quality. Additionally, rater experience did not significantly correlate with degree of accuracy.
Conclusions.—For providers practicing telepathology, JPG image compression does not negatively affect the accuracy and confidence level of diagnosis. The acceptability and quality of images were also not affected.

6. Barina, David, and Ondrej Klima. "JPEG 2000: guide for digital libraries." Digital Library Perspectives 36, no. 3 (2020): 249–63. http://dx.doi.org/10.1108/dlp-03-2020-0014.
Abstract:
Purpose: The joint photographic experts group (JPEG) 2000 image compression system is being used for cultural heritage preservation. The authors are aware of over a dozen big memory institutions worldwide using this format. This paper aims to review and explain choices for end users to help resolve trade-offs that these users are likely to encounter in practice.
Design/methodology/approach: The JPEG 2000 format is quite complex and therefore sometimes considered a preservation risk. A lossy compression is governed by a number of parameters that control compression speed and the rate-distortion trade-off. Their inappropriate adjustment may fairly easily lead to sub-optimal compression performance. This paper provides general guidelines for selecting the most appropriate parameters for a specific application.
Findings: This paper serves as a guide for the preservation of digital heritage in cultural heritage institutions, including libraries, archives and museums.
Originality/value: This paper serves as a guide for the preservation of digital heritage in cultural heritage institutions, including libraries, archives and museums.

7. Horrigue, Layla, Refka Ghodhbani, Albia Maqbool, et al. "Efficient Hardware Accelerator and Implementation of JPEG 2000 MQ Decoder Architecture." Engineering, Technology & Applied Science Research 14, no. 2 (2024): 13463–69. http://dx.doi.org/10.48084/etasr.7065.
Abstract:
Due to the extensive use of multimedia technologies, there is a pressing need for advancements and enhanced efficiency in picture compression. The JPEG 2000 standard aims to meet the need for encoding still pictures. JPEG 2000 is an internationally recognized standard for compressing still images. It provides a wide range of features and offers superior compression ratios and interesting possibilities compared to traditional JPEG approaches. Nevertheless, the MQ decoder in the JPEG 2000 standard presents a substantial obstacle for real-time applications. In order to fulfill the demands of real-time processing, it is imperative to meticulously devise a high-speed MQ decoder architecture. This work presents a novel MQ decoder architecture that is both high-speed and area-efficient, making it comparable to previous designs and well-suited for chip implementation. The design is implemented using the VHDL hardware description language and is synthesized with Xilinx ISE 14.7 and Vivado 2015.1. The implementation findings show that the design functions at a frequency of 438.5 MHz on Virtex-6 and 757.5 MHz on Zynq-7000. For these particular frequencies, the calculated frame rate is 63.1 frames per second.

8. Kulkarni, Shivali D., Ameya K. Naik, and Nitin S. Nagori. "2D Image Transmission Using Bandwidth Efficient Mapping Technique." International Journal of Image and Graphics 10, no. 4 (2010): 559–73. http://dx.doi.org/10.1142/s0219467810003883.
Abstract:
Transmitting images over a wireless channel requires that an optimum compression ratio be maintained along with good image quality. This becomes a critical issue especially at higher values of bit error rate (BER). The Joint Photographic Experts Group (JPEG) standard and its successor JPEG 2000 provide excellent compression ratios, but image reconstruction becomes highly difficult under extreme noise conditions. We present a mapping technique which gives better compression compared to the existing techniques. Also, its performance is excellent even at higher BERs. This is supported by the results presented for JPEG, JPEG 2000, and the mapping technique under fading channel conditions. Moreover, it is observed that the presence of high levels of noise has a negligible effect on the reconstruction of images encoded using the mapping technique.

9. Žalik, Borut, Damjan Strnad, Štefan Kohek, et al. "FLoCIC: A Few Lines of Code for Raster Image Compression." Entropy 25, no. 3 (2023): 533. http://dx.doi.org/10.3390/e25030533.
Abstract:
A new approach is proposed for lossless raster image compression employing interpolative coding. A new multifunction prediction scheme is presented first. Then, interpolative coding, which has not been applied frequently for image compression, is explained briefly. Its simplification is introduced in regard to the original approach. It is determined that the JPEG LS predictor reduces the information entropy slightly better than the multi-functional approach. Furthermore, the interpolative coding was moderately more efficient than the most frequently used arithmetic coding. Finally, our compression pipeline is compared against JPEG LS, JPEG 2000 in the lossless mode, and PNG using 24 standard grayscale benchmark images. JPEG LS turned out to be the most efficient, followed by JPEG 2000, while our approach using simplified interpolative coding was moderately better than PNG. The implementation of the proposed encoder is extremely simple and can be performed in less than 60 lines of programming code for the coder and 60 lines for the decoder, which is demonstrated in the given pseudocodes.
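The JPEG LS predictor referred to above is the median edge detector (MED); for each pixel it predicts from the left (a), upper (b) and upper-left (c) neighbours. A direct sketch:

```python
def med_predict(a, b, c):
    """JPEG-LS median edge detector: pick min/max at an edge, planar otherwise."""
    if c >= max(a, b):
        return min(a, b)
    if c <= min(a, b):
        return max(a, b)
    return a + b - c

print(med_predict(100, 110, 90))   # 110: c is below both neighbours -> max(a, b)
```

The encoder then codes the prediction residuals, whose entropy is what the paper estimates when choosing between prediction schemes.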

10. Skodras, A., C. Christopoulos, and T. Ebrahimi. "The JPEG 2000 still image compression standard." IEEE Signal Processing Magazine 18, no. 5 (2001): 36–58. http://dx.doi.org/10.1109/79.952804.

11. Kwan, Chiman, Jude Larkin, Bence Budavari, and Bryan Chou. "Compression Algorithm Selection for Multispectral Mastcam Images." Signal & Image Processing: An International Journal (SIPIJ) 10, no. 1 (2019): 1–14. https://doi.org/10.5281/zenodo.2620945.
Abstract:
The two mast cameras (Mastcam) onboard the Mars rover, Curiosity, are multispectral imagers with nine bands in each camera. Currently, the images are compressed losslessly using JPEG, which can achieve only two to three times compression. We present a two-step approach to compressing multispectral Mastcam images. First, we propose to apply principal component analysis (PCA) to compress the nine bands into three or six bands. This step optimally compresses the 9-band images through spectral correlation between the bands. Second, several well-known image compression codecs from the literature, such as JPEG, JPEG-2000 (J2K), X264, and X265, are applied to compress the 3-band or 6-band images coming out of PCA. The performance of different algorithms was assessed using four well-known performance metrics. Extensive experiments using actual Mastcam images have been performed to demonstrate the proposed framework. We observed that perceptually lossless compression can be achieved at a 10:1 compression ratio. In particular, the performance gain of an approach using a combination of PCA and X265 is at least 5 dB in terms of peak signal-to-noise ratio (PSNR) at a 10:1 compression ratio over that of JPEG when using our proposed approach.
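The first step of the pipeline, PCA along the spectral axis, reduces the nine bands to a few decorrelated component bands. A hedged sketch with NumPy; the array shapes and data are placeholders for illustration, not the authors' exact code:

```python
import numpy as np

cube = np.random.rand(9, 128, 128)            # 9 spectral bands (placeholder)
X = cube.reshape(9, -1)                       # bands x pixels
Xc = X - X.mean(axis=1, keepdims=True)        # centre each band
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
pcs = (U[:, :3].T @ Xc).reshape(3, 128, 128)  # three principal-component "bands"
# pcs would then be handed to a 2D codec (JPEG, J2K, X264/X265, ...);
# an approximate cube is recovered as mean + U[:, :3] @ pcs.reshape(3, -1).
```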

12. Hussain, S. K., and G. Raja. "A JPEG 2000 Based Hybrid Image Compression Technique for Medical Images." Nucleus 48, no. 4 (2011): 287–93. https://doi.org/10.71330/thenucleus.2011.822.
Abstract:
Use of lossy compression for medical images can introduce compression error that may be read as a diagnostic problem by a medical doctor. Hybrid schemes, a combination of lossy and lossless compression, are used to achieve higher compression ratios without compromising the subjective quality of medical images. This paper proposes a new hybrid compression method for medical images. Different combinations of lossy and lossless compression schemes (RLE, LZW, JPEG LS, JPEG and JPEG2000) are implemented to find the best hybrid compression combination, keeping the subjective quality of the medical image as a benchmark. X-ray images are used for experimentation. Experimental results show that the hybrid combination of lossless JPEG2000 and lossy JPEG2000 produces optimized results without compromising the subjective quality of medical images required for diagnostics. The proposed hybrid combination has an average compression ratio, space saving, MSE and PSNR of 0.21, 78.97, 1.16 and 47.58, respectively, for all the medical images used in experimentation. The proposed hybrid scheme can be used for medical image compression.

13. Brahimi, Tahar, Fouad Khelifi, and Abdellah Kacha. "An efficient JPEG-2000 based multimodal compression scheme." Multimedia Tools and Applications 80, no. 14 (2021): 21241–60. http://dx.doi.org/10.1007/s11042-021-10776-5.

14. Jóźków, Grzegorz. "Terrestrial Laser Scanning Data Compression Using JPEG-2000." PFG – Journal of Photogrammetry, Remote Sensing and Geoinformation Science 85, no. 5 (2017): 293–305. http://dx.doi.org/10.1007/s41064-017-0027-y.

15. Steingrímsson, Úlfar, and Klaus Simon. "Quality Assessment of the JPEG 2000 Compression Standard." Conference on Colour in Graphics, Imaging, and Vision 2, no. 1 (2004): 337–42. http://dx.doi.org/10.2352/cgiv.2004.2.1.art00067.

16. Starosolski, Roman. "Hybrid Adaptive Lossless Image Compression Based on Discrete Wavelet Transform." Entropy 22, no. 7 (2020): 751. http://dx.doi.org/10.3390/e22070751.
Abstract:
A new hybrid transform for lossless image compression exploiting a discrete wavelet transform (DWT) and prediction is the main new contribution of this paper. Simple prediction is generally considered ineffective in conjunction with DWT but we applied it to subbands of DWT modified using reversible denoising and lifting steps (RDLSs) with step skipping. The new transform was constructed in an image-adaptive way using heuristics and entropy estimation. For a large and diverse test set consisting of 499 photographic and 247 non-photographic (screen content) images, we found that RDLS with step skipping allowed effectively combining DWT with prediction. Using prediction, we nearly doubled the JPEG 2000 compression ratio improvements that could be obtained using RDLS with step skipping. Because for some images it might be better to apply prediction instead of DWT, we proposed compression schemes with various tradeoffs, which are practical contributions of this study. Compared with unmodified JPEG 2000, one scheme improved the compression ratios of photographic and non-photographic images, on average, by 1.2% and 30.9%, respectively, at the cost of increasing the compression time by 2% and introducing only minimal modifications to JPEG 2000. Greater ratio improvements, exceeding 2% and 32%, respectively, are attainable at a greater cost.

17. Livada, Časlav, Tomislav Horvat, and Alfonzo Baumgartner. "Novel Block Sorting and Symbol Prediction Algorithm for PDE-Based Lossless Image Compression: A Comparative Study with JPEG and JPEG 2000." Applied Sciences 13, no. 5 (2023): 3152. http://dx.doi.org/10.3390/app13053152.
Abstract:
In this paper, we present a novel compression method based on partial differential equations complemented by block sorting and symbol prediction. Block sorting is performed using the Burrows–Wheeler transform, while symbol prediction is performed using the context mixing method. With these transformations, the range coder is used as a lossless compression method. The objective and subjective quality evaluation of the reconstructed image illustrates the efficiency of this new compression method and is compared with the current standards, JPEG and JPEG 2000.

18. Sanjith, S., and R. Ganesan. "Determining the Quality of Compression in High Resolution Satellite Images Using Different Compression Methods." International Journal of Engineering Research in Africa 20 (October 2015): 202–17. http://dx.doi.org/10.4028/www.scientific.net/jera.20.202.
Abstract:
Measuring image quality is a complex and hard process, since human opinion is affected by physical and psychological parameters. Many techniques have been invented and proposed for image quality analysis, but none of them suits every purpose. Assessment of image quality plays an important role in image processing. In this paper we present experimental results comparing the quality of different satellite images (ALOS, RapidEye, SPOT4, SPOT5, SPOT6, SPOTMap) after compression using four different compression methods, namely Joint Photographic Experts Group (JPEG), Embedded Zerotree Wavelet (EZW), Set Partitioning in Hierarchical Trees (SPIHT), and JPEG 2000. The Mean Square Error (MSE), Signal to Noise Ratio (SNR) and Peak Signal to Noise Ratio (PSNR) values are calculated to determine the quality of the high resolution satellite images after compression.
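The quality figures named above follow the standard definitions; a minimal sketch for 8-bit images held as NumPy arrays (the test data here are synthetic):

```python
import numpy as np

def mse(ref, test):
    return np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)

def psnr(ref, test, peak=255.0):
    m = mse(ref, test)
    return float("inf") if m == 0 else 10 * np.log10(peak ** 2 / m)

ref = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
noisy = np.clip(ref + np.random.normal(0, 2, ref.shape), 0, 255).astype(np.uint8)
print(f"MSE = {mse(ref, noisy):.2f}, PSNR = {psnr(ref, noisy):.2f} dB")
```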

19. Al-Khayyat, Kamal A., Imad F. Al-Shaikhli, and V. Vijayakuumar. "On Randomness of Compressed Data Using Non-parametric Randomness Tests." Bulletin of Electrical Engineering and Informatics 7, no. 1 (2018): 63–69. http://dx.doi.org/10.11591/eei.v7i1.902.
Abstract:
Four randomness tests were used to test the outputs (compressed files) of four lossless compression algorithms: JPEG-LS and JPEG-2000 are image-dedicated algorithms, while 7z and Bzip2 are general-purpose algorithms. The relationship between the results of the randomness tests and the compression ratio was investigated. This paper reports the important relationship between the statistical information behind these tests and the compression ratio. It shows that this statistical information is almost the same, at least for the four lossless algorithms under test. This information shows that 50% of the compressed data are groupings of runs, 50% have positive signs when comparing adjacent values, 66% of the files contain turning points, and, using the Cox-Stuart test, 25% of the files give positive signs, which reflects the similarity aspects of compressed data. Regarding the relationship between the compression ratio and this statistical information, the paper also shows that the greater the values of these statistics, the greater the compression ratio obtained.

20. Bouza, M. K. "Analysis and modification of graphic data compression algorithms." Artificial Intelligence 25, no. 4 (2020): 32–40. http://dx.doi.org/10.15407/jai2020.04.032.
Abstract:
The article examines the JPEG and JPEG-2000 compression algorithms on various graphic images. The main steps of the operation of both algorithms are given, and their advantages and disadvantages are noted. The main differences between JPEG and JPEG-2000 are analyzed. It is noted that the JPEG-2000 algorithm allows removing visually unpleasant effects. This makes it possible to highlight important areas of the image and improve the quality of their compression. The features of each step of the algorithms are considered and the difficulties of their implementation are compared. The effectiveness of each algorithm is demonstrated by the example of a full-color image of the BSU emblem. The compression ratios obtained with both algorithms are shown in the corresponding tables. Compression ratios are obtained for a wide range of quality values from 1 to 10. Various types of images were studied: black and white, business graphics, indexed and full color. A modified LZW (Lempel-Ziv-Welch) algorithm is presented, which is applicable for compressing a variety of information from text to images. The modification is based on limiting the graphic file to 256 colors. This made it possible to index the color with one byte instead of three. The efficiency of this modification grows with increasing image size. The modified LZW algorithm can be adapted to any image, from single-color to full-color. The prepared test images were indexed to the required number of colors using the FastStone Image Viewer program. For each image, seven copies were obtained, containing 4, 8, 16, 32, 64, 128 and 256 colors, respectively. Testing results showed that the modified version of the LZW algorithm achieves, on average, twice the compression ratio. However, on the class of full-color images, both algorithms showed the same results. The developed modification of the LZW algorithm can be successfully applied in the field of site design, especially in the case of so-called flat design. The comparative characteristics of the basic and modified methods are presented.
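To ground the discussion, a compact LZW encoder over bytes (a generic textbook version, not the paper's modification; the 256-color restriction described above amounts to feeding it one palette index per pixel instead of three color bytes):

```python
def lzw_encode(data: bytes) -> list[int]:
    """Classic LZW: emit dictionary indices while growing the dictionary."""
    table = {bytes([i]): i for i in range(256)}
    out, w = [], b""
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc
        else:
            out.append(table[w])
            table[wc] = len(table)    # register the new phrase
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out

print(lzw_encode(b"ABABABA"))   # [65, 66, 256, 258]
```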

21. Rahman Hasso, Maha, and Sahlah Ali. "Applying Standard JPEG 2000 Part One on Image Compression." AL-Rafidain Journal of Computer Sciences and Mathematics 14, no. 1 (2020): 13–33. http://dx.doi.org/10.33899/csmj.2020.164796.

22. Nguyen, C., and G. R. Redinbo. "Fault tolerance design in JPEG 2000 image compression system." IEEE Transactions on Dependable and Secure Computing 2, no. 1 (2005): 57–75. http://dx.doi.org/10.1109/tdsc.2005.11.

23. Nosratinia, A. "Postprocessing of JPEG-2000 images to remove compression artifacts." IEEE Signal Processing Letters 10, no. 10 (2003): 296–99. http://dx.doi.org/10.1109/lsp.2003.817179.

24. Mobeen, Nuzhat, Channappa A., and Suresh B. "Image compression methods for efficient storage and transmission." World Journal of Advanced Research and Reviews 11, no. 1 (2021): 265–78. http://dx.doi.org/10.30574/wjarr.2021.11.1.0172.
Abstract:
This paper presents a detailed and comprehensive review of image compression methods, emphasizing their role in optimizing both storage and transmission efficiency across various domains, from everyday use in social media to specialized applications like medical imaging and satellite data processing. We systematically explore both traditional and contemporary image compression techniques, categorizing them into lossless and lossy methods, transform-based approaches, and the latest advancements in machine learning-based compression. Lossless compression techniques, including Run-Length Encoding (RLE), Huffman Coding, Lempel-Ziv-Welch (LZW), and the Portable Network Graphics (PNG) format, are discussed for their ability to preserve image quality perfectly, albeit at the cost of relatively lower compression ratios. Conversely, lossy compression methods, such as JPEG and fractal compression, offer significant file size reduction by discarding non-essential data, while still maintaining acceptable visual quality for many practical applications. We further delve into transform-based approaches like Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT), which form the backbone of popular standards such as JPEG and JPEG 2000, enabling more efficient data representation in the frequency domain. Additionally, the study highlights emerging machine learning and deep learning techniques, such as autoencoders and Generative Adversarial Networks (GANs), that are pushing the boundaries of image compression by achieving unprecedented compression ratios while minimizing perceptual loss in image quality. Through a comparative analysis, we evaluate these methods based on multiple performance metrics, including compression ratio, computational complexity, image fidelity (measured via Peak Signal-to-Noise Ratio, PSNR, and Structural Similarity Index, SSIM), and their practical applications across different industries. Our findings suggest that while traditional methods such as JPEG, PNG, and JPEG 2000 remain widely adopted due to their simplicity and efficiency, emerging techniques driven by deep learning show great potential in adapting to specific image characteristics, achieving higher compression ratios, and better preserving image quality under extreme compression. Finally, this paper identifies key challenges and trends in the field, such as the increasing computational demands of advanced techniques, the need for adaptive compression strategies, and the importance of standardization for broad industry adoption. We conclude that while traditional methods will continue to play a significant role, the future of image compression lies in the integration of machine learning and content-aware technologies that dynamically optimize compression performance across diverse image types and application contexts.
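As a concrete taste of the simplest lossless family surveyed above, a toy run-length encoder and decoder (illustrative only; real RLE file formats add headers and escape rules):

```python
def rle_encode(data: bytes) -> list[tuple[int, int]]:
    """Collapse each run of identical bytes into a (value, length) pair."""
    runs, i = [], 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1
        runs.append((data[i], j - i))
        i = j
    return runs

def rle_decode(runs) -> bytes:
    return b"".join(bytes([value]) * length for value, length in runs)

row = b"\x00\x00\x00\xff\xff\x00"
assert rle_decode(rle_encode(row)) == row
print(rle_encode(row))   # [(0, 3), (255, 2), (0, 1)]
```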

25. Pinheiro, Antonio. "JPEG column: 82nd JPEG meeting in Lisbon, Portugal." ACM SIGMultimedia Records 11, no. 1 (2019): 1. http://dx.doi.org/10.1145/3458462.3458468.
Abstract:
JPEG has been the most common representation format of digital images for more than 25 years. Other image representation formats have been standardised by the JPEG committee, like JPEG 2000 or, more recently, JPEG XS. Furthermore, JPEG has been extended with new functionalities like HDR or alpha plane coding with the JPEG XT standard, and more recently with a reference software. Other solutions have also been proposed by different players, with limited success. The JPEG committee decided it is time to create a new work item, named JPEG XL, that aims to develop an image coding standard with increased quality and flexibility combined with better compression efficiency. The evaluation of the call-for-proposals responses has already confirmed the industry's interest, and the development of core experiments has now begun. Several functionalities will be considered, like support for lossless transcoding of images represented with the JPEG standard.

26. Kesavamurthy, T., and Subha Rani. "Dicom Color Medical Image Compression using 3D-SPIHT for Pacs Application." International Journal of Biomedical Science 4, no. 2 (2008): 113–19. http://dx.doi.org/10.59566/ijbs.2008.4113.
Abstract:
The proposed algorithm presents an application of the 3D-SPIHT algorithm to color volumetric DICOM medical images, using 3D wavelet decomposition and a 3D spatial dependence tree. The wavelet decomposition is accomplished with biorthogonal 9/7 filters. 3D-SPIHT is the modern-day benchmark for three-dimensional image compression. The three-dimensional coding is based on the observation that the sequences of images are contiguous in the temporal axis and there is no motion between slices. Therefore, the 3D discrete wavelet transform can fully exploit the inter-slice correlations. The set partitioning techniques involve a progressive coding of the wavelet coefficients. 3D-SPIHT is implemented, and the rate-distortion (Peak Signal-to-Noise Ratio (PSNR) vs. bit rate) performance is presented for volumetric medical datasets using the biorthogonal 9/7 filter. The results are compared with previous results for the JPEG 2000 standard. Results show that the 3D-SPIHT method exploits the color space relationships while maintaining the full embeddedness required for color image sequence compression, and gives better performance than JPEG 2000 in terms of PSNR and compression ratio. The results suggest an effective practical implementation for PACS applications.

27. Yaswanth Varma, V., T. Nalini Prasad, and N. V. Phani Sai Kumar. "Image Compression Methods Based on Transform Coding and Fractal Coding." International Journal of Engineering Sciences & Research Technology 6, no. 10 (2017): 481–87. https://doi.org/10.5281/zenodo.1036337.
Abstract:
Image compression is a process that removes redundant information from an image so that only essential information is stored, to reduce the storage size, transmission bandwidth and transmission time. The essential information is extracted by various transform techniques such that the image can be reconstructed without losing quality and information. In this research, a comparative analysis of image compression is done with four transform methods: the Discrete Cosine Transform (DCT), the Discrete Wavelet Transform (DWT), a hybrid (DCT+DWT) transform, and fractal coding. MATLAB programs were written for each of the above methods, and it was concluded from the results obtained that the hybrid DWT-DCT algorithm performs much better than the standalone JPEG-based DCT and DWT algorithms in terms of peak signal-to-noise ratio (PSNR), as well as visual perception at higher compression ratios. The popular JPEG standard is widely used in digital cameras and web-based image delivery. The wavelet transform, which is part of the newer JPEG 2000 standard, claims to minimize some of the visually distracting artifacts that can appear in JPEG images. For one thing, it uses much larger blocks (selectable, but typically 1024 × 1024 pixels) for compression, rather than the 8 × 8 pixel blocks used in the original JPEG method, which often produced visible boundaries. Fractal compression has also shown promise and claims to be able to enlarge images by inserting realistic detail beyond the resolution limit of the original.
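The DCT step that baseline JPEG builds on is easy to demonstrate; a sketch of a blockwise 2D DCT using SciPy's separable DCT-II (the 8 × 8 block size matches JPEG, while the image here is a random placeholder):

```python
import numpy as np
from scipy.fft import dctn

def block_dct2(img, bs=8):
    """Apply an orthonormal 2D DCT to each bs x bs block of a grayscale image."""
    h, w = img.shape[0] // bs * bs, img.shape[1] // bs * bs
    out = np.empty((h, w))
    for y in range(0, h, bs):
        for x in range(0, w, bs):
            out[y:y+bs, x:x+bs] = dctn(img[y:y+bs, x:x+bs], norm="ortho")
    return out

coeffs = block_dct2(np.random.rand(64, 64))
# Most of each block's energy lands in the low-frequency corner, which is
# what makes the subsequent coefficient quantization effective.
```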

28. Hien, Thai Duy, Zensho Nakao, and Yen-Wei Chen. "Intelligent Logo Watermarking Based on Independent Component Analysis." Journal of Advanced Computational Intelligence and Intelligent Informatics 8, no. 4 (2004): 390–96. http://dx.doi.org/10.20965/jaciii.2004.p0390.
Abstract:
We present a new intelligent logo watermarking scheme based on independent component analysis (ICA), in which a binary logo watermark is embedded in a host image in the wavelet domain. To improve robustness, an image-adaptive watermarking algorithm is applied using a stochastic approach based on a noise visibility function (NVF). The algorithm design, evaluation, and experimentation are described. Experimental results show that the logo watermark is perfectly extracted by ICA, with excellent invisibility and with robustness against various image and digital processing operators and almost all compression algorithms, such as JPEG, JPEG 2000, SPIHT, EZW, and principal component analysis (PCA) based compression.

29. Gertsiy, O. "Comparative Analysis of Compact Methods of Representation of Graphic Information." Collection of scientific works of the State University of Infrastructure and Technologies series "Transport Systems and Technologies" 1, no. 37 (2021): 130–43. http://dx.doi.org/10.32703/2617-9040-2021-37-13.
Abstract:
The main characteristics of lossy and lossless graphic information compression methods (RLE, LZW, Huffman's method, DEFLATE, JBIG, JPEG, JPEG 2000, Lossless JPEG, fractal and wavelet) are analyzed in the article. Effective transmission and storage of images in railway communication systems is now an important task, because large images require large storage resources. This task has become very important in recent years, as the problems of information transmission over telecommunication channels of the transport infrastructure have become urgent. There is also a great need for video conferencing, where the task is to compress video data effectively, because the greater the amount of data, the greater the cost of transmitting information. Therefore, the use of image compression methods that reduce the file size is the solution to this task. The study highlights the advantages and disadvantages of the compression methods, and a comparative analysis of their basic capabilities is carried out. The relevance lies in the efficient transfer and storage of graphic information, as big data requires large resources for storage. The practical significance lies in solving the problem of effectively reducing the data size by applying known compression methods.

30. Jones, Paul W. "Efficient JPEG 2000 VBR Compression with True Constant Perceived Quality." SMPTE Motion Imaging Journal 116, no. 7-8 (2007): 257–65. http://dx.doi.org/10.5594/j11425.

31. Kim, Gyoung Min, and Kil Joong Kim. "Visually Lossless Threshold: JPEG 2000 Compression of Digital Chest Radiographs." Journal of the Korean Society of Radiology 63, no. 4 (2010): 371. http://dx.doi.org/10.3348/jksr.2010.63.4.371.

32. Nasri, Mohsen, Abdelhamid Helali, Halim Sghaier, and Hassen Maaref. "Efficient JPEG 2000 Image Compression Scheme for Multihop Wireless Networks." TELKOMNIKA (Telecommunication Computing Electronics and Control) 9, no. 2 (2011): 311. http://dx.doi.org/10.12928/telkomnika.v9i2.702.

33. Rabbani, Majid, and Rajan Joshi. "An overview of the JPEG 2000 still image compression standard." Signal Processing: Image Communication 17, no. 1 (2002): 3–48. http://dx.doi.org/10.1016/s0923-5965(01)00024-8.

34. Fedoseev, V., and T. Androsova. "Watermarking algorithms for JPEG 2000 lossy compressed images." Information Technology and Nanotechnology, no. 2391 (2019): 366–70. http://dx.doi.org/10.18287/1613-0073-2019-2391-366-370.
Abstract:
In the paper, we propose two watermarking algorithms for semi-fragile data hiding in JPEG 2000 lossy compressed images. Both algorithms are based on the concept of quantization index modulation. These methods have a property of semi-fragility with respect to image quality. It means that the hidden information is preserved after high-quality compression and is destroyed in the case of significant degradation. Experimental investigations confirm this property for both algorithms. They also show that the embedding distortions introduced, in terms of PSNR and PSNR-HVS, depend almost linearly on the quantization parameter. This allows the quality to be kept at an acceptable level when embedding information.
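Quantization index modulation, the embedding concept both algorithms build on, maps a coefficient onto one of two interleaved quantization lattices chosen by the hidden bit. A minimal dither-style sketch (an illustration of QIM itself, not the authors' exact embedding rule):

```python
import numpy as np

def qim_embed(coeff, bit, delta=8.0):
    """Quantize the coefficient onto the lattice selected by the bit."""
    offset = delta / 2 if bit else 0.0
    return np.round((coeff - offset) / delta) * delta + offset

def qim_extract(coeff, delta=8.0):
    """Decide which lattice the (possibly distorted) coefficient is nearer to."""
    return int(abs(coeff - qim_embed(coeff, 1, delta))
               <= abs(coeff - qim_embed(coeff, 0, delta)))

for bit in (0, 1):
    watermarked = qim_embed(123.4, bit)
    assert qim_extract(watermarked) == bit
```

Larger delta values survive stronger compression but introduce more embedding distortion, which is the roughly linear quality trade-off the paper measures.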

35. Nguyen, Anthony, Vinod Chandran, and Sridha Sridharan. "Gaze-J2K: gaze-influenced image coding using eye trackers and JPEG 2000." Journal of Telecommunications and Information Technology, no. 1 (March 30, 2006): 3–10. http://dx.doi.org/10.26636/jtit.2006.1.364.
Abstract:
The use of visual content in applications of the digital computer has increased dramatically with the advent of the Internet and world wide web. Image coding standards such as JPEG 2000 have been developed to provide scalable and progressive compression of imagery. Advances in image and video analysis are also making human-computer interaction multi-modal rather than through the use of a keyboard or mouse. An eye tracker is an example input device that can be used by an application that displays visual content to adapt to the viewer. Many features are required of the format to facilitate this adaptation, and some are already part of image coding standards such as JPEG 2000. This paper presents a system incorporating the use of eye tracking and JPEG 2000, called Gaze-J2K, to allow a customised encoding of an image by using a user’s gaze pattern. The gaze pattern is used to automatically determine and assign importance to fixated regions in an image, and subsequently constrain the encoding of the image to these regions.

36. Sowmithri, K. "An Iterative Lifting Scheme on DCT Coefficients for Image Coding." International Journal of Students' Research in Technology & Management 3, no. 4 (2015): 317–19. http://dx.doi.org/10.18510/ijsrtm.2015.341.
Abstract:
Image coding is considered to be effective, as it reduces the number of bits required to store and/or transmit image data. Transform-based image coders play a significant role as they decorrelate the spatial low-level information. They find utilization in international compression standards such as JPEG, JPEG 2000, MPEG and H.264. The choice of transform is an important issue in all these transform coding schemes. Most of the literature suggests either the Discrete Cosine Transform (DCT) or the Discrete Wavelet Transform (DWT). In this proposed work, the energy preservation of DCT coefficients is analysed, and a lifting scheme is iteratively applied to downsample these coefficients, so as to compensate for the artifacts that appear in the reconstructed picture and to yield a higher compression ratio. This is followed by scalar quantization and entropy coding, as in JPEG. The performance of the proposed iterative lifting scheme, employed on decorrelated DCT coefficients, is measured with the standard Peak Signal to Noise Ratio (PSNR), and the results are encouraging.
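A single lifting step consists of a predict pass and an update pass; a Haar-style sketch on a 1D signal (for clarity only; the paper applies lifting iteratively to DCT coefficients, which this toy does not reproduce):

```python
import numpy as np

def lift_forward(s):
    even, odd = s[0::2].astype(float), s[1::2].astype(float)
    d = odd - even        # predict: odd samples from their even neighbours
    a = even + d / 2      # update: keep the running average of the signal
    return a, d

def lift_inverse(a, d):
    even = a - d / 2
    odd = even + d
    out = np.empty(even.size + odd.size)
    out[0::2], out[1::2] = even, odd
    return out

x = np.array([4, 6, 10, 12, 8, 6], dtype=float)
a, d = lift_forward(x)
assert np.allclose(lift_inverse(a, d), x)   # perfect reconstruction
```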

37. Suseela, G., and Y. Asnath Victy Phamila. "Low Bitrate Hybrid Secured Image Compression for Wireless Image Sensor Network." Asian Journal of Pharmaceutical and Clinical Research 10, no. 13 (2017): 101. http://dx.doi.org/10.22159/ajpcr.2017.v10s1.19578.
Abstract:
Wireless image sensor networks are capable of sensing, processing and transmitting visual data along with scalar data, and have attained wide attention in sensitive applications such as visual surveillance, habitat monitoring, and ubiquitous computing. The sensor nodes in the network are resource-constrained in nature. Since image data are huge, high computational costs and energy budgets are always levied on the sensor nodes. The compression standards JPEG and JPEG 2000 are not feasible as they involve complex computations. To stretch out the life span of these nodes, it is required to have low-complexity, low-bitrate image compression techniques exclusively designed for this platform. The complicated scenario of wireless sensor networks in processing and transmitting image data has been addressed by a low-complexity hybrid secured image compression technique using the discrete wavelet transform and the binDCT (a binary-arithmetic approximation of the discrete cosine transform).

38. Paz, Juan, Marlen Pérez, Peter Schelkens, and José Rodríguez. "Impact of JPEG 2000 compression on lesion detection in MR imaging." Medical Physics 36, no. 11 (2009): 4967–76. http://dx.doi.org/10.1118/1.3233783.

39. Blinder, David, Tim Bruylants, Heidi Ottevaere, Adrian Munteanu, and Peter Schelkens. "JPEG 2000-based compression of fringe patterns for digital holographic microscopy." Optical Engineering 53, no. 12 (2014): 123102. http://dx.doi.org/10.1117/1.oe.53.12.123102.

40. Kang, B. J., H. S. Kim, C. S. Park, J. J. Choi, J. H. Lee, and B. G. Choi. "Acceptable compression ratio of full-field digital mammography using JPEG 2000." Clinical Radiology 66, no. 7 (2011): 609–13. http://dx.doi.org/10.1016/j.crad.2011.02.004.

41. Krishnan, Shoba. "Comparison of IRIS Image Compression using JPEG 2000 and SPIHT algorithm." IOSR Journal of Electronics and Communication Engineering 4, no. 4 (2013): 5–9. http://dx.doi.org/10.9790/2834-0440509.

42. Nusrat, Zerin, Md Firoz Mahmud, and W. David Pan. "Efficient Compression of Red Blood Cell Image Dataset Using Joint Deep Learning-Based Pattern Classification and Data Compression." Electronics 14, no. 8 (2025): 1556. https://doi.org/10.3390/electronics14081556.
Abstract:
Millions of people across the globe are affected by the life-threatening disease of Malaria. To achieve the remote screening and diagnosis of the disease, the rapid transmission of large-size microscopic images is necessary, thereby demanding efficient data compression techniques. In this paper, we argued that well-classified images might lead to higher overall compression of the images in the datasets. To this end, we investigated the novel approach of joint pattern classification and compression of microscopic red blood cell images. Specifically, we used deep learning models, including a vision transformer and convolutional autoencoders, to classify red blood cell images into normal and Malaria-infected patterns, prior to applying compression on the images classified into different patterns separately. We evaluated the impacts of varying classification accuracy on overall image compression efficiency. The results highlight the importance of the accurate classification of images in improving overall compression performance. We demonstrated that the proposed deep learning-based joint classification/compression method offered superior performance compared with traditional lossy compression approaches such as JPEG and JPEG 2000. Our study provides useful insights into how deep learning-based pattern classification could benefit data compression, which would be advantageous in telemedicine, where large-image-size reduction and high decoded image quality are desired.

43. Singh, Piyush Kumar, Ravi Shankar Singh, and Kabindra Nath Rai. "A Parallel Algorithm for Wavelet Transform-Based Color Image Compression." Journal of Intelligent Systems 27, no. 1 (2018): 81–90. http://dx.doi.org/10.1515/jisys-2017-0015.
Abstract:
Wavelet transforms have emerged as one of the popular techniques in image compression. This technique was accepted by the JPEG Committee for the next-generation image compression standard, JPEG 2000. A convolution-based strategy is widely used in calculating the wavelet transform of the image. A convolution-based wavelet transform consists of a large number of multiplications and additions. A color image consists of a two-dimensional matrix for each of the red, green, and blue colors. An ordinary way to calculate the wavelet transform of a color image is to calculate the transform of the intensity matrices of the red, green, and blue components one after another. In this article, we present a parallel algorithm for calculating the convolution-based wavelet transform of the red, green, and blue intensity components simultaneously in color images, which can run on commonly used processors. This means that it needs no extra hardware. The results are also compared to the non-parallel algorithm in terms of compression time, mean square error, compression ratio, and peak signal-to-noise ratio. A complexity analysis and a comparative complexity analysis with some other papers are also presented.
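The parallelization idea, transforming the three color planes concurrently, can be sketched with PyWavelets and a process pool (an assumption-laden illustration: the wavelet choice, pool size and scheduling are placeholders, not the authors' algorithm):

```python
import numpy as np
import pywt
from multiprocessing import Pool

def dwt2_plane(plane):
    """One-level 2D DWT of a single color plane."""
    return pywt.dwt2(plane, "bior4.4")    # returns (LL, (LH, HL, HH))

if __name__ == "__main__":
    rgb = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
    planes = [rgb[:, :, k].astype(float) for k in range(3)]
    with Pool(3) as pool:                  # R, G and B transformed in parallel
        coeffs = pool.map(dwt2_plane, planes)
    print(coeffs[0][0].shape)              # LL subband of the red plane
```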

44. Radosavljević, Miloš, Branko Brkljač, Predrag Lugonja, et al. "Lossy Compression of Multispectral Satellite Images with Application to Crop Thematic Mapping: A HEVC Comparative Study." Remote Sensing 12, no. 10 (2020): 1590. http://dx.doi.org/10.3390/rs12101590.
Abstract:
Remote sensing applications have gained in popularity in recent years, which has resulted in vast amounts of data being produced on a daily basis. Managing and delivering large sets of data becomes extremely difficult and resource demanding for the data vendors, but even more for individual users and third party stakeholders. Hence, research in the field of efficient remote sensing data handling and manipulation has become a very active research topic (from both storage and communication perspectives). Driven by the rapid growth in the volume of optical satellite measurements, in this work we explore the lossy compression technique for multispectral satellite images. We give a comprehensive analysis of the High Efficiency Video Coding (HEVC) still-image intra coding part applied to the multispectral image data. Thereafter, we analyze the impact of the distortions introduced by the HEVC’s intra compression in the general case, as well as in the specific context of crop classification application. Results show that HEVC’s intra coding achieves better trade-off between compression gain and image quality, as compared to standard JPEG 2000 solution. On the other hand, this also reflects in the better performance of the designed pixel-based classifier in the analyzed crop classification task. We show that HEVC can obtain up to 150:1 compression ratio, when observing compression in the context of specific application, without significantly losing on classification performance compared to classifier trained and applied on raw data. In comparison, in order to maintain the same performance, JPEG 2000 allows compression ratio up to 70:1.

45. Zhu, Yaohua, Mingsheng Huang, Yanghang Zhu, and Yong Zhang. "A Low-Complexity Lossless Compression Method Based on a Code Table for Infrared Images." Applied Sciences 15, no. 5 (2025): 2826. https://doi.org/10.3390/app15052826.
Abstract:
Traditional JPEG-series image compression algorithms have limitations in speed. To improve the storage and transmission of 14-bit/pixel images acquired by infrared line-scan detectors, a novel method is introduced for achieving high-speed and highly efficient compression of line-scan infrared images. The proposed method utilizes the features of infrared images to reduce image redundancy and employs improved Huffman coding for entropy coding. The improved Huffman coding addresses the low-probability long codes of 14-bit images by truncating them, which results in low complexity and minimal loss in the compression ratio. Additionally, a method is proposed to obtain a Huffman code table that bypasses the pixel counting process required for entropy coding, thereby improving the compression speed. The final implementation is a low-complexity lossless image compression algorithm that achieves fast encoding through simple table lookup rules. The proposed method results in only a 10% loss in compression performance compared to JPEG 2000, while achieving a 20-fold speed improvement. Compared to dictionary-based methods, the proposed method can achieve high-speed compression while maintaining high compression efficiency, making it particularly suitable for the high-speed, high-efficiency lossless compression of line-scan panoramic infrared images. The code table's compression effect is 5% lower than the theoretical value. The algorithm can also be applied to images with higher bit depths.
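The "simple table lookup rules" can be pictured as follows: frequent small residuals get short codes from a precomputed prefix-free table, and rare large residuals are escaped and stored raw, which truncates the long tail of a 14-bit Huffman code. The toy table below is an assumption for illustration, not the code table from the paper:

```python
# Prefix-free toy table for small prediction residuals.
TABLE = {0: "0", 1: "101", -1: "100", 2: "1101", -2: "1100"}
ESCAPE = "1110"                     # escape prefix for truncated long codes

def encode_residual(r, raw_bits=14):
    """Short code if tabulated, else escape marker plus raw_bits raw bits."""
    if r in TABLE:
        return TABLE[r]
    return ESCAPE + format(r & (1 << raw_bits) - 1, f"0{raw_bits}b")

print(encode_residual(0), encode_residual(-2))   # 0 1100
print(len(encode_residual(500)))                 # 18 = 4 escape + 14 raw bits
```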

46. Kumar, M. Anil. "Design and Implementation of Zynq-Based Reconfigurable System for JPEG 2000 Compression." International Journal for Research in Applied Science and Engineering Technology V, no. XI (2017): 402–7. http://dx.doi.org/10.22214/ijraset.2017.11060.

47. Usevitch, B. E. "A tutorial on modern lossy wavelet image compression: foundations of JPEG 2000." IEEE Signal Processing Magazine 18, no. 5 (2001): 22–35. http://dx.doi.org/10.1109/79.952803.

48. Carvajal, Gisela, Barbara Penna, and Enrico Magli. "Unified Lossy and Near-Lossless Hyperspectral Image Compression Based on JPEG 2000." IEEE Geoscience and Remote Sensing Letters 5, no. 4 (2008): 593–97. http://dx.doi.org/10.1109/lgrs.2008.2000651.