Academic literature on the topic 'JPEG 2000 Compression'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'JPEG 2000 Compression.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "JPEG 2000 Compression"

1

Skog, Kasper, Tomáš Kohout, Tomáš Kašpárek, Antti Penttilä, Monika Wolfmayr, and Jaan Praks. "Lossless Hyperspectral Image Compression in Comet Interceptor and Hera Missions with Restricted Bandwith." Remote Sensing 17, no. 5 (2025): 899. https://doi.org/10.3390/rs17050899.

Full text
Abstract:
Lossless image compression is vital for missions with limited data transmission bandwidth. Reducing file sizes enables faster transmission and increased scientific gains from transient events. This study compares two wavelet-based image compression algorithms, CCSDS 122.0 and JPEG 2000, used in the European Space Agency Comet Interceptor and Hera missions, respectively, in varying scenarios. The JPEG 2000 implementation is sourced from the JasPer library, whereas a custom implementation was written for CCSDS 122.0. The performance analysis for both algorithms consists of compressing simulated asteroid images in the visible and near-infrared spectral ranges. In addition, all test images were noise-filtered to study the effect of the amount of noise on both compression ratio and speed. The study finds that JPEG 2000 achieves consistently higher compression ratios and benefits from decreased noise more than CCSDS 122.0. However, CCSDS 122.0 produces comparable results faster than JPEG 2000 and is substantially less computationally complex. On the other hand, JPEG 2000 allows dynamic (entropy-permitting) reduction in the bit depth of internal data structures to 8 bits, halving the memory allocation, while CCSDS 122.0 always works in 16-bit mode. These results contribute valuable knowledge about the behavioral characteristics of both algorithms and provide insight for entities planning to use either algorithm on board planetary missions.
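For readers comparing results like these, the two headline figures (compression ratio and bits per pixel) follow directly from the file sizes. A minimal sketch in Python; the function name and all sizes are hypothetical, not taken from the study:

```python
def compression_stats(raw_bytes, compressed_bytes, n_pixels):
    """Return (compression ratio, bits per pixel) for a compressed image."""
    ratio = raw_bytes / compressed_bytes
    bpp = compressed_bytes * 8 / n_pixels
    return ratio, bpp

# Hypothetical example: a 2048x2048, 16-bit band is 8 MiB raw, 3 MiB compressed.
ratio, bpp = compression_stats(8 * 2**20, 3 * 2**20, 2048 * 2048)
print(f"CR {ratio:.2f}:1, {bpp:.2f} bpp")  # CR 2.67:1, 6.00 bpp
```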
APA, Harvard, Vancouver, ISO, and other styles
2

Zabala, Alaitz, Raffaele Vitulli, and Xavier Pons. "Impact of CCSDS-IDC and JPEG 2000 Compression on Image Quality and Classification." Journal of Electrical and Computer Engineering 2012 (2012): 1–13. http://dx.doi.org/10.1155/2012/761067.

Full text
Abstract:
This study measures the impact of both on-board and user-side lossy image compression (CCSDS-IDC and JPEG 2000) on image quality and classification. The Sentinel-2 Image Performance Simulator was modified to include these compression algorithms in order to produce Sentinel-2 simulated images with on-board lossy compression. A multitemporal set of Landsat images was used for the user-side compression scenario in order to study a crop area. The performance of several compressors was evaluated by computing the Signal-to-Noise Ratio (SNR) of the compressed images. The overall accuracy of land-cover classifications of these images was also evaluated. The results show that on-board CCSDS performs better than JPEG 2000 in terms of compression fidelity, especially at lower compression ratios (from CR 2:1 up to CR 4:1, i.e., 8 to 4 bpppb). The effect of compression on land cover classification follows the same trends, but compression fidelity may not be enough to assess the impact of compression on end-user applications. If compression is applied by end-users, the results show that 3D-JPEG 2000 obtains higher compression fidelity than CCSDS and JPEG 2000 with other parameterizations. This is due to the high dynamic range of the images (representing reflectances*10000), which JPEG 2000 is able to exploit better.
APA, Harvard, Vancouver, ISO, and other styles
3

OH, TICK HUI, and ROSLI BESAR. "MEDICAL IMAGE COMPRESSION USING JPEG-2000 AND JPEG: A COMPARISON STUDY." Journal of Mechanics in Medicine and Biology 02, no. 03n04 (2002): 313–28. http://dx.doi.org/10.1142/s021951940200054x.

Full text
Abstract:
Due to constrained bandwidth and storage capacity, medical images must be compressed before transmission and storage. However, compression reduces image fidelity, especially when the image is compressed at a lower bit rate, which cannot be tolerated in the medical field. In this paper, the compression performance of the new JPEG-2000 and the more conventional JPEG is studied. The parameters used for comparison include compression efficiency, peak signal-to-noise ratio (PSNR), picture quality scale (PQS), and mean opinion score (MOS). Three types of medical images are used: X-ray, magnetic resonance imaging (MRI), and ultrasound. Overall, the study shows that JPEG-2000 compression is more acceptable than, and superior to, JPEG for lossy compression.
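PSNR, the main objective metric in this and several of the studies below, is derived directly from the mean squared error between the original and decoded images. A minimal pure-Python sketch; the pixel values are made up for illustration:

```python
import math

def mse(a, b):
    """Mean squared error between two equal-length pixel sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, peak=255):
    """Peak signal-to-noise ratio in dB; higher means closer to the original."""
    e = mse(a, b)
    return float("inf") if e == 0 else 10 * math.log10(peak**2 / e)

original = [52, 55, 61, 66, 70, 61, 64, 73]
decoded  = [53, 55, 60, 66, 71, 60, 64, 74]
print(f"{psnr(original, decoded):.1f} dB")  # ~50.2 dB for this tiny example
```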
APA, Harvard, Vancouver, ISO, and other styles
4

Bernstein, Herbert J., Alexei Soares, Kimberly Horvat, and Jean Jakoncic. "Massive Compression for High Data Rate Macromolecular Crystallography (HDRMX): Impact on Diffraction Data and Subsequent Structural Analysis." Structural Dynamics 12, no. 2_Supplement (2025): A147. https://doi.org/10.1063/4.0000456.

Full text
Abstract:
New higher-count-rate, integrating, large area X-ray detectors with framing rates as high as 17,400 images per second are beginning to be available. Data from these detectors are always compressed losslessly, but systems may not keep up with these data rates, and the files may still be larger than seems necessary. We propose that such MX experiments will require lossy compression algorithms to keep up with data throughput and capacity for long-term storage, but note that some information may be lost. Indeed, one might employ dramatic lossy compression only for archiving of data after structures are solved and published. Can we minimize this loss with acceptable impact on structural information? To explore this question, we have considered several approaches: summing short sequences of images to reverse fine phi-slicing, binning to create the effect of larger pixels, use of JPEG-2000 lossy wavelet-based compression from the movie industry, and the use of Hcompress, a Haar-wavelet-based lossy compression from astronomy. In each of these last two methods one can specify approximately how much one wants the result to be compressed from the starting-file size. We have also explored the effect of combinations of summing and binning with Hcompress or JPEG-2000. These provide particularly effective lossy compressions that retain essential information for structure solution from coherent Bragg reflections. There are many lossy compressions to consider. For this work we have experimented with lossy compression ratios from less than 10 to over 50,000. Caution is needed to avoid over-compression that loses significant structural data and damages the quality of peak integrations. The combined use of modest degrees of binning and summing actually strengthens the weak reflections, which helps to protect them from the impact of the massive compressions provided by Hcompress or JPEG-2000.
See the figures below for the impact of JPEG-2000 and Hcompress by themselves and in combination with binning and summing by two. The bottom half of the first figure shows the impact of JPEG-2000 compressions on weak reflections. J2k200 keeps the weak peaks findable but distorts the shoulders. The higher JPEG-2000 compressions do serious damage, losing the peak entirely for j2k1000. The top half of the first figure shows that Hcompress does better but also loses the peak entirely for high compressions. In the second figure, the weak peak has been protected by modest binning and summing allowing useful overall compression ratios well over 500 to 1 to be achieved in many cases and over 1000 to 1 in some cases. The lesson we have learned is that with care to protect weak peaks with modest binning and summing, and with care to avoid over-compression, in many cases, massive lossy compression can be applied successfully while retaining the information necessary to find and integrate Bragg reflections and to solve structures with good statistics.
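The binning and summing operations described above are simple pixel-wise aggregations. A toy sketch, with plain Python lists standing in for detector frames and all values hypothetical:

```python
def bin2x2(img):
    """Sum each 2x2 pixel block into one pixel, halving both dimensions."""
    return [[img[r][c] + img[r][c + 1] + img[r + 1][c] + img[r + 1][c + 1]
             for c in range(0, len(img[0]), 2)]
            for r in range(0, len(img), 2)]

def sum_frames(frames):
    """Pixel-wise sum of a short sequence of frames (reverses fine phi-slicing)."""
    out = [row[:] for row in frames[0]]
    for f in frames[1:]:
        for r, row in enumerate(f):
            for c, v in enumerate(row):
                out[r][c] += v
    return out

# A weak 1-count peak gains 2x from summing two frames and 4x from binning,
# which is how modest aggregation protects it from aggressive lossy coding.
frame = [[0, 0], [0, 1]]
print(bin2x2(sum_frames([frame, frame])))  # [[2]]
```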
APA, Harvard, Vancouver, ISO, and other styles
5

Marcelo, Alvin, Paul Fontelo, Miguel Farolan, and Hernani Cualing. "Effect of Image Compression on Telepathology." Archives of Pathology & Laboratory Medicine 124, no. 11 (2000): 1653–56. http://dx.doi.org/10.5858/2000-124-1653-eoicot.

Full text
Abstract:
Abstract Context.—For practitioners deploying store-and-forward telepathology systems, optimization methods such as image compression need to be studied. Objective.—To determine if Joint Photographic Expert Group (JPG or JPEG) compression, a lossy image compression algorithm, negatively affects the accuracy of diagnosis in telepathology. Design.—Double-blind, randomized, controlled trial. Setting.—University-based pathology departments. Participants.—Resident and staff pathologists at the University of Illinois, Chicago, and University of Cincinnati, Cincinnati, Ohio. Intervention.—Compression of raw images using the JPEG algorithm. Main Outcome Measures.—Image acceptability, accuracy of diagnosis, confidence level of pathologist, image quality. Results.—There was no statistically significant difference in the diagnostic accuracy between noncompressed (bit map) and compressed (JPG) images. There were also no differences in the acceptability, confidence level, and perception of image quality. Additionally, rater experience did not significantly correlate with degree of accuracy. Conclusions.—For providers practicing telepathology, JPG image compression does not negatively affect the accuracy and confidence level of diagnosis. The acceptability and quality of images were also not affected.
APA, Harvard, Vancouver, ISO, and other styles
6

Barina, David, and Ondrej Klima. "JPEG 2000: guide for digital libraries." Digital Library Perspectives 36, no. 3 (2020): 249–63. http://dx.doi.org/10.1108/dlp-03-2020-0014.

Full text
Abstract:
Purpose The joint photographic experts group (JPEG) 2000 image compression system is being used for cultural heritage preservation. The authors are aware of over a dozen large memory institutions worldwide using this format. This paper aims to review and explain choices for end users to help resolve trade-offs that these users are likely to encounter in practice. Design/methodology/approach The JPEG 2000 format is quite complex and therefore sometimes considered a preservation risk. A lossy compression is governed by a number of parameters that control compression speed and the rate-distortion trade-off. Their inappropriate adjustment may fairly easily lead to sub-optimal compression performance. This paper provides general guidelines for selecting the most appropriate parameters for a specific application. Findings This paper serves as a guide for the preservation of digital heritage in cultural heritage institutions, including libraries, archives and museums. Originality/value This paper serves as a guide for the preservation of digital heritage in cultural heritage institutions, including libraries, archives and museums.
APA, Harvard, Vancouver, ISO, and other styles
7

Horrigue, Layla, Refka Ghodhbani, Albia Maqbool, et al. "Efficient Hardware Accelerator and Implementation of JPEG 2000 MQ Decoder Architecture." Engineering, Technology & Applied Science Research 14, no. 2 (2024): 13463–69. http://dx.doi.org/10.48084/etasr.7065.

Full text
Abstract:
Due to the extensive use of multimedia technologies, there is a pressing need for advancements and enhanced efficiency in image compression. The JPEG 2000 standard aims to meet the need for encoding still pictures. JPEG 2000 is an internationally recognized standard for compressing still images. It provides a wide range of features and offers superior compression ratios and interesting possibilities compared to traditional JPEG approaches. Nevertheless, the MQ decoder in the JPEG 2000 standard presents a substantial obstacle for real-time applications. In order to fulfill the demands of real-time processing, it is imperative to meticulously devise a high-speed MQ decoder architecture. This work presents a novel MQ decoder architecture that is both high-speed and area-efficient, making it comparable to previous designs and well-suited for chip implementation. The design is implemented using the VHDL hardware description language and is synthesized with Xilinx ISE 14.7 and Vivado 2015.1. The implementation findings show that the design operates at a frequency of 438.5 MHz on Virtex-6 and 757.5 MHz on Zync7000. For these particular frequencies, the calculated frame rate is 63.1 frames per second.
APA, Harvard, Vancouver, ISO, and other styles
8

KULKARNI, SHIVALI D., AMEYA K. NAIK, and NITIN S. NAGORI. "2D IMAGE TRANSMISSION USING BANDWIDTH EFFICIENT MAPPING TECHNIQUE." International Journal of Image and Graphics 10, no. 04 (2010): 559–73. http://dx.doi.org/10.1142/s0219467810003883.

Full text
Abstract:
Transmitting images over a wireless channel requires that an optimal compression ratio be maintained along with good image quality. This becomes a critical issue, especially at higher values of bit error rate (BER). The Joint Photographic Experts Group (JPEG) standard and its successor JPEG 2000 provide excellent compression ratios, but image reconstruction becomes highly difficult under extreme noise conditions. We present a mapping technique which gives better compression than the existing techniques. Also, its performance is excellent even at higher BERs. This is supported by the results presented for JPEG, JPEG 2000, and the mapping technique under fading channel conditions. Moreover, it is observed that the presence of high levels of noise has a negligible effect on the reconstruction of images encoded using the mapping technique.
APA, Harvard, Vancouver, ISO, and other styles
9

Žalik, Borut, Damjan Strnad, Štefan Kohek, et al. "FLoCIC: A Few Lines of Code for Raster Image Compression." Entropy 25, no. 3 (2023): 533. http://dx.doi.org/10.3390/e25030533.

Full text
Abstract:
A new approach is proposed for lossless raster image compression employing interpolative coding. A new multifunction prediction scheme is presented first. Then, interpolative coding, which has not been applied frequently for image compression, is explained briefly. Its simplification is introduced in regard to the original approach. It is determined that the JPEG LS predictor reduces the information entropy slightly better than the multi-functional approach. Furthermore, the interpolative coding was moderately more efficient than the most frequently used arithmetic coding. Finally, our compression pipeline is compared against JPEG LS, JPEG 2000 in the lossless mode, and PNG using 24 standard grayscale benchmark images. JPEG LS turned out to be the most efficient, followed by JPEG 2000, while our approach using simplified interpolative coding was moderately better than PNG. The implementation of the proposed encoder is extremely simple and can be performed in less than 60 lines of programming code for the coder and 60 lines for the decoder, which is demonstrated in the given pseudocodes.
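The JPEG LS predictor the authors benchmark against is the standard median edge detector (MED), which predicts each pixel from its left, upper, and upper-left neighbours; the entropy coder then compresses only the prediction residuals. A minimal sketch (neighbour values hypothetical):

```python
def med_predict(a, b, c):
    """JPEG-LS MED predictor: a = left, b = above, c = above-left neighbour."""
    if c >= max(a, b):
        return min(a, b)   # likely edge: clamp to the smaller neighbour
    if c <= min(a, b):
        return max(a, b)   # likely edge: clamp to the larger neighbour
    return a + b - c       # smooth region: planar prediction

# The residual (actual pixel minus prediction) is what gets entropy-coded.
print(med_predict(100, 120, 90))  # 120: c below both neighbours, predict the larger
```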
APA, Harvard, Vancouver, ISO, and other styles
10

Skodras, A., C. Christopoulos, and T. Ebrahimi. "The JPEG 2000 still image compression standard." IEEE Signal Processing Magazine 18, no. 5 (2001): 36–58. http://dx.doi.org/10.1109/79.952804.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "JPEG 2000 Compression"

1

Park, Min Jee, Jae Taeg Yu, Myung Han Hyun, and Sung Woong Ra. "A Development of Real Time Video Compression Module Based on Embedded Motion JPEG 2000." International Foundation for Telemetering, 2015. http://hdl.handle.net/10150/596452.

Full text
Abstract:
ITC/USA 2015 Conference Proceedings / The Fifty-First Annual International Telemetering Conference and Technical Exhibition / October 26-29, 2015 / Bally's Hotel & Convention Center, Las Vegas, NV

In this paper, we develop a miniaturized real-time video compression module (VCM) based on embedded Motion JPEG 2000 using the ADV212 and an FPGA. We consider the layout of components, the values of damping resistors, and the lengths of the pattern lines for optimal hardware design. For software design, we consider compression steps to monitor the status of the system and make the system robust. The developed VCM weighs approximately 4 times less than the previous development. Furthermore, experimental results show that the PSNR is increased by about 3 dB and the compression processing time is approximately 2 times faster than in the previous development.
APA, Harvard, Vancouver, ISO, and other styles
2

Zeybek, Emre. "Compression multimodale du signal et de l’image en utilisant un seul codeur." Thesis, Paris Est, 2011. http://www.theses.fr/2011PEST1060/document.

Full text
Abstract:
The objective of this thesis is to study and analyze a new compression strategy whose principle is to compress data from multiple modalities together using a single encoder. This approach is called "Multimodal Compression": an image and an audio signal can be compressed together by a single image encoder (e.g. a standard codec), without the need to integrate an audio codec. The basic idea developed in this thesis is to insert the samples of a signal in place of certain pixels of the "carrier" image while preserving the quality of the information after the encoding and decoding process. This technique should not be confused with watermarking or steganography, since Multimodal Compression does not conceal one piece of information inside another. Its two main objectives are, on the one hand, to improve compression performance in terms of rate-distortion and, on the other, to optimize the use of the hardware resources of a given embedded system (e.g. accelerating encoding/decoding time). Throughout this report, we study and analyze variants of Multimodal Compression whose core consists of developing mixing and separation functions applied around the coding stage. A validation is carried out on common images and signals as well as on specific data such as biomedical images and signals. This work concludes with an extension of the Multimodal Compression strategy to video.
APA, Harvard, Vancouver, ISO, and other styles
3

Silva, Sandreane Poliana. "Comparação entre os métodos de compressão fractal e JPEG 2000 em um sistema de reconhecimento de íris." Universidade Federal de Uberlândia, 2008. https://repositorio.ufu.br/handle/123456789/14385.

Full text
Abstract:
We currently live in the digital age, so data and images are manipulated every day. Due to the problems of storage space and transmission time, many compression techniques have been developed, and a great challenge is to make these techniques deliver good results in terms of compression ratio, image quality, and processing time. The fractal compression technique developed by Fisher was described, implemented, and tested in this work; it brought very good results and a considerable improvement in execution time, which was substantially reduced. Another growing area is the use of biometric techniques for person recognition. A widely used technique is iris recognition, which has shown considerable reliability. Thus, combining the two technologies brings great benefits. In this work, iris images were compressed by the method implemented here, and simulations were run with the iris recognition technique developed by Libor Masek. The results show that it is possible to compress the images fractally without harming the recognition system. Comparisons were made, and it was possible to see that even with changes in the pixels of the images, the system remains very reliable, bringing savings in storage space.
APA, Harvard, Vancouver, ISO, and other styles
4

Lucero, Aldo. "Compressing scientific data with control and minimization of the L-infinity metric under the JPEG 2000 framework." To access this resource online via ProQuest Dissertations and Theses @ UTEP, 2007. http://0-proquest.umi.com.lib.utep.edu/login?COPT=REJTPTU0YmImSU5UPTAmVkVSPTI=&clientId=2515.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Zeybek, Emre. "Compression multimodale du signal et de l'image en utilisant un seul codeur." Phd thesis, Université Paris-Est, 2011. http://tel.archives-ouvertes.fr/tel-00665757.

Full text
Abstract:
The objective of this thesis is to study and analyze a new compression strategy whose principle is to compress data from multiple modalities together using a single encoder. This approach is called "Multimodal Compression": an image and an audio signal can be compressed together by a single image encoder (e.g. a standard codec), without the need to integrate an audio codec. The basic idea developed in this thesis is to insert the samples of a signal in place of certain pixels of the "carrier" image while preserving the quality of the information after the encoding and decoding process. This technique should not be confused with watermarking or steganography, since it is not about concealing one piece of information inside another. In Multimodal Compression, the major objectives are, on the one hand, to improve compression performance in terms of rate-distortion and, on the other, to optimize the use of the hardware resources of a given embedded system (e.g. accelerating encoding/decoding time). Throughout this report, we study and analyze variants of Multimodal Compression whose core consists of developing mixing and separation functions applied around the coding stage. A validation is carried out on common images and signals as well as on specific data such as biomedical images and signals. This work concludes with an extension of the Multimodal Compression strategy to video.
APA, Harvard, Vancouver, ISO, and other styles
6

Yang, Hsueh-szu, and Benjamin Kupferschmidt. "Time Stamp Synchronization in Video Systems." International Foundation for Telemetering, 2010. http://hdl.handle.net/10150/605988.

Full text
Abstract:
ITC/USA 2010 Conference Proceedings / The Forty-Sixth Annual International Telemetering Conference and Technical Exhibition / October 25-28, 2010 / Town and Country Resort & Convention Center, San Diego, California

Synchronized video is crucial for data acquisition and telecommunication applications. For real-time applications, out-of-sync video may cause jitter, choppiness and latency. For data analysis, it is important to synchronize multiple video channels and data that are acquired from PCM, MIL-STD-1553 and other sources. Nowadays, video codecs can be easily obtained to play most types of video. However, a great deal of effort is still required to develop the synchronization methods that are used in a data acquisition system. This paper will describe several methods that TTC has adopted in our system to improve the synchronization of multiple data sources.
APA, Harvard, Vancouver, ISO, and other styles
7

Kivci, Erdem Turker. "Development Of A Methodology For Geospatial Image Streaming." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612570/index.pdf.

Full text
Abstract:
Serving geospatial data collected by remote sensing methods (satellite images, aerial photos, etc.) has become crucial in many geographic information system (GIS) applications such as disaster management, municipal applications, climatology, environmental observation, and military applications. Even in today's highly developed information systems, geospatial image data requires huge amounts of physical storage space, and this characteristic limits its usage in the above-mentioned applications. For this reason, web-based GIS applications can benefit from geospatial image streaming through web-based architectures. Progressive transmission of geospatial image and map data on web-based architectures is implemented with the developed image streaming methodology. The software developed allows user interaction in such a way that users visualize the images according to their level of detail. In this way geospatial data is served to users efficiently. The main methods used to transmit geospatial images are serving tiled image pyramids and serving wavelet-based compressed bitstreams. Generally, GIS applications use tiled image pyramids that contain copies of raster datasets at different resolutions, rather than the differences between resolutions. Thus, redundant data is transmitted from the GIS server at different resolutions of a region when tiled image pyramids are used. Wavelet-based methods decrease this redundancy; on the other hand, methods that use wavelet-compressed bitstreams require transforming the whole dataset before transmission. A hybrid streaming methodology is developed that decreases the redundancy of tiled image pyramids by integrating them with wavelets, without requiring the whole dataset to be transformed and encoded. The tile parts' coefficients produced with the methodology are encoded with JPEG 2000, an efficient technology for compressing images in the wavelet domain.
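The redundancy that the hybrid methodology targets can be quantified: a full power-of-two pyramid stores every level in full, and each extra level adds a quarter as many pixels as the one below it, so the total approaches 4/3 of the base image, whereas wavelet subbands encode only the inter-resolution differences. A small sketch of that overhead calculation, with hypothetical dimensions:

```python
def pyramid_overhead(w, h, levels):
    """Total pixels in a power-of-two image pyramid, relative to the base image."""
    return sum((w >> k) * (h >> k) for k in range(levels)) / (w * h)

# One level is just the base image; six levels approach the 4/3 limit.
print(round(pyramid_overhead(1024, 1024, 6), 3))  # 1.333
```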
APA, Harvard, Vancouver, ISO, and other styles
8

Kaše, David. "Komprese obrazu pomocí vlnkové transformace." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2015. http://www.nusl.cz/ntk/nusl-234996.

Full text
Abstract:
This thesis deals with image compression using the wavelet, contourlet, and shearlet transforms. It starts with a quick look at the image compression problem and quality measurement. Next, the basic concepts of wavelets, multiresolution analysis, and scaling functions are presented, with a detailed look at each transform. Representative coefficient-coding algorithms are EZW, SPIHT, and, marginally, EBCOT. The second part describes the design and implementation of the constructed library. The last part compares the results of the transforms with the JPEG 2000 format. The comparison determined the types of image for which the implemented contourlet and shearlet transforms were more effective than the wavelet transform. The JPEG 2000 format was not surpassed.
APA, Harvard, Vancouver, ISO, and other styles
9

Mhamdi, Maroua. "Méthodes de transmission d'images optimisées utilisant des techniques de communication numériques avancées pour les systèmes multi-antennes." Thesis, Poitiers, 2017. http://www.theses.fr/2017POIT2281/document.

Full text
Abstract:
This work is devoted to improving the coding/decoding performance of still-image transmission systems over noisy, realistic channels. To this end, we develop optimized image transmission methods focusing on the application and physical layers of wireless networks. At the application layer, to ensure a good quality of service, efficient compression algorithms (JPEG 2000 and JPWL) are used, enabling the receiver to reconstruct the images with maximum fidelity. Furthermore, to ensure transmission over wireless channels with a minimum BER at reception, advanced transmission, coding, and modulation techniques are used at the physical layer (MIMO-OFDM system, adaptive modulation, FEC, etc.). First, we propose a robust transmission system for JPWL-encoded images integrating a joint source-channel decoding scheme based on soft-input decoding techniques. Next, the optimization of an image transmission chain over a realistic wireless MIMO-OFDM channel is considered. The optimized image transmission strategy is based on soft-input decoding techniques and a link adaptation approach. The proposed transmission scheme thus offers the possibility of jointly implementing UEP, UPA, adaptive modulation, adaptive source coding, and joint decoding to improve the quality of the image at reception. In a second part, we propose a robust transmission system for progressive bit streams based on the iterative turbo decoding of concatenated codes, offering an unequal error protection strategy. The originality of this study thus consists in proposing efficient solutions for the global optimization of a digital communication chain to improve transmission quality.
10

Abot, Julien. "Stratégie de codage conjoint pour la transmission d'images dans un système MIMO." Thesis, Poitiers, 2012. http://www.theses.fr/2012POIT2296/document.

Abstract:
This thesis presents a transmission strategy exploiting spatial diversity for image transmission over a wireless channel. We propose an original approach that matches the hierarchy of the source to that of the SISO sub-channels resulting from the MIMO channel decomposition. We evaluate the performance of common precoders under this strategy via a realistic physical layer compliant with the IEEE 802.11n standard, combined with a transmission channel based on a 3D ray-tracing propagation model. We show that common precoders are poorly suited to transmitting hierarchical content. We therefore propose a precoding algorithm that successively allocates power over the SISO sub-channels in order to maximize the quality of the received images. The proposed precoder achieves a target BER given the channel coding, the modulation, and the SNR of the SISO sub-channels. Building on this precoding algorithm, we propose a link-adaptation scheme that dynamically adjusts the system parameters according to variations in the transmission channel, determining the coding/transmission configuration that maximizes image quality at the receiver. Finally, we present a study on incorporating psychovisual constraints into the assessment of received image quality. We propose integrating a reduced-reference metric based on psychovisual constraints to guide the decoder toward the decoding configuration offering the best quality of experience. Subjective tests confirm the interest of the proposed approach.
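The source/sub-channel matching described in this abstract rests on the singular value decomposition of the MIMO channel matrix, which turns the channel into parallel SISO sub-channels whose gains are the singular values; the most important source layers are then sent on the strongest sub-channels. A minimal NumPy sketch of that decomposition (random channel, hypothetical layer labels, not the thesis's actual precoder):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random 4x4 MIMO channel matrix (Rayleigh-like, purely illustrative).
nt = nr = 4
H = (rng.standard_normal((nr, nt))
     + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)

# SVD: H = U @ diag(s) @ Vh. Precoding with V and combining with U^H
# turns the MIMO channel into parallel SISO sub-channels whose gains
# are the singular values s[0] >= s[1] >= ... (NumPy sorts descending).
U, s, Vh = np.linalg.svd(H)

# Hierarchical source: most important part first (hypothetical labels).
layers = ["header", "base layer", "refinement 1", "refinement 2"]

# Match the source hierarchy to the sub-channel hierarchy: the most
# important data rides the sub-channels with the largest gains.
for layer, gain in zip(layers, s):
    print(f"{layer:>13s} -> sub-channel gain {gain:.3f}")
```

With this mapping, a deep fade on a weak sub-channel only costs refinement data, which is exactly the unequal-protection behavior the thesis exploits.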

Books on the topic "JPEG 2000 Compression"

1

Ebrahimi, Touradj, Peter Schelkens, and Athanassios Skodras. JPEG 2000 Suite. Wiley & Sons, Limited, John, 2009.

2

Ebrahimi, Touradj, Peter Schelkens, and Athanassios Skodras. JPEG 2000 Suite. Wiley & Sons, Incorporated, John, 2009.

3

JPEG 2000 Compression of Direct Digital Images: Effects on the Detection of Periapical Radiolucencies and Perceived Image Quality. Storming Media, 2003.


Book chapters on the topic "JPEG 2000 Compression"

1

Fournier, Régis, and Amine Naït-ali. "Multimodal Compression Using JPEG 2000: Supervised Insertion Approach." In Signal and Image Multiresolution Analysis. John Wiley & Sons, Inc., 2013. http://dx.doi.org/10.1002/9781118568767.ch3.

2

Starosolski, Roman. "Human Visual System Inspired Color Space Transform in Lossy JPEG 2000 and JPEG XR Compression." In Beyond Databases, Architectures and Structures. Towards Efficient Solutions for Data Analysis and Knowledge Representation. Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-58274-0_44.

3

Růžička, Jan, and Kateřina Růžičková. "Impact of GDAL JPEG 2000 Lossy Compression to a Digital Elevation Model." In Lecture Notes in Geoinformation and Cartography. Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-18407-4_17.

4

Dan, Wang. "Comparative Study of Near-Lossless Compression by JPEG XR, JPEG 2000, and H.264 on 4K Video Sequences." In Communications in Computer and Information Science. Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-662-45498-5_4.

5

Figueroa-Villanueva, Miguel A., Nalini K. Ratha, and Ruud M. Bolle. "A Comparative Performance Analysis of JPEG 2000 vs. WSQ for Fingerprint Image Compression." In Lecture Notes in Computer Science. Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/3-540-44887-x_46.

6

Goebel, Peter Michael, Ahmed Nabil Belbachir, and Michael Truppe. "A Study on the Influence of Image Dynamics and Noise on the JPEG 2000 Compression Performance for Medical Images." In Computer Vision Approaches to Medical Image Analysis. Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11889762_19.

7

Wenzel, Christoph, Patrick Vogler, Johannes M. F. Peter, Markus J. Kloker, and Ulrich Rist. "Application of a JPEG 2000-Based Data Compression Algorithm to DNS of Compressible Turbulent Boundary Layers Up to $$Re_\theta =6600$$." In High Performance Computing in Science and Engineering '20. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-80602-6_19.

8

Choi, Jinyoung, and Bohyung Han. "Task-Aware Quantization Network for JPEG Image Compression." In Computer Vision – ECCV 2020. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58565-5_19.

9

Falcón-Ruiz, A., J. E. Paz-Viera, A. Taboada-Crispí, and H. Sahli. "Automatic Bound Estimation for JPEG 2000 Compressing Leukocytes Images." In V Latin American Congress on Biomedical Engineering CLAIB 2011 May 16-21, 2011, Habana, Cuba. Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-21198-0_140.

10

Starosolski, Roman. "A Practical Application of Skipped Steps DWT in JPEG 2000 Part 2-Compliant Compressor." In Beyond Databases, Architectures and Structures. Facing the Challenges of Data Proliferation and Growing Variety. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-99987-6_26.


Conference papers on the topic "JPEG 2000 Compression"

1

Jimenez-Rodriguez, L., F. Auli-Llinas, M. W. Marcellin, and J. Serra-Sagrista. "Visually Lossless JPEG 2000 Decoder." In 2013 Data Compression Conference (DCC). IEEE, 2013. http://dx.doi.org/10.1109/dcc.2013.25.

2

Foos, David H., Edward Muka, Richard M. Slone, et al. "JPEG 2000 compression of medical imagery." In Medical Imaging 2000, edited by G. James Blaine and Eliot L. Siegel. SPIE, 2000. http://dx.doi.org/10.1117/12.386390.

3

Wu, Gene K., Michael J. Gormish, and Martin P. Boliek. "New compression paradigms in JPEG 2000." In International Symposium on Optical Science and Technology, edited by Andrew G. Tescher. SPIE, 2000. http://dx.doi.org/10.1117/12.411562.

4

Richter, Thomas, and Kil Joong Kim. "A MS-SSIM Optimal JPEG 2000 Encoder." In 2009 Data Compression Conference (DCC). IEEE, 2009. http://dx.doi.org/10.1109/dcc.2009.15.

5

Richter, Thomas. "Compressing JPEG 2000 JPIP Cache State Information." In 2012 Data Compression Conference (DCC). IEEE, 2012. http://dx.doi.org/10.1109/dcc.2012.9.

6

Barina, David, Ondrej Klima, and Pavel Zemcik. "Single-Loop Software Architecture for JPEG 2000." In 2016 Data Compression Conference (DCC). IEEE, 2016. http://dx.doi.org/10.1109/dcc.2016.19.

7

Barbero, Jesus M. "Lossy Raid Storage Architecture for JPEG 2000 Images." In 2011 Data Compression Conference (DCC). IEEE, 2011. http://dx.doi.org/10.1109/dcc.2011.94.

8

Barbero, Jesús M., Eugenio Santos, and Abraham Gutiérrez. "Dual Contribution of JPEG 2000 Images for Unidirectional Links." In 2010 Data Compression Conference. IEEE, 2010. http://dx.doi.org/10.1109/dcc.2010.81.

9

Wilkinson, Timothy S., James H. Kasner, Bernard V. Brower, and Sylvia S. Shen. "Multicomponent compression in JPEG 2000 Part II." In International Symposium on Optical Science and Technology, edited by Andrew G. Tescher. SPIE, 2001. http://dx.doi.org/10.1117/12.449755.

10

Sharpe II, Louis H., and Basil Manns. "JPEG 2000 options for document image compression." In Electronic Imaging 2002, edited by Paul B. Kantor, Tapas Kanungo, and Jiangying Zhou. SPIE, 2001. http://dx.doi.org/10.1117/12.450725.


Reports on the topic "JPEG 2000 Compression"

1

Orandi, Shahram, John M. Libert, John D. Grantham, Kenneth Ko, Stephen S. Wood, and Jin Chu Wu. Effects of JPEG 2000 image compression on 1000 ppi fingerprint imagery. National Institute of Standards and Technology, 2011. http://dx.doi.org/10.6028/nist.ir.7778.

2

Brislawn, Christopher M. Wavelet-Smoothed Interpolation of Masked Scientific Data for JPEG 2000 Compression. Office of Scientific and Technical Information (OSTI), 2012. http://dx.doi.org/10.2172/1048835.

3

Orandi, Shahram, John M. Libert, John D. Grantham, Frederick R. Byers, Lindsay M. Petersen, and Michael D. Garris. Effects of JPEG 2000 Lossy Image Compression on 1000 ppi Latent Fingerprint Casework. National Institute of Standards and Technology, 2013. http://dx.doi.org/10.6028/nist.ir.7780.

4

Libert, John M., Shahram Orandi, and John D. Grantham. Comparison of the WSQ and JPEG 2000 image compression algorithms on 500 ppi fingerprint imagery. National Institute of Standards and Technology, 2012. http://dx.doi.org/10.6028/nist.ir.7781.

5

Libert, John M., Shahram Orandi, Michael D. Garris, and John D. Grantham. Effects of Decomposition Levels and Quality Layers with JPEG 2000 Compression of 1000 ppi Fingerprint Images. National Institute of Standards and Technology, 2013. http://dx.doi.org/10.6028/nist.ir.7939.

