Academic literature on the topic 'Peak Signal Noise Rate (PSNR)'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Peak Signal Noise Rate (PSNR).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Peak Signal Noise Rate (PSNR)"

1

Cengiz, Enes, Muhammed Mustafa Kelek, Yüksel Oğuz, and Cemal Yılmaz. "Classification of breast cancer with deep learning from noisy images using wavelet transform." Biomedical Engineering / Biomedizinische Technik 67, no. 2 (2022): 143–50. http://dx.doi.org/10.1515/bmt-2021-0163.

Abstract:
In this study, breast cancer classification as benign or malignant was performed using images obtained by histopathological procedures, one of the medical imaging techniques. First, different noise types at several intensities were added to the images in the data set. The noise was then removed by applying the Wavelet Transform (WT) to the noisy images, and the denoising performance was quantified by evaluating the Peak Signal to Noise Ratio (PSNR) values of the images; the Gaussian noise type gave better results than the other noise types in terms of PSNR. The denoised images were then classified by a Convolutional Neural Network (CNN), one of the deep learning techniques, using both a proposed CNN model and the VggNet-16 model. The proposed CNN model produced better classification results than VggNet-16, with the best performance (86.9%) obtained from the data set created with Gaussian noise at 0.3 noise intensity.
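The PSNR metric evaluated in this study, and throughout the sources below, is defined from the mean squared error between a reference image and a distorted one. A minimal NumPy sketch (the 8-bit peak value of 255 and the Gaussian noise level are illustrative assumptions, not the study's data):

```python
import numpy as np

def psnr(original: np.ndarray, distorted: np.ndarray, peak: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio in dB between two same-shaped images."""
    mse = np.mean((original.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images: zero error
    return float(10.0 * np.log10(peak ** 2 / mse))

# Example: a flat gray image corrupted by Gaussian noise (sigma chosen arbitrarily)
rng = np.random.default_rng(0)
clean = np.full((64, 64), 128, dtype=np.uint8)
noisy = np.clip(clean.astype(np.float64) + rng.normal(0.0, 10.0, clean.shape), 0, 255).astype(np.uint8)
print(round(psnr(clean, noisy), 1))  # roughly 28 dB for sigma = 10
```

Higher PSNR means the distorted image is closer to the reference, which is why denoising papers report the PSNR of the restored image against the original.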
2

Yunus, Mahmuddin, and Agus Harjoko. "Penyembunyian Data pada File Video Menggunakan Metode LSB dan DCT." IJCCS (Indonesian Journal of Computing and Cybernetics Systems) 8, no. 1 (2014): 81. http://dx.doi.org/10.22146/ijccs.3498.

Abstract:
Hiding data in video files is known as video steganography. Well-known steganography methods include the Least Significant Bit (LSB) and Discrete Cosine Transform (DCT) methods. In this research, data were hidden in video files using the LSB method, the DCT method, and a combined LSB-DCT method, and the quality of the video file after insertion was measured using the Mean Square Error (MSE) and Peak Signal to Noise Ratio (PSNR). The experiments were conducted based on the size of the video file, the size of the inserted secret file, and the video resolution. The test results showed that the success rate of video steganography was 38% with the LSB method, 90% with the DCT method, and 64% with the combined LSB-DCT method. In the MSE calculation, the DCT method produced the lowest MSE, and the LSB-DCT method a smaller value than the LSB method. In the PSNR tests, the DCT method's PSNR was higher than those of the LSB and combined LSB-DCT methods, and the combined LSB-DCT method's PSNR was higher than the LSB method's. Keywords: Steganography, Video, Least Significant Bit (LSB), Discrete Cosine Transform (DCT), Mean Square Error (MSE), Peak Signal to Noise Ratio (PSNR)
3

Tyukhtyaev, Dmitry. "Researching Video Conference Services on IEEE 802.11x Wireless Networks." NBI Technologies, no. 4 (December 2021): 13–18. http://dx.doi.org/10.15688/nbit.jvolsu.2021.4.2.

Abstract:
The purpose of the study was to determine how the quality of video conferencing services depends on the characteristics of wireless communication channels and the number of users in a given network. The article describes the signal-strength characteristics of a wireless network, measured in decibels (dB), and discusses subjective and objective methods for assessing video. The PSNR and VQM metrics and the MSU Video Quality Measurement Tool software, created by the computer graphics laboratory of Moscow State University, were used as the objective assessment method; the DSCQS method was used for the subjective assessment. The PSNR (peak signal to noise ratio) metric is one of the most commonly used metrics: it measures the peak signal-to-noise ratio between the original signal and the signal at the output of the system. PSNR does not capture all video-specific parameters, as the fidelity of the image constantly changes depending on the visual complexity of the image, the available bit rate, and even the compression method. The Video Quality Measurement (VQM) metric is described in Recommendation ITU-R BT.1683. The test results show that VQM correlates highly with subjective methods of assessing video quality and is a candidate to become the standard in the field of objective quality assessment.
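For video, tools such as the one mentioned above compute PSNR frame by frame and then report a sequence-level score. A sketch of that idea (synthetic frames; averaging the per-frame dB values is one common convention, not the tool's documented internals):

```python
import numpy as np

def frame_psnr(ref: np.ndarray, deg: np.ndarray, peak: float = 255.0) -> float:
    """PSNR in dB between one reference frame and one degraded frame."""
    mse = np.mean((ref.astype(np.float64) - deg.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else float(10.0 * np.log10(peak ** 2 / mse))

rng = np.random.default_rng(1)
# Ten synthetic 32x32 frames standing in for an original and a degraded sequence
ref_frames = [rng.integers(0, 256, (32, 32), dtype=np.uint8) for _ in range(10)]
deg_frames = [np.clip(f.astype(np.float64) + rng.normal(0.0, 5.0, f.shape), 0, 255).astype(np.uint8)
              for f in ref_frames]

per_frame = [frame_psnr(r, d) for r, d in zip(ref_frames, deg_frames)]
mean_psnr = sum(per_frame) / len(per_frame)  # sequence-level score, averaged in dB
```

The per-frame values also show where quality dips occur in time, which a single averaged number hides.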
4

Youssif, Mohamed Ibrahim, Amr ElSayed Emam, and Mohamed Abd ElGhany. "Image multiplexing using residue number system coding over MIMO-OFDM communication system." International Journal of Electrical and Computer Engineering (IJECE) 9, no. 6 (2019): 4815. http://dx.doi.org/10.11591/ijece.v9i6.pp4815-4825.

Abstract:
Image transmission over an Orthogonal Frequency-Division Multiplexing (OFDM) communication system is prone to distortion and noise due to the high Peak-to-Average Power Ratio (PAPR) generated by the OFDM block. This paper studies the use of the Residue Number System (RNS) as a coding scheme for digital image transmission over a Multiple-Input Multiple-Output (MIMO)-OFDM transceiver communication system. The independent parallel feature of RNS, together with the reduced signal amplitude obtained by converting the input signal into parallel smaller residue signals, makes it possible to reduce the signal PAPR, decreasing signal distortion and the Bit Error Rate (BER), and consequently improving the received Signal-to-Noise Ratio (SNR) and the received image quality. The performance is analyzed through BER and PAPR, and image quality is measured by evaluating the Mean Squared Error (MSE), the Peak Signal to Noise Ratio (PSNR), and the correlation between the initial and retrieved images. Simulation results show the performance of the transmission/reception model with and without RNS coding.
6

V, Malathi, and Gopinath MP. "Noise Deduction in Novel Paddy Data Repository using Filtering Techniques." Scalable Computing: Practice and Experience 21, no. 4 (2020): 601–10. http://dx.doi.org/10.12694/scpe.v21i4.1718.

Abstract:
Classifying paddy crop diseases from prior knowledge is a current and challenging task for growing the economy of the country. In image processing, the initial step is to eliminate the noise present in the data set; removing the noise improves the quality of the image, and noise can be removed by applying filtering techniques. In this paper, a novel data repository was created from different paddy areas in Vellore, covering the following diseases: Bacterial Leaf Blight, Blast, Leaf Spot, Leaf Holder, Hispa, and healthy leaves. In the initial process, three kinds of noise, namely Salt and Pepper noise, Speckle noise, and Poisson noise, were removed using the Median and Wiener noise-filtering techniques. The performance of the Median and Wiener filtering techniques on these noises was measured using the following metrics: PSNR (peak signal to noise ratio), MSE (mean square error), Maxerr (maximum squared error), and L2rat (ratio of squared error). It is observed that the PSNR value of the hybrid approach is 18.42 dB, which produces a lower error rate compared with the traditional approach. The results suggest that the methods used in this paper are suitable for processing noise.
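The filter-then-score loop described above (median filtering against salt-and-pepper noise, judged by PSNR) can be sketched in plain NumPy. The gradient test image and the 5% noise density are illustrative assumptions, not the paper's data:

```python
import numpy as np

def psnr(a: np.ndarray, b: np.ndarray, peak: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio in dB between two same-shaped images."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else float(10.0 * np.log10(peak ** 2 / mse))

def median3x3(img: np.ndarray) -> np.ndarray:
    """3x3 median filter with edge-replicated borders, in plain NumPy."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    windows = np.stack([p[i:i + h, j:j + w] for i in range(3) for j in range(3)])
    return np.median(windows, axis=0).astype(img.dtype)

rng = np.random.default_rng(0)
clean = np.tile(np.arange(64, 192, 2, dtype=np.uint8), (64, 1))  # smooth 64x64 gradient
noisy = clean.copy()
mask = rng.random(clean.shape) < 0.05                             # corrupt 5% of pixels
noisy[mask] = rng.choice(np.array([0, 255], dtype=np.uint8), size=int(mask.sum()))

print(psnr(clean, noisy), psnr(clean, median3x3(noisy)))  # PSNR rises after filtering
```

The median filter suppresses isolated extreme pixels while leaving the smooth gradient intact, so the filtered image's PSNR against the clean reference is markedly higher than the noisy image's.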
7

Xu, Wanting, Hui Chen, Yuan Yuan, Sheng Luo, Huaibin Zheng, and Xiangan Yan. "High-fidelity sub-Nyquist ghost imaging with tri-directional probing." Journal of Applied Physics 131, no. 10 (2022): 103101. http://dx.doi.org/10.1063/5.0082828.

Abstract:
Ghost imaging is an unconventional imaging method that has found applications in various fields. However, achieving high-fidelity, high-resolution images at a sub-Nyquist sampling rate remains a major challenge. Here, we present a ghost imaging method that illuminates an object with three-directional Tetris-like patterns, which greatly eases the trade-off between high resolution and a high detection signal-to-noise ratio. As the projected patterns gradually shrink during detection, the image is gradually recovered from low to high resolution. In addition, this method can recover complex chromatic objects without compromising image quality by adaptively abandoning unnecessary patterns at sampling rates well below the Nyquist limit, and the dynamic probing scheme has an excellent noise-removal capability. The simulation and experiment demonstrate that the sampling rate needed to recover a high-fidelity image is only [Formula: see text] for a scene with a [Formula: see text] duty cycle. For a very noisy scene whose peak signal-noise rate (PSNR) is 10.18 dB (structural similarity index, SSIM, of 0.068), this scheme increases the PSNR to 18.63 dB (SSIM to 0.73). Therefore, the proposed method may be useful for ghost imaging in the low-sampling-rate regime or for the reconstruction of complex chromatic objects.
8

Qaswaa K. Abood. "Wavelet -Based Modified Intermediate Significant Bit Insertion for Text Steganography." Journal of the College of Basic Education 17, no. 70 (2022): 165–76. http://dx.doi.org/10.35950/cbej.vi.8486.

Abstract:
In this paper, we propose and investigate a modified version of the Intermediate Significant Bit (ISB) insertion algorithm, called here the MISB algorithm, for secret text steganography. The secret text is passed through a sequence of ciphering techniques, and the proposed Modified Intermediate Significant Bit (MISB) insertion algorithm hides the bit sequence of the cipher text in the time-frequency domain, using a wavelet transform of the image pixels, in order to improve the robustness of the steganography system. Results demonstrate the effectiveness of the wavelet-based MISB scheme, with its performance evaluated using the Peak Signal to Noise Ratio (PSNR) measure. Additional experiments evaluate noising and image compression of the cover images to compare their PSNR values. The results confirm the effectiveness of the proposed Wavelet-based Modified Intermediate Significant Bit (WMISB) insertion; the Bit Error Rate (BER), a key parameter for assessing systems that transmit digital data from one location to another, is also considered.
9

Rahul, Nagraj, and Goud Myadaboyina Srinidhi. "Digital Signal Processing Algorithms for Noise Reduction in Wireless Image Transmission Systems." Sarcouncil Journal of Engineering and Computer Sciences 4, no. 3 (2025): 9–16. https://doi.org/10.5281/zenodo.15047488.

Abstract:
Wireless Image Transmission Systems (WITS) are highly susceptible to noise distortions, leading to degraded image quality and increased transmission errors. This study evaluates the effectiveness of Digital Signal Processing (DSP) algorithms (Kalman filtering, wavelet-based denoising, Wiener filtering, and median filtering) for noise reduction in WITS under varying Signal-to-Noise Ratio (SNR) levels (5 dB to 20 dB). Performance is assessed using Peak Signal-to-Noise Ratio (PSNR), the Structural Similarity Index (SSIM), and Bit Error Rate (BER). The results indicate that Kalman filtering achieves the highest PSNR (33.2 dB) and SSIM (0.96), along with the lowest BER (0.005), at SNR = 20 dB, making it the most effective method for noise suppression. Wavelet-based denoising emerges as a computationally efficient alternative, offering a balance between image quality and processing speed. Statistical analyses, including ANOVA and paired t-tests, confirm significant differences (p < 0.001) in performance among the algorithms. The findings suggest that while Kalman filtering provides superior noise reduction, wavelet-based denoising is more suitable for real-time applications. Future research should explore hybrid DSP techniques and deep learning-based models for enhanced noise suppression in wireless imaging.
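BER, reported here alongside PSNR and SSIM, is simply the fraction of transmitted bits that arrive flipped. A minimal sketch (the binary-symmetric-channel flip probability of 0.005 is an illustrative assumption):

```python
import random

def ber(sent, received):
    """Bit Error Rate: fraction of differing bits between two equal-length bit lists."""
    assert len(sent) == len(received)
    errors = sum(s != r for s, r in zip(sent, received))
    return errors / len(sent)

random.seed(0)
sent = [random.randint(0, 1) for _ in range(10_000)]
# Binary symmetric channel: each bit flips independently with probability 0.005
received = [b ^ (random.random() < 0.005) for b in sent]
print(ber(sent, received))
```

With 10,000 bits the measured BER lands close to the channel's flip probability, which is why studies like this one quote BER values such as 0.005 directly.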
10

Lima, Regiano Karunia. "ANALISIS PERBANDINGAN REDUKSI NOISE MENGGUNAKAN METODE MEAN, MEDIAN DAN CONTRA-HARMONIC MEAN FILTERING PADA CITRA GRAYSCALE POLA TENUNAN DAERAH PROVINSI NUSA TENGGARA TIMUR." Jurnal Elektro dan Telekomunikasi Terapan 11, no. 1 (2024): 9–15. https://doi.org/10.25124/jett.v11i1.6827.

Abstract:
The quality of a digital image is affected by several aspects, such as sharpness, color sensitivity, and the amount of noise present. Image processing is required to produce digital images of better quality. This study implements the Mean filtering, Median filtering, and Contra-Harmonic Mean filtering methods to reduce Gaussian, Speckle, and Salt & Pepper noise in grayscale images of weaving patterns from several regions of East Nusa Tenggara Province. Testing was carried out by analyzing the average Mean Square Error (MSE) and Peak Signal to Noise Ratio (PSNR) values together with visual observation. MSE and PSNR are inversely related: a good image has the smallest MSE and the largest PSNR. The results show that for reducing Gaussian noise, all three methods performed well, with PSNR values of around 67 dB; for reducing Speckle noise, the Mean filtering and Contra-Harmonic Mean filtering methods outperformed Median filtering, with PSNR values of around 77 dB; and for reducing Salt & Pepper noise, the Median filtering method outperformed Mean filtering and Contra-Harmonic Mean filtering, with a PSNR value of 86.2846 dB. Keywords: mean filtering, median filtering, contra-harmonic mean filtering, gaussian noise, speckle noise, salt & pepper noise
More sources

Dissertations / Theses on the topic "Peak Signal Noise Rate (PSNR)"

1

Dandu, Sai Venkata Satya Siva Kumar, and Sujit Kadimisetti. "2D SPECTRAL SUBTRACTION FOR NOISE SUPPRESSION IN FINGERPRINT IMAGES." Thesis, Blekinge Tekniska Högskola, Institutionen för tillämpad signalbehandling, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-13848.

Abstract:
Human fingerprints are rich in details called minutiae, which can be used as identification marks for fingerprint verification. To obtain these details, fingerprint capturing techniques need to be improved, since noise from outside is added when the fingerprint is captured. The goal of this thesis is to remove the noise present in the fingerprint image. To achieve a good-quality fingerprint image, this noise has to be removed or suppressed, which is done here using an algorithm called 'Spectral Subtraction', based on subtracting the estimated noise spectrum from the noisy signal spectrum. The performance of the algorithm is assessed by comparing the original fingerprint image and the image obtained after spectral subtraction using several parameters such as PSNR and SSIM, for different fingerprints in the database. Finally, performance matching was done using the NIST matching software, and the results were presented in the form of Receiver Operating Characteristic (ROC) graphs using MATLAB.
2

Mendes, Valenzuela Gracieth. "Mecanismo de seleção de rede em ambientes heterogêneos baseado em qualidade de experiência (qoe)." Universidade Federal de Pernambuco, 2011. https://repositorio.ufpe.br/handle/123456789/2728.

Abstract:
With the growing popularity of wireless networks and the Wi-Fi (Wireless Fidelity) and WiMAX (Worldwide Interoperability for Microwave Access) technologies, the need arose to promote convergence between them, offering the user multiple connectivity opportunities and the possibility of establishing communication without interruptions. In this context, this work proposes a network selection mechanism based on the user's Quality of Experience (QoE). From the results collected in simulations performed in the Network Simulator (ns-2), a database was built with the history of values obtained with the Peak Signal Noise Ratio (PSNR) metric, used together with the IEEE 802.21 protocol to execute the handover decision. The main contribution of this work is the selection of heterogeneous networks in which the handover decision takes the user's perception into account, through the QoE metric. The results show that the QoE-based handover decision yields a 15% increase in video quality.
3

CRUZ, Hugo Alexandre Oliveira da. "Metodologia de predição de perda de propagação e qualidade de vídeo em redes sem fio indoor por meio de redes neurais artificiais." Universidade Federal do Pará, 2018. http://repositorio.ufpa.br/jspui/handle/2011/10029.

Abstract:
This dissertation presents a methodology that aims to assist the planning of indoor wireless network systems, which require prior knowledge of the environments in which they will be deployed. Accurate signal analysis is thus necessary, by means of an empirical statistical approach that takes into account several factors that influence indoor signal propagation: the architecture of the buildings; the arrangement of furniture inside the rooms; the number of walls and floors of various materials; and the scattering of radio waves. The methodology adopted is based on measurements with a cross-layer approach, which demonstrates the impact of the physical layer on the application layer, in order to predict the behavior of the Quality of Experience (QoE) metric Peak Signal-to-Noise Ratio (PSNR) in 4K video transmissions over 802.11ac wireless networks in the indoor environment. To this end, measurements were performed that demonstrate how the signal/video degrades in the studied environment, and this degradation was modeled by means of a computational intelligence technique, Artificial Neural Networks (ANN), whose input parameters include, for example, the distance from the transmitter to the receiver and the number of walls crossed, in order to predict propagation loss and PSNR loss. To evaluate the predictive capacity of the proposed methods, the Root Mean Square (RMS) errors between the measured and predicted data were obtained for the propagation-loss and PSNR-loss prediction methods, with respective values of 2.17 dB and 2.81 dB.
4

Vergütz, Stéphany. "Uma combinação entre os critérios objetivo e subjetivo na classificação de imagens mamográficas comprimidas pelo método fractal." Universidade Federal de Uberlândia, 2013. https://repositorio.ufu.br/handle/123456789/14568.

Abstract:
Images are relevant sources of information in many areas of science and technology, and processing this information improves and optimizes its use. Image compression makes the representation of information more efficient, reducing the amount of data required to represent an image. The objective of this study is to evaluate the performance of the fractal compression technique on mammograms through a combination of an objective criterion, provided by the Peak Signal to Noise Ratio (PSNR), and a subjective criterion, given by the visual analysis of expert physicians. The visual analysis was performed by comparing mammograms compressed at different compression rates with the original image, with the experts classifying the compressed images as 'unacceptable', 'acceptable', 'good', or 'excellent'. In this way, the compression rate and PSNR values at which compressed mammograms are still considered acceptable by the experts were identified. To compare the performance of the fractal compression technique with another compression method, the visual analysis was also carried out on the same images compressed with the JPEG2000 method.
5

Belda, Ortega Román. "Mejora del streaming de vídeo en DASH con codificación de bitrate variable mediante el algoritmo Look Ahead y mecanismos de coordinación para la reproducción, y propuesta de nuevas métricas para la evaluación de la QoE." Doctoral thesis, Universitat Politècnica de València, 2021. http://hdl.handle.net/10251/169467.

Abstract:
This thesis presents several proposals aimed at improving video transmission through the DASH (Dynamic Adaptive Streaming over HTTP) standard. This research work studies the DASH transmission protocol and its characteristics. At the same time, this work proposes the use of encoding with constant quality and variable bitrate as the most suitable video content encoding mode for on-demand content transmission through the DASH standard. Based on the proposal to use the constant quality encoding mode, the role played by adaptation algorithms in the user experience when consuming multimedia content becomes more important. In this sense, this thesis presents an adaptation algorithm called Look Ahead which, without modifying the standard, uses the information on the sizes of the video segments included in the multimedia containers to avoid adaptation decisions that lead to undesirable stalls during the playback of multimedia content. To evaluate the improvements of the presented adaptation algorithm, three objective QoE evaluation models are proposed. These models allow the QoE experienced by users to be predicted objectively in a simple way, using well-known parameters such as the average bitrate, the PSNR (Peak Signal-to-Noise Ratio), and the VMAF (Video Multimethod Assessment Fusion), all applied to each segment. Finally, the DASH behavior in Wi-Fi environments with high user density is analyzed. 
In this context, a high number of playback stalls can occur because of poor estimation of the available transfer rate, due to the ON/OFF download pattern of DASH and the variability of Wi-Fi medium access. To mitigate this situation, a coordination service based on SAND (MPEG's Server and Network Assisted DASH) is proposed, which provides an estimation of the transfer rate based on information about the state of the clients' players.<br>Belda Ortega, R. (2021). Mejora del streaming de vídeo en DASH con codificación de bitrate variable mediante el algoritmo Look Ahead y mecanismos de coordinación para la reproducción, y propuesta de nuevas métricas para la evaluación de la QoE [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/169467<br>TESIS
APA, Harvard, Vancouver, ISO, and other styles
6

Dizon, Lucas, and Martin Johansson. "Atrial Fibrillation Detection Algorithm Evaluation and Implementation in Java." Thesis, KTH, Skolan för teknik och hälsa (STH), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-158878.

Full text
Abstract:
Atrial fibrillation is a common heart arrhythmia which is characterized by a missing or irregular contraction of the atria. The disease is a risk factor for other, more serious diseases, and its total medical costs to society are extensive. It would therefore be beneficial to improve and optimize the prevention and detection of the disease. Pulse palpation and heart auscultation can facilitate the detection of atrial fibrillation clinically, but the diagnosis is generally confirmed by an ECG examination. Today there are several algorithms that detect atrial fibrillation by analysing an ECG. A common method is to study the heart rate variability (HRV) and, by different types of statistical calculations, find episodes of atrial fibrillation which deviate from normal sinus rhythm. Two algorithms for detection of atrial fibrillation have been evaluated in Matlab. One is based on the coefficient of variation and the other uses a logistic regression model. Training and testing of the algorithms were done with data from the Physionet MIT database. Several steps of signal processing were used to remove different types of noise and artefacts before the data could be used. When testing the algorithms, the CV algorithm performed with a sensitivity of 91.38%, a specificity of 93.93% and an accuracy of 92.92%, while the logistic regression algorithm achieved a sensitivity of 97.23%, a specificity of 93.79% and an accuracy of 95.39%. The logistic regression algorithm performed better and was chosen for implementation in Java, where it achieved a sensitivity of 97.31%, a specificity of 93.47% and an accuracy of 95.25%.
APA, Harvard, Vancouver, ISO, and other styles
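The sensitivity, specificity, and accuracy figures quoted in abstracts like the one above all derive from the same confusion-matrix counts; a minimal sketch (the counts below are illustrative, not taken from the thesis):

```python
def confusion_metrics(tp, fp, tn, fn):
    """Sensitivity (recall on positives), specificity (recall on
    negatives), and overall accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# illustrative counts: 97 true positives, 6 false positives,
# 94 true negatives, 3 false negatives
sens, spec, acc = confusion_metrics(97, 6, 94, 3)
```

Note that accuracy alone can be misleading when the classes are imbalanced, which is why detection papers like this one report all three figures.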
7

Ramkumar, M. "Some New Methods For Improved Fractal Image Compression." Thesis, 1996. https://etd.iisc.ac.in/handle/2005/1897.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Ramkumar, M. "Some New Methods For Improved Fractal Image Compression." Thesis, 1996. http://etd.iisc.ernet.in/handle/2005/1897.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Peak Signal Noise Rate (PSNR)"

1

Ma, Xiaoyu, Kunmei Li, Zhiwei Wang, et al. "Hybrid Noise Eliminating Algorithm for Radar Target Images Based on the Time-Frequency Domain." In Lecture Notes in Civil Engineering. Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-97-4355-1_38.

Full text
Abstract:
The radar target imaging effect directly affects the resolution of the radar target, which in turn affects the commander's decision. However, the hybrid noise composed of speckle and Gaussian noise is one of the main affecting factors. Existing image denoising methods struggle to eliminate the hybrid noise in radar images. Hence, this paper proposes a new hybrid noise elimination algorithm for the radar target image. Based on the strong correlation between wavelet coefficients, this algorithm first uses the wavelet coefficient correlation denoising algorithm (WCCDA) to filter the high-frequency information and the high-frequency part of the low-frequency information for different directions of the three channels of the image. Then, an improved adaptive median filtering algorithm (IAMF) is proposed to perform fine-grained filtering on each reconstructed channel. Finally, the radar target image is reconstructed. The results show that the proposed algorithm outperforms the comparison approaches in the peak signal-to-noise ratio (PSNR) and mean-square error (MSE) indexes, with better denoising effects.
APA, Harvard, Vancouver, ISO, and other styles
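PSNR and MSE, the figures of merit used throughout this list, are straightforward to compute; a minimal sketch (the pixel values are illustrative, and a peak of 255 assumes 8-bit images):

```python
import math

def mse(a, b):
    """Mean squared error between two equal-sized images (flat pixel lists)."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means the denoised image
    is closer to the reference. Infinite for identical images."""
    m = mse(a, b)
    if m == 0:
        return float("inf")
    return 10.0 * math.log10(peak ** 2 / m)

clean = [52, 55, 61, 59, 79, 61, 76, 61]   # illustrative 8-bit pixels
noisy = [54, 53, 62, 58, 80, 60, 77, 63]
quality = psnr(clean, noisy)
```

Because PSNR is a log-scaled inverse of MSE, the two indexes always move together; papers report both mainly for readability.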
2

Deshpande, Anand, and Prashant P. Patavardhan. "Super-Resolution of Long Range Captured Iris Image Using Deep Convolutional Network." In Advances in Parallel Computing. IOS Press, 2017. https://doi.org/10.3233/978-1-61499-822-8-244.

Full text
Abstract:
This chapter proposes a deep convolutional neural network based super-resolution framework to super-resolve and to recognize the long-range captured iris image sequences. The proposed framework is tested on CASIA V4 iris database by analyzing the peak signal-to-noise ratio (PSNR), structural similarity index matrix (SSIM) and visual information fidelity in pixel domain (VIFP) of the state-of-art algorithms. The performance of the proposed framework is analyzed for the upsampling factors 2 and 4 and achieved PSNRs of 37.42 dB and 34.74 dB respectively. Using this framework, we have achieved an equal error rate (EER) of 0.14%. The results demonstrate that the proposed framework can super-resolve the iris images effectively and achieves better recognition performance.
APA, Harvard, Vancouver, ISO, and other styles
3

Sasirekha, K., and K. Thangavel. "A Novel Biometric Image Enhancement Approach With the Hybridization of Undecimated Wavelet Transform and Deep Autoencoder." In Handbook of Research on Machine and Deep Learning Applications for Cyber Security. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-5225-9611-0.ch012.

Full text
Abstract:
For a long time, image enhancement techniques have been widely used to improve the image quality in many image processing applications. Recently, deep learning models have been applied to image enhancement problems with great success. In the domain of biometrics, fingerprints and faces play a vital role in authenticating a person correctly. Hence, the enhancement of these images significantly improves the recognition rate. In this chapter, the undecimated wavelet transform (UDWT) and a deep autoencoder are hybridized to enhance the quality of images. Initially, the images are decomposed with a Daubechies wavelet filter. Then, the deep autoencoder is trained to minimize the error between the reconstructed and actual input. The experiments have been conducted on real-time fingerprint and face images collected from 150 subjects, each with 10 orientations. The signal-to-noise ratio (SNR), peak signal-to-noise ratio (PSNR), mean squared error (MSE), and root mean squared error (RMSE) have been computed and compared. It was observed that the proposed model produced biometric images of high quality.
APA, Harvard, Vancouver, ISO, and other styles
4

Liu, Meifeng, Guoyun Zhong, Yueshun He, Kai Zhong, Hongmao Chen, and Mingliang Gao. "Fast HEVC Inter-Prediction Algorithm Based on Matching Block Features." In Research Anthology on Recent Trends, Tools, and Implications of Computer Programming. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-3016-0.ch012.

Full text
Abstract:
A fast inter-prediction algorithm based on matching block features is proposed in this article. The position of the matching block of the current CU in the previous frame is found by the motion vector estimated by the correspondingly located CU in the previous frame. Then, a weighted motion vector computation method is presented to compute the motion vector of the matching block of the current CU according to the motions of the PUs the matching block covers. A binary decision tree is built to decide the CU depths and PU mode for the current CU. Four training features are drawn from the characteristics of the CUs and PUs the matching block covers. Simulation results show that the proposed algorithm achieves an average 1.1% BD-rate saving, a 14.5% coding time saving, and a 0.01-0.03 dB improvement in peak signal-to-noise ratio (PSNR), compared to the existing fast inter-prediction algorithm in HEVC.
APA, Harvard, Vancouver, ISO, and other styles
5

Li, Fangfang, Vladimir Vasilyevich Lukin, Krzysztof Okarma, Yanjun Fu, and Jiangang Duan. "Intelligent Lossy Compression Method of Providing a Desired Visual Quality for Images of Different Complexity." In Advances in Transdisciplinary Engineering. IOS Press, 2022. http://dx.doi.org/10.3233/atde220050.

Full text
Abstract:
Lossy compression plays a vital role in modern digital image processing for producing a high compression ratio. However, distortion is unavoidable, which affects further image processing and must be handled with care. Providing a desired visual quality is an efficient approach for reaching a trade-off between introduced distortions and compression ratio; it aims to control the visual quality of the decompressed images and ensure they are no worse than the user requires. This paper proposes an intelligent lossy compression method of providing a desired visual quality, which considers the complexity of various images. This characteristic is utilized to choose an appropriate average rate-distortion curve for an image to be compressed. Experiments have been conducted for a Discrete Cosine Transform (DCT) based lossy compression coder; Peak Signal-Noise Ratio (PSNR) has been employed to evaluate the visual quality. The results show that our new method provides a general improvement in accuracy, and the proposed algorithm for classifying image complexity by entropy calculation is simpler and faster than earlier proposed counterparts. In addition, it is possible to find “strange” images which produce the largest errors in providing a desired quality of compression.
APA, Harvard, Vancouver, ISO, and other styles
6

"Steganography Techniques Based on Modulus Function and PVD Against PDH Analysis and FOBP." In Advanced Digital Image Steganography Using LSB, PVD, and EMD. IGI Global, 2019. http://dx.doi.org/10.4018/978-1-5225-7516-0.ch007.

Full text
Abstract:
This chapter proposes two improved steganography techniques by addressing two problems in the existing literature. The first proposed technique is modulus function-based steganography, and it addresses pixel difference histogram (PDH) analysis. The modulus function is used to calculate an evaluation function, and based on the value of the evaluation function an embedding decision is taken. There are two variants of this technique: (1) modulus 9 steganography and (2) modulus 16 steganography. In modulus 9 steganography, the embedding capacity in a pair of pixels is 3 bits, and in modulus 16 steganography the embedding capacity in a pair of pixels is 4 bits. Both variants possess higher PSNR values. The experimental results prove that PDH analysis cannot detect this technique. The second proposed technique is based on pixel value differencing with modified least significant bit (MLSB) substitution, and it addresses the fall-off boundary problem (FOBP). This technique operates on 2×2 pixel blocks. In one pixel of a block, data hiding is performed using MLSB substitution. Based on the new value of this pixel, three difference values with the three neighboring pixels are calculated. Using these difference values, the PVD approach is applied. Experimental results prove that PDH analysis and RS analysis are unable to detect this proposed technique. The recorded values of bit rate and peak signal-to-noise ratio are also satisfactory.
APA, Harvard, Vancouver, ISO, and other styles
7

Jiang, Ping. "A Study on Image Denoising Under Multi-Objective-Based Algorithm." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2024. http://dx.doi.org/10.3233/faia241120.

Full text
Abstract:
In order to improve the image quality in neutron imaging, a denoising method combining particle swarm optimization (PSO) algorithm and wavelet threshold function is adopted in this study. The denoising threshold is adjusted by particle swarm optimization algorithm to effectively reduce Poisson noise and maintain image details. Experimental results show that compared with other methods, this method is more effective in removing noise, and can significantly improve the peak signal to noise ratio (PSNR) and reduce the mean square error (MSE) of the image, thus improving the image quality.
APA, Harvard, Vancouver, ISO, and other styles
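The wavelet-threshold denoising described in entries like the one above follows a common three-step pattern: decompose, shrink small detail coefficients (assumed to be noise), and reconstruct. A one-level Haar sketch on a 1-D signal of even length (the threshold and data are illustrative; the cited work tunes the threshold with PSO rather than fixing it by hand):

```python
def haar_decompose(signal):
    """One level of the Haar wavelet transform: pairwise averages
    (approximation) and pairwise half-differences (detail)."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def soft_threshold(coeffs, t):
    """Shrink coefficients toward zero; ones smaller than t (noise) vanish."""
    return [max(abs(c) - t, 0.0) * (1 if c >= 0 else -1) for c in coeffs]

def haar_reconstruct(approx, detail):
    """Exact inverse of haar_decompose: s0 = a + d, s1 = a - d."""
    out = []
    for a, d in zip(approx, detail):
        out.extend([a + d, a - d])
    return out

noisy = [10.0, 10.4, 10.1, 9.9, 20.0, 19.6, 20.2, 20.1]  # illustrative signal
a, d = haar_decompose(noisy)
denoised = haar_reconstruct(a, soft_threshold(d, 0.3))
```

The choice of threshold is the whole game: too low leaves noise, too high erases detail, which is exactly the trade-off the PSO search in the entry above optimizes.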
8

Fradi, Marwa, Kais Bouallegue, Philippe Lasaygues, and Mohsen Machhout. "Automatic Noise Reduction in Ultrasonic Computed Tomography Image for Adult Bone Fracture Detection." In Biomedical Engineering. IntechOpen, 2022. http://dx.doi.org/10.5772/intechopen.101714.

Full text
Abstract:
Noise reduction in medical image analysis is still an interesting hot topic, especially in the field of ultrasonic images. Considerable attention has been given to automatically reducing noise in human-bone ultrasonic computed tomography (USCT) images. In this chapter, a new hardware prototype, called USCT, is used, but the images given by this device are noisy and difficult to interpret. Our approach aims to reinforce the peak signal-to-noise ratio (PSNR) in these images to perform an automatic segmentation for bone structures and pathology detection. First, we propose to improve USCT image quality by implementing the discrete wavelet transform algorithm. Second, we focus on a hybrid algorithm combining k-means with the Otsu method, hence improving the PSNR. Our performance assessment shows that the algorithmic approach is comparable with recent methods and outperforms most of them in its ability to enhance the PSNR to detect edges and pathologies in the USCT images. Our proposed algorithm can be generalized to any medical image to carry out automatic image diagnosis due to noise reduction, improving on classical medical image analysis by achieving a short processing time.
APA, Harvard, Vancouver, ISO, and other styles
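Otsu's method, which the entry above hybridizes with k-means, selects the gray-level threshold that maximizes the between-class variance of the image histogram; a minimal sketch (the toy histogram is illustrative):

```python
def otsu_threshold(histogram):
    """Return the gray level t that maximizes the between-class variance
    w0 * w1 * (m0 - m1)**2 of the pixels split at t."""
    total = sum(histogram)
    grand_sum = sum(i * h for i, h in enumerate(histogram))
    best_t, best_var = 0, -1.0
    w0, sum0 = 0, 0.0
    for t, h in enumerate(histogram):
        w0 += h                      # pixels at or below t (class 0)
        sum0 += t * h                # their intensity mass
        if w0 == 0 or w0 == total:   # skip degenerate one-class splits
            continue
        w1 = total - w0
        m0 = sum0 / w0
        m1 = (grand_sum - sum0) / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# toy bimodal histogram over 8 gray levels: dark cluster at 0-1, bright at 6-7
t = otsu_threshold([10, 8, 0, 0, 0, 0, 9, 12])  # lands between the clusters
```

Like k-means with k = 2 on intensities, Otsu needs no training data, which is why the two cluster naturally in hybrid segmentation pipelines.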
9

Hubert, G., and S. Silvia Priscila. "Efficient Noise Removal From Preterm Baby Retinopathy Images Using Various Filtering Approaches." In Clinical and Comparative Research on Maternal Health. IGI Global, 2024. http://dx.doi.org/10.4018/979-8-3693-5941-9.ch003.

Full text
Abstract:
For healthcare practitioners to perform reliable and precise assessments, ensuring early detection and appropriate treatment of retinal illnesses, high-quality, noise-free images are essential. An essential step in improving the reliability and precision of medical diagnoses is the reduction of noise from premature newborns' retinopathy photos. Effective noise removal can be accomplished by using various filters and methods. Recent research has examined the effectiveness of noise removal methods, specifically the homomorphic filter (HF), Laplacian of Gaussian (LOG) filter, and adaptive filter (AF), with a focus on improving the clarity of retinopathy photos in preterm infants. The authors carefully compared the results of the homomorphic, LOG, and adaptive filters via thorough testing and assessment criteria such as mean squared error (MSE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM). Applying the LOG filter produced better outcomes for each studied output parameter, producing an MSE of 0.000119, a PSNR of 42.34, and an SSIM of 0.998, respectively. The tool used for execution is Python.
APA, Harvard, Vancouver, ISO, and other styles
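A discrete Laplacian-of-Gaussian kernel such as the LOG filter compared above can be built directly from its closed form; a sketch under common conventions (the size, sigma, and zero-sum normalization are illustrative choices, not the chapter's exact implementation):

```python
import math

def log_kernel(size, sigma):
    """Discrete Laplacian-of-Gaussian kernel: ((r^2 - 2*sigma^2)/sigma^4)
    * exp(-r^2 / (2*sigma^2)), shifted so the entries sum to zero
    (flat image regions then produce zero response)."""
    half = size // 2
    s2 = sigma * sigma
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            r2 = x * x + y * y
            row.append((r2 - 2 * s2) / (s2 * s2) * math.exp(-r2 / (2 * s2)))
        kernel.append(row)
    mean = sum(map(sum, kernel)) / size ** 2
    return [[v - mean for v in row] for row in kernel]

k = log_kernel(5, 1.0)  # 5x5 kernel, sigma = 1: strong negative center lobe
```

Convolving an image with this kernel combines Gaussian smoothing (noise suppression) with Laplacian edge response in one pass, which is why LOG often beats plain sharpening filters on noisy images.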
10

Zangana, Hewa Majeed, and Firas Mahmood Mustafa. "Wavelet-Autoencoder Hybrid Model for Enhanced Image Denoising in Medical Imaging." In Advances in Medical Diagnosis, Treatment, and Care. IGI Global, 2025. https://doi.org/10.4018/979-8-3693-9816-6.ch019.

Full text
Abstract:
This chapter proposes a novel hybrid approach that combines the strengths of wavelet transform with the powerful learning capabilities of autoencoder networks to achieve superior denoising performance. By leveraging wavelet decomposition to process images at multiple scales and feeding these decomposed signals into a deep autoencoder network, we effectively suppress noise while maintaining high-frequency details. Extensive experiments demonstrate that our method outperforms existing techniques, yielding significant improvements in both peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM). The results suggest that the integration of wavelet transform and autoencoder networks offers a promising solution for robust image denoising, especially in scenarios with complex noise patterns.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Peak Signal Noise Rate (PSNR)"

1

Stephenson, James, and Charles Tinney. "Extracting Blade Vortex Interactions using Continuous Wavelet Transforms." In Vertical Flight Society 70th Annual Forum & Technology Display. The Vertical Flight Society, 2014. http://dx.doi.org/10.4050/f-0070-2014-9416.

Full text
Abstract:
An extraction method is proposed to investigate blade vortex interaction noise emitted during helicopter transient maneuvering flight. The extraction method allows for the investigation of blade vortex interactions, independent of other sound sources. It is based on filtering the spectral representation of experimentally acquired full-scale helicopter acoustic data. The data is first transformed into time-frequency space through the wavelet transformation, with blade vortex interactions identified and filtered by their high-amplitude, high-frequency impulsive content. The filtered wavelet coefficients are then inverse transformed to create a pressure signature solely related to blade vortex interactions. Analysis on a synthetic data set is conducted, and it is shown that blade vortex interactions can be accurately extracted so long as the blade vortex interaction peak energy signal is greater than or equal to the energy in the main rotor harmonic. A brief analysis shows that the extraction method performs admirably throughout a fast advancing-side roll maneuver. Using this method, it was shown that peak blade vortex interaction noise levels are linked directly to the roll rate of the vehicle, and are directed towards the retreating side during the transient portion of the maneuver.
APA, Harvard, Vancouver, ISO, and other styles
2

Bignold, G. J., and G. P. Quirk. "Electrochemical Noise Measurements in a 500 MW Steam Turbine to Maximise Lifetime under Changing Operational Demands." In CORROSION 2002. NACE International, 2002. https://doi.org/10.5006/c2002-02333.

Full text
Abstract:
Abstract Steam turbine blade and disc failures have occurred from time to time throughout the world. Although they are rare events, the implications for safety, for repair costs and for loss of availability are severe. Current operational and maintenance practices for any particular turbine design may be based on many years of satisfactory service. However, recent deregulation of the power industry in the UK and forthcoming deregulation in the USA may have an effect on operational and reliability issues, as changing operational demands are placed upon power generators through economic forces. In the UK in the 1990s with deregulation and privatization of power generation, the introduction of high efficiency gas fired stations has transformed the profile of the industry. The nuclear stations, which can only operate safely under base load conditions, were protected by government regulation (the so-called “nuclear levy”). These gas fired and nuclear plants, together with those coal-fired stations that have been fitted with flue gas desulfurization (FGD) systems, accounted for the majority of base load supply. The remaining coal-fired stations were, therefore, forced towards operating for peak demand rather than base load. They have had to develop operating procedures that enable them to provide power flexibly and economically with rapid response to variable demand. One of these procedures covers running up the unit to stable conditions before power is produced. This paper gives an account of the research carried out to confirm the risks of turbine damage, which was essentially due to corrosion during the start-up sequence. The paper illustrates the research that was carried out over 7 months using on-line corrosion monitoring with the electrochemical noise technique. Probes were installed directly within the low pressure (LP) section of an operating 500 MW turbine to gather data that would demonstrate the cause of pitting corrosion at the turbine blade root. 
This pitting had been found to be the primary factor in the onset of stress corrosion cracking (SCC), which eventually caused the failure of a turbine blade with very serious consequences for the unit. Custom-built probes were fabricated from sections of turbine blades to allow corrosion monitoring of the blade material. A high integrity design enabled these probes to be mounted within 30 cm of the last row of turbine blades in the LP section. They were mounted level with the blade root where pitting had been observed, so that the probes would experience as nearly as possible the same environment as the turbine blades (including the effects of cooling sprays on the last stage). Signal cables were routed out of the turbine through stainless steel piping to an instrumentation unit mounted on the top of the turbine casing. This instrumentation carried out the corrosion monitoring on a second-by-second basis and the digitized signals were then transmitted about 300 m to the data acquisition system installed in a computer room above the station control room. The early results showed that the corrosion in this unit did indeed occur predominantly during the start-up sequence, coinciding with the presence of chloride contamination in the condensate, which was sprayed onto the final stage for blade cooling. After 3 months of the monitoring program, engineering modifications were completed on the spray system to eliminate the risk of spraying with contaminated condensate. The corrosion monitoring then continued for a further 4 months and demonstrated that the rate of the corrosion had been very significantly reduced. This quantitative demonstration of reduction of corrosion activity enabled rescheduling of ultrasonic inspections of the blade roots, with a consequent increase in station availability.
APA, Harvard, Vancouver, ISO, and other styles
3

Guruprasad, Kamalesh Kumar Mandakolathur, Gayatri Sunil Ambulkar, and Geetha Nair. "Federated Learning for Seismic Data Denoising: Privacy-Preserving Paradigm." In International Petroleum Technology Conference. IPTC, 2024. http://dx.doi.org/10.2523/iptc-23888-ms.

Full text
Abstract:
Federated Learning (FL) is a framework that empowers multiple clients to develop robust machine learning (ML) algorithms while safeguarding data privacy and security. This paper's primary goal is to investigate the capability of the FL framework in preserving privacy and to assess its efficacy for clients operating within the oil and gas industry. To demonstrate the practicality of this framework, we apply it to seismic denoising use cases incorporating data from clients with IID (independent and identically distributed) and Non-IID (non-independent and non-identically distributed) or domain-shifted data distributions. The FL setup is implemented using the well-established Flower framework. The experiment involves injecting noise into 3D seismic data and subsequently employing various ML algorithms to eliminate this noise. All experiments were conducted using both IID and Non-IID data, employing both traditional and FL approaches, with various tests considering different types of noise, noise factors, numbers of 2D seismic slices, diverse models, numbers of clients, and aggregation strategies. We tested different model aggregation strategies, such as FedAvg, FedProx, and FedCyclic, alongside client selection strategies that consider model divergence, convergence trend similarity, and client weight analysis to improve the aggregation process. We also incorporated batch normalization into the network architecture to reduce data discrepancies among clients. The denoising process was evaluated using metrics like mean squared error (MSE), signal-to-noise ratio (SNR), and peak signal-to-noise ratio (PSNR). A comparison between conventional methods and FL demonstrated that FL exhibited a reduced error rate, especially when dealing with larger datasets. Furthermore, FL harnessed the power of parallel computing, resulting in a notable 30% increase in processing speed, enhanced resource utilization, and a remarkable 99% reduction in communication costs. 
To sum it up, this study underscores the potential of FL in the context of seismic denoising, safeguarding data privacy, and enhancing overall performance. We addressed the associated challenges by experimenting with various approaches for client selection and aggregation within a privacy-preserving framework. Notably, among these aggregation strategies, FedCyclic stands out as it offers faster convergence, achieving performance levels comparable to FedAvg and FedProx with fewer training iterations.
APA, Harvard, Vancouver, ISO, and other styles
4

Takahashi, Kanato, Masaomi Kimura, Imam Mukhlash, and Mohammad Iqbal. "A Method for Adversarial Example Generation Using Wavelet Transformation." In AHFE 2023 Hawaii Edition. AHFE International, 2023. http://dx.doi.org/10.54941/ahfe1004250.

Full text
Abstract:
With the advance of Deep Neural Networks (DNN), the accuracy of various tasks in machine learning has dramatically improved. Image classification is one of the most typical tasks. However, various papers have pointed out the vulnerability of DNN. It is known that small changes to an image can easily make a DNN model misclassify it. Images with such small changes are called adversarial examples. This vulnerability of DNN is a major problem in practical image recognition. There has been research on methods to generate adversarial examples and on methods to defend DNN models against being fooled by them. In addition, the transferability of adversarial examples can be used to easily attack a model in a black-box attack situation. Many attack methods add perturbations to images in the spatial domain. However, we focus on the spatial frequency domain and propose a new attack method. Since the low-frequency component is responsible for the overall tendency of color distributions in an image, it is easy to see the change if it is modified. On the other hand, the high-frequency component of an image holds less information than the low-frequency component; even if it is changed, the change is less apparent in the appearance of the image. Therefore, it is difficult to perceive an attack on the high-frequency component at a glance, which makes it easy to attack. Thus, by adding perturbation to the high-frequency components of images, we can expect to generate adversarial examples that appear similar to the original image to the human eye. R. Duan et al. used a discrete cosine transformation for images when focusing on the spatial frequency domain. This was a method using quantization, which drops information that DNN models would have extracted. 
However, this method has the disadvantage that block-like noise appears in the resulting image, because the target image is divided into 8 × 8 blocks to apply the discrete cosine transformation. To avoid this disadvantage, we propose a method which applies the wavelet transformation to target images. Reducing the information in the high-frequency component changes the image with a perturbation that is not noticeable, which results in a smaller change to the image than in previous studies. For the experiments, the peak signal-to-noise ratio (PSNR) was used to quantify how much the image was degraded from the original. In our experiments, we compared the results of our method, with different learning rates used to generate perturbations, against the previous study, and found that the maximum PSNR of our method was about 43, compared to about 32 in the previous study. Unlike previous studies, the attack success rate was also improved without using quantization: our method improved attack accuracy by about 9% compared to the previous work.
APA, Harvard, Vancouver, ISO, and other styles
5

Kong, Qiuqiang, Yong Xu, Philip J. B. Jackson, Wenwu Wang, and Mark D. Plumbley. "Single-Channel Signal Separation and Deconvolution with Generative Adversarial Networks." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/381.

Full text
Abstract:
Single-channel signal separation and deconvolution aims to separate and deconvolve individual sources from a single-channel mixture. It is a challenging problem in which no prior knowledge of the mixing filters is available: both the individual sources and the mixing filters need to be estimated. In addition, a mixture may contain non-stationary noise which is unseen in the training set. We propose a synthesizing-decomposition (S-D) approach to solve the single-channel separation and deconvolution problem. In synthesizing, a generative model for sources is built using a generative adversarial network (GAN). In decomposition, both mixing filters and sources are optimized to minimize the reconstruction error of the mixture. The proposed S-D approach achieves a peak signal-to-noise ratio (PSNR) of 18.9 dB and 15.4 dB in image inpainting and completion, outperforming a baseline convolutional neural network with a PSNR of 15.3 dB and 12.2 dB, respectively, and achieves a PSNR of 13.2 dB in source separation together with deconvolution, outperforming a convolutive non-negative matrix factorization (NMF) baseline of 10.1 dB.
APA, Harvard, Vancouver, ISO, and other styles
6

M, Jeba Jenitha, Kani Jesintha D, and Mahalakshmi P. "Noise Adaptive Fuzzy Switching Median Filters for Removing Gaussian Noise." In The International Conference on scientific innovations in Science, Technology, and Management. International Journal of Advanced Trends in Engineering and Management, 2023. http://dx.doi.org/10.59544/ozsc7243/ngcesi23p113.

Full text
Abstract:
Recently, image restoration has come to play a major role in all image processing systems, of which it forms a major part. Medical images such as brain Magnetic Resonance Imaging (MRI), ultrasound images of the liver and kidney, retinal images, and images of the uterus are often affected by various types of noise, such as Gaussian noise and salt-and-pepper noise, and all image restoration techniques attempt to remove them. This paper deals with various noise-removal filters, namely the Mean Filter, Averaging Filter, Median Filter, Adaptive Median Filter, Adaptive Weighted Median Filter, Gabor Filter, and Noise Adaptive Fuzzy Switching Median Filter (NAFSM). Among all the filters, NAFSM removes Gaussian noise better than the others, and the performance of all the filters is compared using metrics such as PSNR (Peak Signal-to-Noise Ratio), MSE (Mean Square Error), NAE (Normalized Absolute Error), Normalized Cross-Correlation (NK), Average Difference (AD), Maximum Difference (MD), SC (Structural Content), and the time elapsed to produce the denoised image.
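Several of the full-reference metrics named in this abstract (MSE, PSNR, NAE, AD, MD) reduce to short array expressions. A minimal sketch under that reading; the `fidelity_metrics` helper and the toy 2×2 images are illustrative, not from the paper:

```python
import numpy as np

def fidelity_metrics(clean, denoised, peak=255.0):
    """Compute a few full-reference image fidelity metrics."""
    c = np.asarray(clean, dtype=np.float64)
    d = np.asarray(denoised, dtype=np.float64)
    mse = np.mean((c - d) ** 2)                       # Mean Square Error
    psnr = 10 * np.log10(peak ** 2 / mse) if mse else float("inf")
    nae = np.sum(np.abs(c - d)) / np.sum(np.abs(c))   # Normalized Absolute Error
    ad = np.mean(c - d)                               # Average Difference
    md = np.max(np.abs(c - d))                        # Maximum Difference
    return {"MSE": mse, "PSNR": psnr, "NAE": nae, "AD": ad, "MD": md}

clean = np.array([[100.0, 100.0], [100.0, 100.0]])
denoised = np.array([[100.0, 104.0], [98.0, 100.0]])
m = fidelity_metrics(clean, denoised)  # MSE 5.0, NAE 0.015, AD -0.5, MD 4.0
```

Lower MSE, NAE, and MD and higher PSNR all indicate a denoised image closer to the reference.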
APA, Harvard, Vancouver, ISO, and other styles
7

Kaur, Jinder, Gurwinder Kaur, and Ashwani Kumar. "An Improved Method to Remove Salt and Pepper Noise in Noisy Images." In International Conference on Women Researchers in Electronics and Computing. AIJR Publisher, 2021. http://dx.doi.org/10.21467/proceedings.114.23.

Full text
Abstract:
In the field of image processing, removing noise from grayscale as well as RGB images is a challenging task. The essential function of a noise-removal algorithm is to eliminate noise from a noisy image. Salt-and-pepper noise (SPN) frequently arises in grayscale and RGB images during capture, acquisition, and transmission over insecure communication channels. In the past, numerous noise-removal methods have been introduced to extract noise from images corrupted by SPN. The proposed work introduces an SPN removal algorithm for grayscale images at both low and high noise densities (10% to 90%). Depending on the conditions of the proposed algorithm, a noisy pixel is reconstructed by the Winsorized mean or by the mean value of all pixels in the processing window except the centre pixel. The proposed algorithm removes noise from an image without degrading its quality. The performance of the proposed algorithm and the modified decision-based unsymmetric trimmed median filter (MDBUTMF) is evaluated on the basis of parameters such as Peak Signal-to-Noise Ratio (PSNR), Mean Square Error (MSE), Image Enhancement Factor (IEF), and Structural Similarity Index Measure (SSIM).
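Decision-based salt-and-pepper removal of the kind described here first flags pixels at the extreme values (0 or 255) and only replaces those, leaving clean pixels untouched. A much-simplified sketch using a 3×3 local median in place of the paper's Winsorized-mean rule; the `median_denoise` helper and toy image are illustrative:

```python
import numpy as np

def median_denoise(img, noisy_values=(0.0, 255.0)):
    """Replace only pixels flagged as salt (255) or pepper (0) with the
    median of their 3x3 neighbourhood; other pixels are left unchanged."""
    img = np.asarray(img, dtype=np.float64)
    padded = np.pad(img, 1, mode="edge")  # replicate borders for edge pixels
    out = img.copy()
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            if img[i, j] in noisy_values:          # decision step
                window = padded[i:i + 3, j:j + 3]  # 3x3 processing window
                out[i, j] = np.median(window)
    return out

# 4x4 gray image corrupted by one salt and one pepper pixel.
img = np.full((4, 4), 120.0)
img[1, 1] = 255.0   # salt
img[2, 3] = 0.0     # pepper
restored = median_denoise(img)  # both corrupted pixels restored to 120
```

Because only flagged pixels are modified, this family of filters preserves detail far better at high noise densities than a plain median filter applied everywhere.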
APA, Harvard, Vancouver, ISO, and other styles
8

Subramanian, Nandhini, ,. Jayakanth Kunhoth, Somaya Al-Maadeed, and Ahmed Bouridane. "Stego-eHealth: An eHealth System for Secured Transfer of Medical Images using Image Steganography." In Qatar University Annual Research Forum & Exhibition. Qatar University Press, 2021. http://dx.doi.org/10.29117/quarfe.2021.0155.

Full text
Abstract:
The COVID pandemic has necessitated virtual and online health care systems to avoid contact. The transfer of sensitive medical information, including chest and lung X-rays, happens through untrusted channels, making it prone to many possible attacks. This paper aims to secure the medical data of patients using image steganography when transferring through untrusted channels. A deep learning method with three parts is proposed: a preprocessing module, an embedding network, and an extraction network. Features from the cover image and the secret image are extracted by the preprocessing module. The merged features from the preprocessing module are used by the embedding network to output the stego image. The stego image is given as input to the extraction network to extract the embedded secret image. Mean Squared Error (MSE) and Peak Signal-to-Noise Ratio (PSNR) are the evaluation metrics used. A higher PSNR value indicates greater security and robustness of the method, and the image results show high imperceptibility. The hiding capacity of the proposed method is 100%, since the cover image and the secret image are of the same size.
APA, Harvard, Vancouver, ISO, and other styles
9

S. Mahdi, Noor, and Ghadah K. AL-Khafaji. "Adaptive Color Image Compression Using ADJPEG and ISUQ of Hierarchical Decomposition Scheme." In 5TH INTERNATIONAL CONFERENCE ON COMMUNICATION ENGINEERING AND COMPUTER SCIENCE (CIC-COCOS'24). Cihan University-Erbil, 2024. http://dx.doi.org/10.24086/cocos2024/paper.1544.

Full text
Abstract:
This paper introduces a lossy color compression system of transform coding (TC) based on the discrete wavelet transform (DWT), the discrete cosine transform (DCT), and quantization schemes to achieve a high compression ratio (CR) while preserving quality. The proposed compression system comprises the following steps: first, separating the image into source and non-source color bands; then uniformly quantizing the source band, decomposing it with a three-level DWT, applying Huffman coding to the approximation sub-band, and compressing the detail sub-bands of each level by iterative scalar uniform quantization (ISUQ); for the non-source bands, an adaptive developed JPEG (ADJPEG) is utilized with the minimize-matrix-size algorithm (MMSA) and two integer keys to reduce the AC coefficients effectively. To test the performance of the suggested compression system, three standard images of size 256×256 pixels were adopted. The suggested technique showed superior performance in terms of reconstructed (decoded) image quality and CR: its CR is between 28 and 32 with a peak signal-to-noise ratio (PSNR) between 39 and 42 dB, whereas the CR of JPEG is between 13 and 16 with a PSNR between 33 and 37 dB.
APA, Harvard, Vancouver, ISO, and other styles
10

Negreiros, Ana Cláudia Souza Vidal de, Gilson Giraldi, Heron Werner, and Ítalo Messias Feliz Santos. "Self-Supervised Image Denoising Methods: an Application in Fetal MRI." In Workshop de Visão Computacional. Sociedade Brasileira de Computação - SBC, 2023. http://dx.doi.org/10.5753/wvc.2023.27546.

Full text
Abstract:
The process of image denoising in magnetic resonance imaging (MRI) is increasingly common and important in medicine. However, state-of-the-art deep learning methods usually require paired images (clean and noisy) to train the models, which poses limitations in practice. This work applied two recent techniques that do not need a clean image to train the models and reached good results in denoising tasks. We applied the NOISE2NOISE (N2N) and NOISE2VOID (N2V) learning approaches and compared their denoising results on a fetal MRI dataset. The results showed that the N2N method outperformed the N2V one, considering the Peak Signal-to-Noise Ratio (PSNR) and Root Mean Squared Error (RMSE) evaluation metrics as well as visual analysis.
APA, Harvard, Vancouver, ISO, and other styles
