Dissertations / Theses on the topic 'Additive White Gaussian Noise (AWGN)'

1

Shu, Li, 1970-. "A power interval perspective on additive white Gaussian noise (AWGN) channels." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/9118.

Abstract:
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2000.
Includes bibliographical references (leaves 215-216).
We present a new perspective on additive white Gaussian noise (AWGN) channels that separates user and channel attributes. Following this separation, various questions concerning achievability and successive decoding can be reformulated as properties of the set of user attributes, which can be determined independently of the actual channel noise. To obtain these properties directly, we introduce a graphical framework called the power diagram. Based on graphical manipulations in this framework, our results on N-user multi-access channels include the following:
1. simplifying the achievability condition to an algorithm requiring O(N ln N) computations;
2. simplifying the check of whether a given rate tuple is decodable with simple successive decoding (to be defined) to an algorithm requiring O(N ln N) computations;
3. developing a technique for power-reduced successive decoding, accompanied by the set of rate tuples for which such a technique is applicable, and an algorithm requiring O(N ln N) computations that checks whether a given rate tuple is decodable with this technique;
4. presenting a class of graphical constructions for splitting any achievable rate tuple into a set of virtual users that allows successive decoding. These constructions deal with rate tuples not on the dominant face in a natural way, whereas previous works have viewed these rate tuples as a somewhat ad hoc extension of the dominant face results;
5. presenting a class of graphical constructions that facilitate successive decoding to any achievable rate tuple using the time-sharing technique, improving the known upper bound on decoding complexity (using this combination of techniques) to 2N - 1.
by Li Shu.
Ph.D.
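To make the setting concrete, the sketch below (not from the thesis; all names and values illustrative) implements the simple successive-decoding feasibility check the abstract refers to: on an N-user AWGN multi-access channel, users are decoded one at a time with not-yet-decoded users treated as noise, and the O(N ln N) cost is dominated by a sort of the users.

```python
import numpy as np

def successive_decoding_feasible(powers, rates, noise_var, order=None):
    """Check whether every user's rate is supported when users are decoded
    one at a time, treating not-yet-decoded users as Gaussian noise."""
    powers = np.asarray(powers, float)
    rates = np.asarray(rates, float)
    if order is None:
        order = np.argsort(powers)[::-1]      # decode strongest first: O(N log N)
    remaining = powers.sum()
    for k in order:
        interference = remaining - powers[k]  # users not yet decoded
        cap_k = 0.5 * np.log2(1.0 + powers[k] / (noise_var + interference))
        if rates[k] > cap_k:
            return False                      # user k cannot be decoded at this stage
        remaining -= powers[k]                # perfect cancellation of decoded user
    return True

# Two users on a unit-variance AWGN channel (illustrative numbers).
print(successive_decoding_feasible([4.0, 2.0], [0.6, 0.5], noise_var=1.0))  # True
```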
2

Argyriou, Andreas. "Probability of symbol error for coherent and non-coherent detection of M-ary frequency-shift keyed (MFSK) signals affected by co-channel interference and additive white Gaussian noise (AWGN) in a fading channel." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2000. http://handle.dtic.mil/100.2/ADA376826.

Abstract:
Thesis (M.S. in Electrical Engineering)--Naval Postgraduate School, March 2000.
Thesis advisor(s): Lebaric, Jovan; Robertson, Clark. Includes bibliographical references (p. 289). Also available online.
3

Čermák, Josef. "Modelování rušení pro xDSL [Interference Modelling for xDSL]." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2008. http://www.nusl.cz/ntk/nusl-217281.

Abstract:
This work focuses on interference modelling for xDSL technologies. First, the xDSL technologies are introduced, and the different kinds of xDSL technology are presented and described. The next part deals with the basic parameters of metallic cable lines, especially the primary and secondary parameters. Nowadays, wider bandwidths are used to achieve higher data transmission rates, but at higher frequencies the line attenuation becomes more severe. To identify the transfer characteristics of lines carrying an xDSL system, mathematical models of transmission lines are applied; these models are treated in the next chapter and compared at the end of the section using their magnitude and phase characteristics. The main aim of the work is to describe the different impairments that influence the efficiency of xDSL systems. First, the interference sources arising inside the cable are explained in depth: near-end crosstalk (NEXT), far-end crosstalk (FEXT), and additive white Gaussian noise (AWGN). External interference is covered next: radio-frequency interference (RFI) and impulse noise. A further goal of this thesis is the design of a workstation for testing the spectral features and the efficiency of xDSL systems. The work also presents and describes a GUI application, an instrument for selecting or entering the parameters of the resulting interference. The last chapter describes a measurement and shows the characteristics recorded on the ADSL tester and oscilloscope.
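For orientation, here is a minimal sketch of the simplified NEXT and FEXT power coupling models commonly used in xDSL noise simulation. The exponents and constants below are the usual ANSI-style 49-disturber values and are an assumption of this sketch, not figures quoted from the thesis.

```python
import numpy as np

K_NEXT = 8.818e-14   # illustrative 49-disturber NEXT coupling constant
K_FEXT = 7.999e-20   # illustrative FEXT coupling constant, per foot of shared length

def next_psd(disturber_psd, f):
    """Received NEXT PSD in W/Hz: coupling grows as f^1.5."""
    return disturber_psd * K_NEXT * f**1.5

def fext_psd(disturber_psd, f, channel_gain2, length_ft):
    """Received FEXT PSD: attenuated by the line, grows as f^2 and with length."""
    return disturber_psd * channel_gain2 * K_FEXT * length_ft * f**2

f = np.linspace(1e4, 1.1e6, 512)        # ADSL-like band, Hz
psd = 1e-7                              # flat -40 dBm/Hz disturber, in W/Hz
h2 = np.exp(-2e-3 * np.sqrt(f))         # toy magnitude-squared line response
print(next_psd(psd, f[-1]), fext_psd(psd, f[-1], h2[-1], 3000.0))
```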
4

Novotný, František. "Analýza a modelování přeslechů [Analysis and Modelling of Crosstalk]." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2013. http://www.nusl.cz/ntk/nusl-220322.

Abstract:
The thesis concerns interference modelling for xDSL technologies and Ethernet. The introduction describes the origin of the crosstalk that arises during system operation. Since crosstalk depends on the physical properties of the lines, the next section describes the primary and secondary parameters of a homogeneous line and their modelling. To achieve higher data rates on a metallic line, systems with a larger frequency spectrum are applied, resulting in greater line attenuation. This issue, and the determination of transmission-system characteristics, is the subject of the mathematical models, which are divided according to whether they model the primary or the secondary parameters. The main goal of this work is to describe the effects that influence the performance of data transfer over xDSL and Ethernet technology, focusing on internal and external disturbances acting on the cable lines: near-end and far-end crosstalk, additive white Gaussian noise, radio-frequency interference (RFI), and impulse noise. The following part of the thesis deals with the properties of xDSL technologies, specifically ADSL2+ and VDSL2, and of Ethernet. Another aim is to design applications that enable testing the performance of xDSL and Ethernet transmission systems against custom interference simulations. The conclusion describes the design and implementation of laboratory experiments for measuring the efficiency and spectral properties of xDSL. The proposed laboratory protocols, including the measured waveforms, are annexed to this thesis.
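Both of the theses above inject AWGN into the line signal; as a small generic illustration (a textbook procedure, not code from either thesis), the following adds white Gaussian noise to a signal at a requested SNR:

```python
import numpy as np

def add_awgn(signal, snr_db, rng=None):
    """Return signal plus white Gaussian noise at the requested SNR in dB."""
    if rng is None:
        rng = np.random.default_rng()
    p_signal = np.mean(np.abs(signal) ** 2)
    p_noise = p_signal / 10 ** (snr_db / 10)
    return signal + rng.normal(scale=np.sqrt(p_noise), size=signal.shape)

x = np.sin(2 * np.pi * 0.01 * np.arange(1000))             # toy narrowband signal
y = add_awgn(x, snr_db=20)
print(10 * np.log10(np.mean(x**2) / np.mean((y - x)**2)))  # measured SNR, ~20 dB
```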
5

Erdogan, Ahmet Yasin. "Analysis of the effects of phase noise and frequency offset in orthogonal frequency division multiplexing (OFDM) systems." Thesis, Monterey, California: Naval Postgraduate School, 2004. http://hdl.handle.net/10945/1712.

Abstract:
Approved for public release; distribution is unlimited.
Orthogonal frequency division multiplexing (OFDM) is being used successfully in numerous applications. It was chosen for the IEEE 802.11a wireless local area network (WLAN) standard, and it is being considered for fourth-generation mobile communication systems. Along with its many attractive features, OFDM has some principal drawbacks, of which sensitivity to frequency errors is the most dominant. In this thesis, the effects of frequency offset and phase noise on OFDM-based communication systems are investigated under a variety of channel conditions covering both indoor and outdoor environments. Simulation results for the OFDM system over these channels are presented.
Lieutenant Junior Grade, Turkish Navy
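The dominant impairment the thesis studies is easy to reproduce in a few lines: a carrier frequency offset of eps subcarrier spacings multiplies the time-domain OFDM symbol by a phase ramp, breaking subcarrier orthogonality and producing inter-carrier interference. The sketch below is the generic model, with parameters assumed for illustration rather than taken from the thesis.

```python
import numpy as np

N = 64                                           # subcarriers, 802.11a-like
rng = np.random.default_rng(0)
X = rng.choice(np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2), N)  # QPSK symbols

x = np.fft.ifft(X) * np.sqrt(N)                  # time-domain OFDM symbol
eps = 0.05                                       # offset, in subcarrier spacings
y = x * np.exp(2j * np.pi * eps * np.arange(N) / N)   # CFO acts as a phase ramp

Y = np.fft.fft(y) / np.sqrt(N)                   # demodulate
distortion = Y - X                               # common rotation plus ICI
sdr = 10 * np.log10(np.mean(np.abs(X)**2) / np.mean(np.abs(distortion)**2))
print(f"signal-to-distortion ratio at eps={eps}: {sdr:.1f} dB")
```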
6

Chen, Brian. "Efficient communication over additive white Gaussian noise and intersymbol interference channels using chaotic sequences." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/40151.

7

Huang, Weizheng. "Investigation on Digital Fountain Codes over Erasure Channels and Additive White Gaussian Noise Channels." Ohio University / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1336067205.

8

Ni, Li. "Non-equiprobable multi-level coding for the additive white Gaussian noise channel with Tikhonov phase error." Online access for everyone, 2005. http://www.dissertations.wsu.edu/Dissertations/Fall2005/l%5Fni%5F120905.pdf.

9

DeRieux, David A. "Investigation of spectral-based techniques for classification of wideband transient signals in additive white Gaussian noise." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1994. http://handle.dtic.mil/100.2/ADA282954.

Abstract:
Thesis (M.S. in Electrical Engineering)--Naval Postgraduate School, March 1994.
Thesis advisor(s): Ralph Hippenstiel, Monique P. Fargues. "March 1994." Includes bibliographical references. Also available online.
10

Ng, Jimmy Hon-yuen. "Estimation of error rates and fade distributions on a Rayleigh fading channel with additive white Gaussian noise." Thesis, University of British Columbia, 1986. http://hdl.handle.net/2429/26318.

Abstract:
Several characteristics of the Rayleigh fading channel are examined. A digital Rayleigh fading simulator is used to generate the (fading) signal envelope from which various statistics are derived. Based on the simulation results, a simple model is proposed to estimate the block error rate of a block of N data bits transmitted over the Rayleigh fading channel in the presence of additive white Gaussian noise. This model gives an average estimation error of about 4% over the range of blocksizes N = 63, 127, 255, 511, 1023, 2047 (bits), average signal-to-noise ratios γ₀ = 5 to 35 dB, and fading frequencies f_D = 10 to 90 Hz, corresponding to vehicle speeds of 8 to 71 MPH at a radio carrier frequency of 850 MHz. A second, somewhat more complex model for estimating the block error rate is found to yield a lower average estimation error of 2.4% over the same set of simulated data. The probability distributions of the fade rate and the fade duration are also examined. Empirical models are derived for estimating the probability mass function of the fade rate and the probability density function of the fade duration. These empirical models allow fairly accurate estimates without the need for costly and time-consuming simulations. The probability of m-bit errors in an N-bit block is an important parameter in the design of error-correcting codes for use on the mobile radio channel. However, such probabilities are difficult to determine without extensive simulation or field trials. An approach to estimate them empirically is proposed.
Faculty of Applied Science, Department of Electrical and Computer Engineering (Graduate).
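A toy version of the experiment described above: simulate BPSK blocks over a flat Rayleigh-faded AWGN channel and count block errors. This sketch assumes one independent fade per bit, whereas the thesis uses a proper Doppler-shaped fading simulator.

```python
import numpy as np

rng = np.random.default_rng(1)

def block_error_rate(N=127, snr_db=15.0, blocks=20_000):
    """BLER of N-bit BPSK blocks over independent flat Rayleigh fading + AWGN."""
    snr = 10 ** (snr_db / 10)                          # per-bit SNR, linear
    sigma = np.sqrt(0.5 / snr)                         # noise std for Eb/N0 = snr
    fails = 0
    for _ in range(blocks):
        bits = rng.integers(0, 2, N)
        s = 1.0 - 2.0 * bits                           # BPSK mapping: 0 -> +1, 1 -> -1
        h = rng.rayleigh(scale=np.sqrt(0.5), size=N)   # unit mean-square envelope
        r = h * s + rng.normal(scale=sigma, size=N)
        fails += np.any((r < 0) != (bits == 1))        # any bit error fails the block
    return fails / blocks

print(block_error_rate())   # compare against a thesis-style empirical model
```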
11

Fougias, Nikolaos. "High speed network access to the last-mile using fixed broadband wireless." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2004. http://library.nps.navy.mil/uhtbin/hyperion/04Mar%5FFougias.pdf.

Abstract:
Thesis (M.S. in Information Technology Management and M.S. in Computer Science)--Naval Postgraduate School, March 2004.
Thesis advisor(s): Burt Lundy. Includes bibliographical references (p. 99-100). Also available online.
12

Barkat, Braham. "Design, estimation and performance of time-frequency distributions." Thesis, Queensland University of Technology, 2000.

13

Sucic, Victor. "Parameters selection for optimising time-frequency distributions and measurements of time-frequency characteristics of nonstationary signals." Thesis, Queensland University of Technology, 2004. https://eprints.qut.edu.au/15834/1/Victor_Sucic_Thesis.pdf.

Abstract:
The quadratic class of time-frequency distributions (TFDs) forms a set of tools for effectively extracting important information from a nonstationary signal. To determine which TFD best represents a given signal, it is common practice to visually compare different TFDs' time-frequency plots and select as best the TFD with the most appealing plot. This visual comparison is not only subjective but also difficult and unreliable, especially when signal components are closely spaced in the time-frequency plane. To compare TFDs objectively, a quantitative performance measure should be used. Several measures of concentration/complexity have been proposed in the literature. However, those measures, having been derived under certain theoretical assumptions about TFDs, are generally not suitable for the TFD selection problem encountered in practical applications. The absence of practically valuable measures for comparing TFDs' resolution, and hence of methodologies for selecting the optimal TFD for a signal, has significantly limited the use of time-frequency tools in practice. In this thesis, by extending and complementing the concept of spectral resolution to the case of nonstationary signals, and by redefining the set of TFD properties desirable for practical applications, we define an objective measure to quantify the quality of TFDs. This local measure of TFD resolution performance combines all important time-varying signal parameters with the TFD characteristics that influence resolution. Methodologies for automatically selecting the TFD that best suits a given signal, including real-life signals, are also developed. Optimising the resolution performance of TFDs, by modifying their kernel filter parameters to enhance their resolution capabilities, is an important prerequisite for satisfying any additional application-specific requirements. The resolution performance measure and the accompanying TFD comparison criteria allow improved procedures for designing high-resolution quadratic TFDs for practical time-frequency analysis. The separable-kernel TFDs designed in this way are shown to best resolve closely spaced components for the various classes of synthetic and real-life signals that we have analysed.
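For flavour, here is one widely used objective concentration measure from this family, the Rényi entropy of a time-frequency distribution (here a spectrogram): lower entropy means a more concentrated representation. This is a generic measure, not the specific resolution measure proposed in the thesis.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
# Two closely spaced linear-FM components, a typical hard case.
x = np.cos(2*np.pi*(100*t + 50*t**2)) + np.cos(2*np.pi*(130*t + 50*t**2))

def renyi_entropy(tfd, alpha=3):
    """Order-alpha Rényi entropy (bits) of a non-negative, normalised TFD."""
    p = tfd / tfd.sum()
    return np.log2(np.sum(p**alpha)) / (1 - alpha)

for nperseg in (64, 128, 256):     # window length trades time vs frequency resolution
    _, _, S = spectrogram(x, fs=fs, nperseg=nperseg, noverlap=nperseg // 2)
    print(nperseg, round(renyi_entropy(S), 2))   # prefer the lowest-entropy setting
```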
14

Σινάνης, Σπύρος. "Εκτίμηση συχνότητας απλών ημιτονοειδών σημάτων υπό την παρουσία λευκού γκαουσιανού θορύβου [Frequency estimation of single sinusoidal signals in the presence of white Gaussian noise]." Thesis, 2012. http://hdl.handle.net/10889/5391.

Abstract:
This thesis analyses and estimates the frequency of single sinusoidal signals in additive white Gaussian noise (AWGN). Parameter estimation of sinusoids is a classical problem and still an important research topic because of its numerous applications in disciplines such as control theory, signal processing, digital communications, and biomedical engineering. Estimating the frequencies is usually the crucial step of the problem, for two principal reasons: first, the frequencies must be estimated because they enter the received data sequence nonlinearly; second, once the frequencies have been determined, the remaining parameters, such as amplitude and phase, can be computed straightforwardly. We first introduce the basic concepts on which parameter estimation of sinusoidal signals is built and then present several estimation algorithms, describing their construction and analysing their performance. Finally, we provide computer simulations for each algorithm separately and compare their performance. From these comparisons we draw useful conclusions on setting each algorithm's parameters and on the suitability of each algorithm for specific noise conditions.
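One standard estimator featuring in such comparisons is the periodogram maximiser refined by local interpolation; the sketch below (a generic method with illustrative parameters, not code from the thesis) estimates a single tone's frequency in AWGN:

```python
import numpy as np

def estimate_frequency(x, fs):
    """Single-tone frequency estimate: windowed FFT peak + parabolic refinement."""
    N = len(x)
    X = np.abs(np.fft.rfft(x * np.hanning(N)))
    k = int(np.argmax(X[1:-1])) + 1                 # coarse peak bin
    a, b, c = X[k-1], X[k], X[k+1]
    delta = 0.5 * (a - c) / (a - 2*b + c)           # sub-bin offset from parabola fit
    return (k + delta) * fs / N

rng = np.random.default_rng(0)
fs, f0, N, snr_db = 8000.0, 1234.5, 1024, 10
x = np.cos(2 * np.pi * f0 * np.arange(N) / fs)
x = x + rng.normal(scale=np.sqrt(0.5 / 10**(snr_db / 10)), size=N)  # AWGN at 10 dB
print(estimate_frequency(x, fs))                    # close to 1234.5 Hz
```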
15

Deshpande, Naveen. "Constellation Constrained Capacity For Two-User Broadcast Channels." Thesis, 2010. https://etd.iisc.ac.in/handle/2005/1281.

Abstract:
A broadcast channel is a communication path between a single source and two or more receivers or users. The source intends to communicate independent information to the users. A particular case of interest is the Gaussian broadcast channel (GBC), where the noise at each user is additive white Gaussian noise (AWGN). The capacity region of the GBC is well known, and the capacity-achieving input to the channel is Gaussian distributed. The capacity region of another special case of the GBC, namely the fading broadcast channel (FBC), was given in [Li and Goldsmith, 2001], where it was shown that superposition of Gaussian codes is optimal for the FBC (treated as a vector degraded broadcast channel). The capacity region obtained when the input to the channel is distributed uniformly over a finite alphabet (constellation) is termed the constellation-constrained (CC) capacity region [Biglieri 2005]. In this thesis the CC capacity regions for the two-user GBC and the FBC are obtained. In the case of the GBC, the idea of superposition coding with input from a finite alphabet and CC capacity was explored in [Hupert and Bossert, 2007], but with some limitations. When the participating individual signal sets are nearly equal, i.e., for a given total average power constraint P the rate reward α (also the power-sharing parameter) is approximately equal to 0.5, we show via simulation that rotating one of the signal sets by an appropriate angle maximally enlarges the CC capacity region. We analytically derive an expression for the optimal angle of rotation. In the case of the FBC, a heuristic power allocation procedure called the finite-constellation power allocation procedure is provided, through which it is shown (via simulation) that the ergodic CC capacity region thus obtained completely subsumes the ergodic CC capacity region obtained by allocating power using the procedure given in [Li and Goldsmith, 2001]. It is shown through simulations that rotating one of the signal sets by an optimal angle (obtained by trial and error) for a given α maximally enlarges the ergodic CC capacity region when finite-constellation power allocation is used. An expression for determining the optimal angle of rotation for a given fading state is obtained, and the effect of rotation is greatest around the region corresponding to α = 0.5. For both the GBC and the FBC, superposition coding is performed at the transmitter and successive decoding is carried out at the receivers.
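The quantity at the heart of such regions is the constellation-constrained mutual information of a uniformly used finite alphabet over an AWGN channel. Below is a Monte Carlo sketch of the textbook single-user, real-valued computation (an assumption of this illustration, not the thesis's two-user derivation):

```python
import numpy as np

def cc_mutual_info(points, snr_db, n=200_000, rng=np.random.default_rng(2)):
    """Estimate I(X;Y) in bits for X uniform over `points`, Y = X + N(0, sigma^2)."""
    s = np.asarray(points, float)
    s = s / np.sqrt(np.mean(s**2))             # normalise to unit average power
    sigma2 = 10 ** (-snr_db / 10)              # SNR = 1 / sigma^2
    x = rng.choice(s, size=n)
    y = x + rng.normal(scale=np.sqrt(sigma2), size=n)
    # I = log2(M) - E[ log2 sum_j exp(-(|y - s_j|^2 - |y - x|^2) / (2 sigma^2)) ]
    diff = (y[:, None] - s[None, :]) ** 2 - (y - x)[:, None] ** 2
    log_sum = np.log2(np.exp(-diff / (2 * sigma2)).sum(axis=1))
    return np.log2(len(s)) - log_sum.mean()

print(cc_mutual_info([-3, -1, 1, 3], snr_db=10))   # 4-PAM; tends to 2 bits as SNR grows
```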
16

Oliveira, João Manuel Barbosa de. "Receiver design for nonlinearly distorted OFDM signals : applications in radio-over-fiber systems." Doctoral thesis, 2011. http://hdl.handle.net/10216/63386.

17

Antony, Daniel Sanju. "Performance Analysis of Non Local Means Algorithm using Hardware Accelerators." Thesis, 2016. http://etd.iisc.ac.in/handle/2005/2932.

Abstract:
Image denoising forms an integral part of image processing. It is used as a standalone algorithm for improving the quality of images obtained through a camera, as well as a preliminary stage for image processing applications like face recognition, super resolution, etc. Non-Local Means (NL-Means) and the Bilateral Filter are two computationally complex denoising algorithms that can provide good denoising results. Due to their computational complexity, the real-time applications associated with these filters are limited. In this thesis, we propose the use of hardware accelerators such as GPUs (Graphics Processing Units) and FPGAs (Field Programmable Gate Arrays) to speed up filter execution and implement the filters efficiently. The GPU-based implementation of these filters is carried out using the Open Computing Language (OpenCL). The basic objective of this research is to perform high-speed denoising without compromising on quality. Here we implement a basic NL-Means filter, a Fast NL-Means filter, and a Bilateral Filter using Gauss polynomial decomposition on the GPU. We also propose a modification to the existing NL-Means algorithm and the Gauss polynomial Bilateral Filter: instead of the Gaussian spatial kernel used in the standard algorithm, a box spatial kernel is introduced to improve the speed of execution. This research work is a step towards making real-time implementation of these algorithms possible. Results show that the NL-Means implementation on GPU using OpenCL is about 25x faster than a regular CPU-based implementation for larger images (1024x1024). For Fast NL-Means, the GPU-based implementation is about 90x faster than the CPU implementation. Even with the improved execution time, embedded-system application of NL-Means is limited by the power and thermal restrictions of the GPU device. In order to create a faster, lower-power implementation, we have implemented the algorithm on an FPGA. FPGAs are reconfigurable devices and enable us to create a custom architecture for parallel execution of the algorithm. The execution time for smaller images (256x256) is about 200x faster than the CPU implementation and about 25x faster than GPU execution. Moreover, the power requirement of the FPGA design (0.53 W) is much less than that of the CPU (30 W) and GPU (200 W).
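A compact reference version of the modification described above, NL-Means with a box spatial kernel, where every pixel in the search window is weighted by patch similarity alone. This plain NumPy sketch is only a functional illustration, far from the optimised GPU/FPGA kernels of the thesis.

```python
import numpy as np

def nl_means_box(img, patch=3, search=7, h=0.1):
    """Naive NL-Means with a box spatial kernel: every pixel in the search
    window is weighted only by patch similarity exp(-d2 / h^2)."""
    pr, sr = patch // 2, search // 2
    pad = pr + sr
    p = np.pad(img, pad, mode='reflect')
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            ci, cj = i + pad, j + pad
            ref = p[ci-pr:ci+pr+1, cj-pr:cj+pr+1]
            wsum = vsum = 0.0
            for di in range(-sr, sr + 1):
                for dj in range(-sr, sr + 1):
                    cand = p[ci+di-pr:ci+di+pr+1, cj+dj-pr:cj+dj+pr+1]
                    d2 = np.mean((ref - cand) ** 2)
                    w = np.exp(-d2 / h**2)     # no spatial-distance term: box kernel
                    wsum += w
                    vsum += w * p[ci+di, cj+dj]
            out[i, j] = vsum / wsum
    return out

rng = np.random.default_rng(3)
clean = np.zeros((32, 32))
clean[8:24, 8:24] = 1.0
noisy = clean + rng.normal(scale=0.1, size=clean.shape)   # AWGN observation
print(np.mean((nl_means_box(noisy) - clean) ** 2))        # MSE after denoising
```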
18

Dang, Rajdeep Singh. "Experimental Studies On A New Class Of Combinatorial LDPC Codes." Thesis, 2007. https://etd.iisc.ac.in/handle/2005/523.

Abstract:
We implement a package for the construction of a new class of Low Density Parity Check (LDPC) codes based on a new random high-girth graph construction technique, and study the performance of the codes so constructed on both the additive white Gaussian noise (AWGN) channel and the binary erasure channel (BEC). Our codes are "near regular", meaning that the left degree of any node in the constructed Tanner graph varies by at most 1 from the average left degree, and likewise the right degree. The simulations for rate-1/2 codes indicate that the codes perform better than both the regular Progressive Edge Growth (PEG) codes, which are constructed using a similar random technique, and the MacKay random codes. For high rates, the ARG (Almost Regular high Girth) codes perform better than the PEG codes at low to medium SNRs, but the PEG codes seem to do better at high SNRs. We have tried to track both near-codewords and small-weight codewords for these codes to examine the performance at high rates. For the binary erasure channel, the performance of the ARG codes is better than that of the PEG codes. We have also proposed a modification of the sum-product decoding algorithm, in which a quantity called the "node credibility" is used to appropriately process messages to check nodes. This technique substantially reduces the error rates at signal-to-noise ratios of 2.5 dB and beyond for the codes experimented on. The average number of iterations needed to achieve this improved performance is practically the same as for the traditional sum-product algorithm.
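For reference, the AWGN front end that any such decoder shares: channel log-likelihood ratios for BPSK, plus the standard sum-product ("tanh rule") check-node update. These are the textbook equations, without the thesis's node-credibility modification.

```python
import numpy as np

def channel_llrs(y, sigma2):
    """LLRs for BPSK (+1 encodes bit 0) received over AWGN with variance sigma2."""
    return 2.0 * y / sigma2

def check_node_update(incoming):
    """Outgoing LLR on each edge of a check node: tanh-rule combination
    of all the *other* incoming edge LLRs."""
    t = np.tanh(np.asarray(incoming, float) / 2.0)
    out = np.empty_like(t)
    for i in range(len(t)):
        prod = np.prod(np.delete(t, i))
        out[i] = 2.0 * np.arctanh(np.clip(prod, -0.999999, 0.999999))
    return out

rng = np.random.default_rng(4)
bits = np.array([0, 1, 1])                         # satisfies the parity check
y = (1.0 - 2.0 * bits) + rng.normal(scale=0.8, size=3)
print(check_node_update(channel_llrs(y, 0.8**2)))  # extrinsic LLRs at one check node
```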
19

Lam, Carson Y. H. "Multiple copy combining schemes for the additive white Gaussian noise channel." Thesis, 1992. http://hdl.handle.net/2429/3366.

Abstract:
A generalized version of a memory automatic repeat request (ARQ) scheme [15-22] using hard, soft, and completely soft detection is examined. The communication channel is modeled as an additive white Gaussian noise (AWGN) channel. The following six combining schemes are considered. In Scheme 1, error detection is performed on each of the n received copies. In Scheme 2, all n copies are combined and error detection is performed on the resulting packet. Scheme 3 uses Scheme 1 followed by Scheme 2 (if decoding using Scheme 1 is not successful). In Scheme 4, each incoming packet is combined with the copies received so far, and the resulting combined packet is checked for errors. Scheme 5 uses Scheme 1 followed by Scheme 4. The last scheme, Scheme 6, attempts decoding on up to all 2^n - 1 possible combinations of the received copies. For an arbitrary detector quantization, analytic expressions for the retransmission probability P_F of Scheme 3 are derived, and numerical procedures for evaluating the P_F of Schemes 4, 5, and 6 are described. Numerical examples show the dependence of P_F on various parameters, such as the signal-to-noise ratio (SNR), number of transmissions n, number of quantization levels M, and packet length L. Although Scheme 6 has the lowest P_F, its decoding complexity increases exponentially with n. The six combining schemes are applied to the Weldon ARQ scheme [13,14,22]. It is found that the throughput T is generally improved using multiple-copy combining schemes. The throughput can be further increased by using a soft detector. For block error rates BKER > 0.5 and soft detectors, the throughputs of Schemes 2 to 6 are typically higher than that of an ideal selective-repeat ARQ system with no packet combining. In general, Scheme 3 appears to be a good choice among the six schemes, with good performance and a simple decoder structure.
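A toy experiment in the spirit of Scheme 2: average the soft values of n received copies of a BPSK packet over AWGN and watch the packet failure probability fall with n. Error detection is idealised here as "any bit wrong"; the parameters are illustrative, not the thesis's.

```python
import numpy as np

rng = np.random.default_rng(5)

def packet_failure(n_copies, L=128, snr_db=2.0, packets=20_000):
    """P(combined packet still contains bit errors) after soft-combining n copies."""
    sigma = np.sqrt(0.5 / 10 ** (snr_db / 10))       # noise std for Eb/N0 = snr_db
    fails = 0
    for _ in range(packets):
        bits = rng.integers(0, 2, L)
        s = 1.0 - 2.0 * bits                         # BPSK mapping
        copies = s + rng.normal(scale=sigma, size=(n_copies, L))
        combined = copies.mean(axis=0)               # Scheme 2 style soft combining
        fails += np.any((combined < 0) != (bits == 1))
    return fails / packets

for n in (1, 2, 3):
    print(n, packet_failure(n))                      # failure probability drops with n
```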
20

Lai, Ying-Chun (賴穎群). "A High Performance Additive White Gaussian Noise Generator Using the Wallace Method." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/08665437379888732157.

Abstract:
Master's thesis, National Tsing Hua University, Department of Electrical Engineering, ROC academic year 94 (2005-2006).
Combining the Box-Muller method, the Central Limit Theorem, and the Wallace method, a hardware white Gaussian noise generator (WGNG) is proposed to simulate the noise effects that appear in communication channels, and is synthesized in a 0.18 um CMOS process. Passing the chi-square and Kolmogorov-Smirnov (K-S) statistical tests, the proposed noise generator can generate 666.667 million high-quality Gaussian random variables per second. Unlike existing methods that require complex calculations, the proposed design requires only additions, subtractions, and shift operations in its major part. Because it uses only simple operations, it easily achieves high performance. In addition, the proposed architecture can be applied not only to generate additive white Gaussian noise (AWGN) but also to generate random variables with other distributions, such as exponential distributions.
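Of the three ingredients named, the Box-Muller transform is the simplest to sketch in software; the Wallace method instead recycles a pool of already-Gaussian values with cheap orthogonal transforms, which is what makes it attractive in hardware. The sketch below is the textbook Box-Muller step, not the thesis's hardware design.

```python
import numpy as np

def box_muller(n, rng=np.random.default_rng(6)):
    """Generate n standard Gaussian samples from pairs of uniforms."""
    u1 = rng.random(n)
    u2 = rng.random(n)
    r = np.sqrt(-2.0 * np.log(1.0 - u1))       # 1 - u1 lies in (0, 1], so log is safe
    return r * np.cos(2.0 * np.pi * u2)        # r * sin(...) gives a second sample

z = box_muller(1_000_000)
print(z.mean(), z.var())                       # approximately 0 and 1
```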
21

"Efficient communication over additive white Gaussian noise and intersymbol interference channels using chaotic sequences." Research Laboratory of Electronics, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/4155.

Abstract:
Brian Chen.
Also issued as Thesis (M. S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1996.
Includes bibliographical references (p. 103).
Supported in part by the Dept. of the Navy, Office of the Chief of Naval Research (N0014-93-1-0686); in part by the Advanced Research Projects Agency's RASSP program (N00014-95-1-0834); and in part by a National Defense Science and Engineering Graduate Fellowship.
22

Kishan, Harini. "On Maximizing The Performance Of The Bilateral Filter For Image Denoising." Thesis, 2015. https://etd.iisc.ac.in/handle/2005/2638.

Abstract:
We address the problem of image denoising for additive white Gaussian noise (AWGN), Poisson noise, and Chi-squared noise scenarios. Thermal noise in the electronic circuitry of camera hardware can be modeled as AWGN. Poisson noise is used to model the randomness associated with photon counting during image acquisition. Chi-squared noise statistics are appropriate in imaging modalities such as Magnetic Resonance Imaging (MRI). AWGN is additive, while Poisson noise is neither additive nor multiplicative. Although Chi-squared noise is derived from AWGN statistics, it is non-additive. Mean-square error (MSE) is the most widely used metric to quantify denoising performance. In parametric denoising approaches, the optimal parameters of the denoising function are chosen by employing a minimum mean-square-error (MMSE) criterion. However, the dependence of MSE on the noise-free signal makes MSE computation infeasible in practical scenarios. We circumvent the problem by adopting an MSE estimation approach. The ground-truth-independent estimates of MSE are Stein's unbiased risk estimate (SURE), the Poisson unbiased risk estimate (PURE), and the Chi-square unbiased risk estimate (CURE) for the AWGN, Poisson, and Chi-square noise models, respectively. The denoising function is optimized to achieve maximum noise suppression by minimizing the MSE estimates. We have chosen the bilateral filter as the denoising function. The bilateral filter is a nonlinear edge-preserving smoother whose performance is governed by the choice of its parameters, which can be optimized to minimize the MSE or its estimate. However, in practical scenarios, MSE cannot be computed because the noise-free image is inaccessible. We derive SURE, PURE, and CURE in the context of bilateral filtering and compute the parameters of the bilateral filter that yield the minimum cost (SURE/PURE/CURE). On processing the noisy input with a bilateral filter whose optimal parameters are chosen by minimizing the MSE estimates (SURE/PURE/CURE), we obtain the estimate closest to the ground truth. We denote the bilateral filter with optimal parameters as the SURE-optimal bilateral filter (SOBF), the PURE-optimal bilateral filter (POBF), and the CURE-optimal bilateral filter (COBF) for the AWGN, Poisson, and Chi-squared noise scenarios, respectively. In addition to the globally optimal bilateral filters (SOBF and POBF), we propose spatially adaptive bilateral filter variants, namely the SURE-optimal patch-based bilateral filter (SPBF) and the PURE-optimal patch-based bilateral filter (PPBF). SPBF and PPBF yield significant improvements in performance and preserve edges better when compared with their globally optimal counterparts, SOBF and POBF, respectively. We also propose the SURE-optimal multiresolution bilateral filter (SMBF), where we couple SOBF with wavelet thresholding. For Poisson noise suppression, we propose the PURE-optimal multiresolution bilateral filter (PMBF), which is the Poisson counterpart of SMBF. We compare the performance of SMBF and PMBF with state-of-the-art denoising algorithms for AWGN and Poisson noise, respectively. The proposed multiresolution bilateral filtering techniques yield denoising performance that is competitive with that of the state-of-the-art techniques.
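The parameter-selection idea is easiest to see in the AWGN case: SURE replaces the unobservable MSE with a surrogate computable from the noisy data alone. The sketch below applies it to a simple soft-threshold denoiser, whose divergence has a closed form; for a bilateral filter the divergence would be estimated differently (e.g., by Monte Carlo), so this generic sketch is not the thesis's derivation.

```python
import numpy as np

rng = np.random.default_rng(7)
n, sigma = 4096, 0.5
x = np.where(rng.random(n) < 0.1, rng.normal(0.0, 3.0, n), 0.0)  # sparse clean signal
y = x + rng.normal(0.0, sigma, n)                                # AWGN observation

def soft(y, t):
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

def sure(y, t, sigma):
    """SURE for soft thresholding: ||y - f(y)||^2/n - sigma^2 + 2 sigma^2 div(f)/n,
    where div(f) counts the coordinates with |y_i| > t."""
    f = soft(y, t)
    div = np.count_nonzero(np.abs(y) > t)
    return np.mean((y - f) ** 2) - sigma**2 + 2.0 * sigma**2 * div / len(y)

ts = np.linspace(0.0, 2.0, 81)
t_sure = ts[np.argmin([sure(y, t, sigma) for t in ts])]              # no ground truth
t_mse = ts[np.argmin([np.mean((soft(y, t) - x) ** 2) for t in ts])]  # oracle choice
print(t_sure, t_mse)                                                 # typically very close
```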