To see the other types of publications on this topic, follow the link: Sparse signal.

Dissertations / Theses on the topic 'Sparse signal'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Sparse signal.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Tan, Xing. "Bayesian sparse signal recovery." [Gainesville, Fla.] : University of Florida, 2009. http://purl.fcla.edu/fcla/etd/UFE0041176.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Skretting, Karl. "Sparse Signal Representation using Overlapping Frames." Doctoral thesis, Norwegian University of Science and Technology, Faculty of Information Technology, Mathematics and Electrical Engineering, 2002. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-102.

Full text
Abstract:

Signal expansions using frames may be considered as generalizations of signal representations based on transforms and filter banks. Frames for sparse signal representations may be designed using an iterative method with two main steps: (1) frame vector selection and expansion coefficient determination for the signals in a training set (selected to be representative of the signals for which compact representations are desired), using the frame designed in the previous iteration; (2) update of the frame vectors with the objective of improving the representation of step (1). In this thesis we solve step (2) of the general frame design problem using the compact notation of linear algebra.

This makes the solution both conceptually and computationally simple, especially for the non-block-oriented frames (overlapping frames, for short), which may be viewed as generalizations of critically sampled filter banks. Also, the solution is more general than those presented earlier, facilitating the imposition of constraints, such as symmetry, on the designed frame vectors. We also take a closer look at step (1) of the design method. Some of the available vector selection algorithms are reviewed, and adaptations to some of them are given. These adaptations make the algorithms better suited both for the frame design method and for the sparse signal representation problem, for block-oriented as well as overlapping frames.
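To make the two-step design loop concrete, here is a minimal Python sketch of step (2) for a block-oriented frame, solved in closed form as a least-squares problem. This is in the spirit of the method of optimal directions rather than Skretting's exact formulation; all sizes and the crude thresholding used for step (1) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, K, L, s = 16, 32, 500, 3          # signal length, frame size, training signals, sparsity

X = rng.standard_normal((n, L))      # training set, one signal per column
F = rng.standard_normal((n, K))      # initial frame, one frame vector per column
F /= np.linalg.norm(F, axis=0)

for _ in range(20):
    # step (1): coefficient determination, here a crude per-signal thresholding
    W = F.T @ X
    cutoff = np.sort(np.abs(W), axis=0)[-s]      # s-th largest magnitude per signal
    W[np.abs(W) < cutoff] = 0.0
    # step (2): frame update as a closed-form least-squares solution
    F = X @ W.T @ np.linalg.pinv(W @ W.T)
    F /= np.linalg.norm(F, axis=0) + 1e-12       # renormalize the frame vectors
```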

The performance of the improved frame design method is demonstrated in extensive experiments. The sparse representation capabilities are illustrated for both one-dimensional and two-dimensional signals, and in both cases the new possibilities in frame design give better results.

Also, a new method for texture classification, denoted the Frame Texture Classification Method (FTCM), is presented. The main idea is that a frame trained to make sparse representations of a certain class of signals is a model for that signal class. The FTCM is applied to nine test images, yielding excellent overall performance: for many test images the number of wrongly classified pixels is more than halved in comparison to the state-of-the-art texture classification methods presented in [59].

Finally, frames are analyzed from a practical viewpoint rather than from a mathematical-theoretic perspective. As a result, some new frame properties are suggested. So far the new insight this has given has been moderate, but we think that this approach may be useful in frame analysis in the future.

APA, Harvard, Vancouver, ISO, and other styles
3

ABBASI, MUHAMMAD MOHSIN. "Solving Sudoku by Sparse Signal Processing." Thesis, KTH, Signalbehandling, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-160908.

Full text
Abstract:
Sudoku is a discrete constraint satisfaction problem which can be modeled as an underdetermined linear system. This report focuses on applying some new signal processing approaches to solving sudoku, and comparisons with some of the existing approaches are implemented. Since our long-term goal is not sudoku alone, we applied approximate solvers based on optimization theory. A semidefinite relaxation (SDR) convex optimization approach was developed for solving sudoku. The idea of the Iterative Adaptive Algorithm for Amplitude and Phase Estimation (IAA-APES) from array processing is also applied to sudoku, to exploit the sparsity of the sudoku solution as in sensing applications. LIKES and SPICE were also tested on sudoku, and their results are compared with l1-norm minimization, weighted l1-norm minimization, and Sinkhorn balancing. SPICE and l1-norm minimization are equivalent in terms of accuracy, while SPICE is slower. LIKES and the weighted l1-norm are equivalent, and both are more accurate than SPICE and l1-norm minimization. SDR proved to be best when the sudoku solutions are unique; however, its computational complexity is the worst. The accuracy of IAA-APES lies somewhere between SPICE and LIKES, and it is faster than both.
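As a rough illustration of the l1-minimization route (a generic sketch, not the thesis' implementation), sudoku can be encoded as an underdetermined linear system over a 729-dimensional 0/1 indicator vector and relaxed to a linear program; the clue list below is a hypothetical puzzle fragment.

```python
import numpy as np
from scipy.optimize import linprog

def idx(r, c, d):                      # x[idx] = 1 means cell (r, c) holds digit d+1
    return 81 * r + 9 * c + d

rows = []
def add(cells):                        # one constraint: the listed entries sum to 1
    a = np.zeros(729)
    a[[idx(r, c, d) for r, c, d in cells]] = 1
    rows.append(a)

for r in range(9):
    for c in range(9):
        add([(r, c, d) for d in range(9)])                 # each cell holds one digit
for d in range(9):
    for r in range(9):
        add([(r, c, d) for c in range(9)])                 # each digit once per row
    for c in range(9):
        add([(r, c, d) for r in range(9)])                 # once per column
    for br in range(0, 9, 3):
        for bc in range(0, 9, 3):
            add([(br + i, bc + j, d) for i in range(3) for j in range(3)])  # once per box

clues = [(0, 0, 5), (1, 3, 6), (4, 4, 1)]                  # hypothetical (row, col, digit) clues
for r, c, d in clues:
    a = np.zeros(729)
    a[idx(r, c, d - 1)] = 1
    rows.append(a)

A, b = np.vstack(rows), np.ones(len(rows))
# for x >= 0, minimising sum(x) is exactly l1-norm minimisation
res = linprog(np.ones(729), A_eq=A, b_eq=b, bounds=(0, 1), method="highs")
grid = res.x.reshape(9, 9, 9).argmax(axis=2) + 1           # most active digit per cell
```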
APA, Harvard, Vancouver, ISO, and other styles
4

Berinde, Radu. "Advances in sparse signal recovery methods." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/61274.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 96-101).
The general problem of obtaining a useful succinct representation (sketch) of some piece of data is ubiquitous; it has applications in signal acquisition, data compression, sub-linear space algorithms, etc. In this thesis we focus on sparse recovery, where the goal is to recover sparse vectors exactly, and to approximately recover nearly-sparse vectors. More precisely, from the short representation of a vector x, we want to recover a vector x* such that the approximation error ||x - x*|| is comparable to the "tail" min_{x'} ||x - x'||, where x' ranges over all vectors with at most k terms. The sparse recovery problem has been subject to extensive research over the last few years, notably in areas such as data stream computing and compressed sensing. We consider two types of sketches: linear and non-linear. For the linear sketching case, where the compressed representation of x is Ax for a measurement matrix A, we introduce a class of binary sparse matrices as valid measurement matrices. We show that they can be used with the popular geometric "l1 minimization" recovery procedure. We also present two iterative recovery algorithms, Sparse Matching Pursuit and Sequential Sparse Matching Pursuit, that can be used with the same matrices. Thanks to the sparsity of the matrices, the resulting algorithms are much more efficient than the ones previously known, while maintaining high quality of recovery. We also show experiments which establish the practicality of these algorithms. For the non-linear case, we present a better analysis of a class of counter algorithms which process large streams of items and maintain enough data to approximately recover the item frequencies. The class includes the popular FREQUENT and SPACESAVING algorithms. We show that the errors in the approximations generated by these algorithms do not grow with the frequencies of the most frequent elements, but only depend on the remaining "tail" of the frequency vector. Therefore, they provide a non-linear sparse recovery scheme, achieving compression rates that are an order of magnitude better than their linear counterparts.
by Radu Berinde.
M.Eng.
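A minimal sketch of the linear sketching setup described in the abstract, using a sparse binary measurement matrix with a fixed number of ones per column and l1 minimization written as a linear program. Dimensions are illustrative assumptions; this is not the thesis' expander construction or its matching pursuit code.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, d, k = 200, 80, 8, 5           # signal length, sketch length, ones per column, sparsity

A = np.zeros((m, n))                 # sparse binary measurement matrix
for j in range(n):
    A[rng.choice(m, size=d, replace=False), j] = 1.0

x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x                            # the linear sketch

# basis pursuit: min ||x||_1 s.t. Ax = y, as an LP over x = u - v with u, v >= 0
res = linprog(np.ones(2 * n), A_eq=np.hstack([A, -A]), b_eq=y,
              bounds=(0, None), method="highs")
x_hat = res.x[:n] - res.x[n:]
print(np.max(np.abs(x_hat - x)))     # near zero when recovery succeeds
```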
APA, Harvard, Vancouver, ISO, and other styles
5

Perelli, Alessandro <1985>. "Sparse Signal Representation of Ultrasonic Signals for Structural Health Monitoring Applications." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2014. http://amsdottorato.unibo.it/6321/.

Full text
Abstract:
Assessment of the integrity of structural components is of great importance for aerospace systems, land and marine transportation, civil infrastructures and other biological and mechanical applications. Guided wave (GW) based inspections are an attractive means for structural health monitoring. In this thesis, the study and development of techniques for GW ultrasound signal analysis and compression in the context of non-destructive testing of structures is presented. In guided wave inspections, it is necessary to address the problem of dispersion compensation. A signal processing approach based on frequency warping was adopted. This operator maps the frequency axis through a function derived from the group velocity of the test material, and it is used to remove the dependence of the acquired signals on the travelled distance. This processing strategy was fruitfully applied to impact location and damage localization tasks in composite and aluminum panels. It has been shown that, based on this processing tool, low-power embedded systems for GW structural monitoring can be implemented. Finally, a new procedure based on Compressive Sensing has been developed and applied for data reduction. This procedure also has a beneficial effect in enhancing the accuracy of structural defect localization. The algorithm uses the convolutive model of the propagation of ultrasonic guided waves, which takes advantage of a sparse signal representation in the warped frequency domain. The recovery from the compressed samples is based on an alternating minimization procedure which achieves both an accurate reconstruction of the ultrasonic signal and a precise estimation of the waves' time of flight. This information is used to feed hyperbolic or elliptic localization procedures for accurate impact or damage localization.
APA, Harvard, Vancouver, ISO, and other styles
6

Almshaal, Rashwan M. "Sparse Signal Processing Based Image Compression and Inpainting." VCU Scholars Compass, 2016. http://scholarscompass.vcu.edu/etd/4286.

Full text
Abstract:
In this thesis, we investigate the application of compressive sensing and sparse signal processing techniques to image compression and inpainting problems. Considering that many signals are sparse in a certain transform domain, a natural question to ask is: can an image be represented by as few coefficients as possible? In this thesis, we propose a new model for image compression/decompression based on sparse representation. We suggest constructing an overcomplete dictionary by combining two compression matrices, the discrete cosine transform (DCT) matrix and the Hadamard-Walsh transform (HWT) matrix, instead of using only one transformation matrix as in common compression techniques such as JPEG and JPEG2000. We analyze the Structural Similarity Index (SSIM) versus the number of coefficients, measured by the Normalized Sparse Coefficient Rate (NSCR), for our approach. We observe that at the same NSCR, the SSIM of images compressed using the proposed approach is 4%-17% higher than with JPEG. Several algorithms have been used for sparse coding. Based on experimental results, Orthogonal Matching Pursuit (OMP) proves to be the most efficient algorithm in terms of computational time and the quality of the decompressed image. In addition, based on compressive sensing techniques, we propose an image inpainting approach which can be used to fill in missing pixels and reconstruct damaged images. In this approach, we use the Gradient Projection for Sparse Reconstruction (GPSR) algorithm and wavelet transformation with Daubechies filters to reconstruct the damaged images based on the information available in the original image. Experimental results show that our approach outperforms existing image inpainting techniques in terms of computational time, with reasonably good image reconstruction performance.
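The dictionary construction and the sparse coding step can be sketched as follows; a toy version assuming a flattened 8x8 patch and a hand-rolled OMP, not the thesis' exact pipeline.

```python
import numpy as np
from scipy.fft import dct
from scipy.linalg import hadamard

n = 64                                                 # a flattened 8x8 image patch
D = np.hstack([dct(np.eye(n), axis=0, norm="ortho"),   # DCT basis
               hadamard(n) / np.sqrt(n)])              # Hadamard-Walsh basis
# D is an n x 2n overcomplete dictionary

def omp(D, y, n_nonzero):
    """Greedy OMP: pick the atom most correlated with the residual, refit by least squares."""
    support, residual, coef = [], y.copy(), None
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
x_true = np.zeros(2 * n)
x_true[rng.choice(2 * n, size=5, replace=False)] = rng.standard_normal(5)
x_hat = omp(D, D @ x_true, 5)                          # recover the 5-sparse patch code
```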
APA, Harvard, Vancouver, ISO, and other styles
7

Lebed, Evgeniy. "Sparse signal recovery in a transform domain." Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/4171.

Full text
Abstract:
The ability to efficiently and sparsely represent seismic data is becoming an increasingly important problem in geophysics. Over the last thirty years many transforms, such as wavelets, curvelets, contourlets, surfacelets, shearlets, and many other types of 'x-lets', have been developed, and such transforms were leveraged to address the issue of sparse representations. In this work we compare the properties of four of these commonly used transforms, namely the shift-invariant wavelets, complex wavelets, curvelets and surfacelets. We also explore the performance of these transforms for the problem of recovering seismic wavefields from incomplete measurements.
APA, Harvard, Vancouver, ISO, and other styles
8

Charles, Adam Shabti. "Dynamics and correlations in sparse signal acquisition." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/53592.

Full text
Abstract:
One of the most important parts of engineered and biological systems is the ability to acquire and interpret information from the surrounding world accurately and on time-scales relevant to the tasks critical to system performance. This classical concept of efficient signal acquisition has been a cornerstone of signal processing research, spawning traditional sampling theorems (e.g. Shannon-Nyquist sampling), efficient filter designs (e.g. the Parks-McClellan algorithm), novel VLSI chipsets for embedded systems, and optimal tracking algorithms (e.g. Kalman filtering). Traditional techniques have made minimal assumptions on the actual signals being measured and interpreted, essentially only assuming a limited bandwidth. While these assumptions produced the foundational works in signal processing, more recently the ability to collect and analyze large datasets has allowed researchers to see that many important signal classes have much more regularity than finite bandwidth alone. One of the major advances of modern signal processing is to greatly improve on classical results by leveraging more specific signal statistics. By assuming even very broad classes of signals, signal acquisition and recovery can be greatly improved in regimes where classical techniques are extremely pessimistic. One of the most successful signal assumptions to gain popularity in recent years is the notion of sparsity. Under the sparsity assumption, the signal is assumed to be composed of a small number of atomic signals from a potentially large dictionary. This limit on the underlying degrees of freedom (the number of atoms used), as opposed to the ambient dimension of the signal, has allowed for improved signal acquisition, in particular when the number of measurements is severely limited. While techniques for leveraging sparsity have been explored extensively in many contexts, works in this regime typically concentrate on static measurement systems which result in static measurements of static signals. Many systems, however, have non-trivial dynamic components, either in the measurement system's operation or in the nature of the signal being observed. Given the promising prior work leveraging sparsity for signal acquisition and the large number of dynamical systems and signals in important applications, it is critical to understand whether sparsity assumptions are compatible with dynamical systems. Therefore, this work seeks to understand how dynamics and sparsity can be used jointly in various aspects of signal measurement and inference. Specifically, this work looks at three different ways that dynamical systems and sparsity assumptions can interact. In terms of measurement systems, we analyze a dynamical neural network that accumulates signal information over time. We prove a series of bounds on the length of the input signal driving the network that can be recovered from the values at the network nodes [1-9]. We also analyze sparse signals that are generated via a dynamical system (i.e. a series of correlated, temporally ordered, sparse signals). For this class of signals, we present a series of inference algorithms that leverage both dynamics and sparsity information, improving the potential for signal recovery in a host of applications [10-19]. As an extension of dynamical filtering, we show how these dynamic filtering ideas can be expanded to the broader class of spatially correlated signals. Specifically, we explore how sparsity and spatial correlations can improve inference of material distributions and spectral super-resolution in hyperspectral imagery [20-25]. Finally, we analyze dynamical systems that perform optimization routines for sparsity-based inference. We analyze a networked system driven by a continuous-time differential equation and show that such a system is capable of recovering a large variety of different sparse signal classes [26-30].
APA, Harvard, Vancouver, ISO, and other styles
9

Han, Puxiao. "Distributed sparse signal recovery in networked systems." VCU Scholars Compass, 2016. http://scholarscompass.vcu.edu/etd/4630.

Full text
Abstract:
In this dissertation, two classes of distributed algorithms are developed for sparse signal recovery in large sensor networks. All the proposed approaches consist of local computation (LC) and global computation (GC) steps carried out by a group of distributed local sensors, and do not require the local sensors to know the global sensing matrix. These algorithms are based on the original approximate message passing (AMP) and iterative hard thresholding (IHT) algorithms from the area of compressed sensing (CS), also known as sparse signal recovery. For distributed AMP (DiAMP), we develop a communication-efficient algorithm, GCAMP. Numerical results demonstrate that it outperforms the modified thresholding algorithm (MTA), another popular GC algorithm for Top-K queries over distributed large databases. For distributed IHT (DIHT), there is a step size $\mu$ which depends on the $\ell_2$ norm of the global sensing matrix A. The exact computation of $\|A\|_2$ is non-separable. We propose a new method, based on random matrix theory (RMT), to give a very tight statistical upper bound on $\|A\|_2$, and the calculation of that upper bound is separable without any communication cost. In the GC step of DIHT, we develop another algorithm, named GC.K, which is also communication-efficient and outperforms MTA. Then, by adjusting the metric of communication cost, which enables transmission of quantized data, and taking advantage of the correlation of data in adjacent iterations, we develop quantized adaptive GCAMP (Q-A-GCAMP) and quantized adaptive GC.K (Q-A-GC.K) algorithms, leading to a significant improvement in communication savings. Furthermore, we prove that state evolution (SE), a fundamental property of AMP stating that in the high-dimensionality limit the output data are asymptotically Gaussian regardless of the distribution of the input data, also holds for DiAMP. In addition, compared with the most recent theoretical results that SE holds for sensing matrices with independent subgaussian entries, we prove that the universality of SE can be extended to far more general sensing matrices. These two theoretical results provide a strong guarantee of AMP's performance, and greatly broaden its potential applications.
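For reference, a centralized sketch of the IHT iteration shows where $\|A\|_2$ enters; in the distributed setting of the dissertation the exact spectral norm is non-separable and is replaced by a statistical upper bound. This toy version assumes full access to A.

```python
import numpy as np

def iht(A, y, k, n_iter=200):
    """Iterative hard thresholding for k-sparse recovery from y = Ax."""
    mu = 0.99 / np.linalg.norm(A, 2) ** 2    # step size from the spectral norm of A
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x + mu * (A.T @ (y - A @ x))     # gradient step on ||y - Ax||^2 / 2
        keep = np.argsort(np.abs(g))[-k:]    # hard threshold: keep the k largest entries
        x = np.zeros_like(x)
        x[keep] = g[keep]
    return x
```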
APA, Harvard, Vancouver, ISO, and other styles
10

Zachariah, Dave. "Estimation for Sensor Fusion and Sparse Signal Processing." Doctoral thesis, KTH, Signalbehandling, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-121283.

Full text
Abstract:
Progressive developments in computing and sensor technologies during the past decades have enabled the formulation of increasingly advanced problems in statistical inference and signal processing. The thesis is concerned with statistical estimation methods, and is divided into three parts with focus on two different areas: sensor fusion and sparse signal processing. The first part introduces the well-established Bayesian, Fisherian and least-squares estimation frameworks, and derives new estimators. Specifically, the Bayesian framework is applied in two different classes of estimation problems: scenarios in which (i) the signal covariances themselves are subject to uncertainties, and (ii) distance bounds are used as side information. Applications include localization, tracking and channel estimation. The second part is concerned with the extraction of useful information from multiple sensors by exploiting their joint properties. Two sensor configurations are considered here: (i) a monocular camera and an inertial measurement unit, and (ii) an array of passive receivers. New estimators are developed with applications that include inertial navigation, source localization and multiple waveform estimation. The third part is concerned with signals that have sparse representations. Two problems are considered: (i) spectral estimation of signals with power concentrated to a small number of frequencies, and (ii) estimation of sparse signals that are observed by few samples, including scenarios in which they are linearly underdetermined. New estimators are developed with applications that include spectral analysis, magnetic resonance imaging and array processing.


APA, Harvard, Vancouver, ISO, and other styles
11

Yamada, Randy Matthew. "Identification of Interfering Signals in Software Defined Radio Applications Using Sparse Signal Reconstruction Techniques." Thesis, Virginia Tech, 2013. http://hdl.handle.net/10919/50609.

Full text
Abstract:
Software-defined radios have the agility and flexibility to tune performance parameters, allowing them to adapt to environmental changes, adapt to desired modes of operation, and provide varied functionality as needed.  Traditional software-defined radios use a combination of conditional processing and software-tuned hardware to enable these features and will critically sample the spectrum to ensure that only the required bandwidth is digitized.  While flexible, these systems are still constrained to perform only a single function at a time and digitize a single frequency sub-band at a time, possibly limiting the radio's effectiveness.
Radio systems commonly tune hardware manually or use software controls to digitize sub-bands as needed, critically sampling those sub-bands according to the Nyquist criterion.  Recent technology advancements have enabled efficient and cost-effective over-sampling of the spectrum, allowing all bandwidths of interest to be captured for processing simultaneously, a process known as band-sampling.  Simultaneous access to measurements from all of the frequency sub-bands enables both awareness of the spectrum and seamless operation between radio applications, which is critical to many applications.  Further, more information may be obtained about the spectral content of each sub-band from measurements of other sub-bands, which could improve performance in applications such as detecting the presence of interference in weak signal measurements.
This thesis presents a new method for confirming the source of detected energy in weak signal measurements by sampling them directly, then estimating their expected effects.  First, we assume that the detected signal is located within the frequency band as measured, and then we assume that the detected signal is, in fact, interference perceived as a result of signal aliasing.  By comparing the expected effects to the entire measurement and assuming the power spectral density of the digitized bandwidth is sparse, we demonstrate the capability to identify the true source of the detected energy.  We also demonstrate the ability of the method to identify interfering signals not by explicitly sampling them, but rather by measuring the signal aliases that they produce.  Finally, we demonstrate that by leveraging techniques developed in the field of Compressed Sensing, the method can recover signal aliases by analyzing less than 25 percent of the total spectrum.
Master of Science
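The core aliasing hypothesis can be illustrated with the standard folding relation; a generic sketch with made-up numbers, not the thesis' estimator.

```python
def alias_frequency(f, fs):
    """Apparent frequency of a real tone at f Hz after sampling at fs Hz."""
    folded = f % fs
    return min(folded, fs - folded)

# a hypothetical 380 Hz interferer sampled at 100 Hz appears at 20 Hz
print(alias_frequency(380.0, 100.0))   # 20.0
```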
APA, Harvard, Vancouver, ISO, and other styles
12

Jafri, Ahsan. "Array signal processing based on traditional and sparse arrays." Thesis, University of Sheffield, 2019. http://etheses.whiterose.ac.uk/23072/.

Full text
Abstract:
Array signal processing is based on using an array of sensors to receive impinging signals. The received data is either spatially filtered to focus on signals from a desired direction, or used to estimate a parameter of the source signal such as direction of arrival (DOA), polarization or source power. Spatial filtering, also known as beamforming, and DOA estimation are integral parts of array signal processing, and this thesis is aimed at solving some key problems in these two areas. Wideband beamforming has numerous applications in the bandwidth-hungry data traffic of the present-day world. Several techniques exist to design fixed wideband beamformers based on traditional arrays such as the uniform linear array (ULA). Among these techniques, the least-squares-based eigenfilter method is a key technique which has been used extensively in filter and wideband beamformer design. The first contribution of this thesis is a critical analysis of the standard eigenfilter method: a serious flaw in the design formulation which generates inconsistent design performance is highlighted, and an additional constraint is added to stabilize the achieved design. Simulation results show the validity and significance of the proposed method. Traditional arrays based on ULAs have limited applications in array signal processing due to the large number of sensors required, and this problem has been addressed by the application of sparse arrays. Sparse arrays have been exploited from the perspective of their difference co-array structures, which provide a significantly higher number of degrees of freedom (DOFs) than ULAs for the same number of sensors. These DOFs (consecutive and unique lags) are utilized in DOA estimation with the help of difference co-array based DOA estimators. Types of sparse arrays include the minimum redundancy array (MRA), minimum hole array (MHA), nested array, prototype coprime array, conventional coprime array, coprime array with compressed interelement spacing (CACIS), coprime array with displaced subarrays (CADiS) and super nested array. As a second contribution of this thesis, a new sparse array termed the thinned coprime array (TCA) is proposed, which holds all the properties of a conventional coprime array but with $\lceil M/2 \rceil$ fewer sensors, where $M$ is the number of sensors of a subarray in the conventional structure. The TCA possesses an improved level of sparsity and is robust against mutual coupling compared to other sparse arrays. In addition, the TCA holds a higher number of DOFs utilizable for DOA estimation using a variety of methods. The TCA also shows lower estimation error compared to super nested arrays and the MRA as the array size increases. Although the TCA holds numerous desirable features, the number of unique lags offered by the TCA is close to that of the sparsest CADiS and the nested array, and significantly lower than that of the MRA, which limits the estimation error performance offered by the TCA through compressive sensing (CS) based methods. In this direction, the structure of the TCA is studied to explore the possibility of an array which can provide a significantly higher number of unique lags with improved sparsity for a given number of sensors. The result of this investigation is the third contribution of this thesis: a new sparse array, the displaced thinned coprime array with additional sensor (DiTCAAS), which is based on a displaced version of the TCA.
The displacement of the subarrays increases the number of unique lags, but the minimum spacing between the sensors becomes an integer multiple of half a wavelength. To avoid spatial aliasing, an additional sensor is added at half a wavelength from one of the sensors of the displaced subarray. The proposed placement of the additional sensor generates a significantly higher number of unique lags for DiTCAAS, even more than the DOFs provided by the MRA. Due to its improved sparsity and higher number of unique lags, DiTCAAS produces the lowest estimation error and robustness against heavy mutual coupling compared to super nested arrays, the MRA, the TCA and sparse CADiS with CS-based DOA estimation.
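The difference co-array bookkeeping behind these DOF counts is easy to reproduce for the conventional coprime geometry; a textbook baseline sketch, not TCA or DiTCAAS themselves.

```python
import numpy as np

M, N = 4, 5                              # a coprime pair
# conventional coprime array: M sensors at spacing N plus 2N sensors at spacing M,
# positions in units of half a wavelength (generic geometry, assumed for illustration)
pos = np.unique(np.concatenate([N * np.arange(M), M * np.arange(2 * N)]))
lags = np.unique((pos[:, None] - pos[None, :]).ravel())

unique_lags = lags[lags >= 0].size       # unique non-negative lags (DOFs for co-array methods)
lagset = set(lags.tolist())
consec = 0
while consec + 1 in lagset:              # longest run of consecutive lags starting at zero
    consec += 1
print(len(pos), unique_lags, consec)
```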
APA, Harvard, Vancouver, ISO, and other styles
13

Maraš, Mirjana. "Learning efficient signal representation in sparse spike-coding networks." Thesis, Paris Sciences et Lettres (ComUE), 2019. http://www.theses.fr/2019PSLEE023.

Full text
Abstract:
The complexity of sensory input is paralleled by the complexity of its representation in the neural activity of biological systems. Starting from the hypothesis that biological networks are tuned to achieve maximal efficiency and robustness, we investigate how efficient representation can be accomplished in networks with experimentally observed local connection probabilities and synaptic dynamics. We develop a Lasso regularized local synaptic rule, which optimizes the number and efficacy of recurrent connections. The connections that impact the efficiency the least are pruned, and the strength of the remaining ones is optimized for efficient signal representation. Our theory predicts that the local connection probability determines the trade-off between the number of population spikes and the number of recurrent synapses, which are developed and maintained in the network. The more sparsely connected networks represent signals with higher firing rates than those with denser connectivity. The variability of observed connection probabilities in biological networks could then be seen as a consequence of this trade-off, and related to different operating conditions of the circuits. The learned recurrent connections are structured, with most connections being reciprocal. The dimensionality of the recurrent weights can be inferred from the network’s connection probability and the dimensionality of the feedforward input. The optimal connectivity of a network with synaptic delays is somewhere at an intermediate level, neither too sparse nor too dense. Furthermore, when we add another biological constraint, adaptive regulation of firing rates, our learning rule leads to an experimentally observed scaling of the recurrent weights. Our work supports the notion that biological micro-circuits are highly organized and principled. A detailed examination of the local circuit organization can help us uncover the finer aspects of the principles which govern sensory representation.
APA, Harvard, Vancouver, ISO, and other styles
14

Bailey, Eric Stanton. "Sparse Frequency Laser Radar Signal Modeling and Doppler Processing." University of Dayton / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1271937372.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Karseras, Evripidis. "Hierarchical Bayesian models for sparse signal recovery and sampling." Thesis, Imperial College London, 2015. http://hdl.handle.net/10044/1/32102.

Full text
Abstract:
This thesis builds upon the problem of sparse signal recovery from the Bayesian standpoint. The advantages of employing Bayesian models are underscored, the most important being the ease with which a model can be expanded or altered, leading to a fresh class of algorithms. The thesis fills several gaps between sparse recovery algorithms and sparse Bayesian models: first, the lack of global performance guarantees for the latter, and second, the question of what the significant differences between the two are. These questions are answered by providing a refined theoretical analysis and a new class of algorithms that combines the benefits of classic recovery algorithms and sparse Bayesian modelling. The said Bayesian techniques find application in tracking dynamic sparse signals, something impossible under the Kalman filter approach. Another innovation of this thesis is Bayesian models for signals whose components are known a priori to exhibit a certain statistical trend. These situations require that the model enforce a given statistical bias on the solutions. Existing Bayesian models can cope with this input, but the algorithms to carry out the task are computationally expensive. Several ways are proposed to remedy the associated problems while still attaining some form of optimality. The proposed framework finds application in multipath channel estimation, with some very promising results. Not far from the same area lies Approximate Message Passing, which offers extremely low-complexity algorithms for sparse recovery together with a powerful analysis framework. Some results are derived regarding the differences between these approximate methods and the aforementioned models; this can be seen as preliminary work for future research. Finally, the thesis presents a hardware implementation of a wideband spectrum analyser based on sparse recovery methods. The hardware consists of a Field-Programmable Gate Array coupled with an Analogue to Digital Converter. Some critical results are drawn regarding the gains and viability of such methods.
APA, Harvard, Vancouver, ISO, and other styles
16

Dong, Jing. "Sparse analysis model based dictionary learning and signal reconstruction." Thesis, University of Surrey, 2016. http://epubs.surrey.ac.uk/811095/.

Full text
Abstract:
Sparse representation has been studied extensively in the past decade in a variety of applications, such as denoising, source separation and classification. Earlier effort was focused on the well-known synthesis model, where a signal is decomposed as a linear combination of a few atoms of a dictionary. However, the analysis model, a counterpart of the synthesis model, did not receive much attention until recent years. The analysis model takes a different viewpoint on sparse representation: it assumes that the product of an analysis dictionary and a signal is sparse. Compared with the synthesis model, this model tends to be more expressive in representing signals, as a much richer union of subspaces can be described. This thesis focuses on the analysis model and aims to address its two main challenges: analysis dictionary learning (ADL) and signal reconstruction. In the ADL problem, the dictionary is learned from a set of training samples so that the signals can be represented sparsely based on the analysis model, thus offering the potential to fit the signals better than pre-defined dictionaries. In existing ADL algorithms, such as the well-known Analysis K-SVD, the dictionary atoms are updated sequentially. The first part of this thesis presents two novel analysis dictionary learning algorithms that update the atoms simultaneously. Specifically, the Analysis Simultaneous Codeword Optimization (Analysis SimCO) algorithm is proposed, by adapting the SimCO algorithm originally proposed for the synthesis model. In Analysis SimCO, the dictionary is updated using optimization on manifolds, under $\ell_2$-norm constraints on the dictionary atoms. This framework allows multiple dictionary atoms to be updated simultaneously in each iteration. However, similar to existing ADL algorithms, the dictionary learned by Analysis SimCO may contain similar atoms. To address this issue, Incoherent Analysis SimCO is proposed, employing a coherence constraint and introducing a decorrelation step to enforce it. The competitive performance of the proposed algorithms is demonstrated in experiments on recovering synthetic dictionaries and removing noise from images, as compared with existing ADL methods. The second part of this thesis studies how to reconstruct signals with learned dictionaries under the analysis model. This is demonstrated on a challenging application: multiplicative noise removal (MNR) in images. In existing sparsity-motivated methods, the MNR problem is addressed using pre-defined dictionaries, or dictionaries learned under the synthesis model. However, the potential of analysis dictionary learning for the MNR problem has not been investigated. In this thesis, analysis dictionary learning is applied to MNR, leading to two new algorithms. In the first algorithm, a dictionary learned under the analysis model is employed to form a regularization term which can preserve image details while removing multiplicative noise. In the second algorithm, in order to further improve the recovery quality of smooth areas in images, a smoothness regularizer is introduced into the reconstruction formulation. This regularizer can be seen as an enhanced Total Variation (TV) term with an additional parameter controlling the level of smoothness. To address the optimization problem of this model, the Alternating Direction Method of Multipliers (ADMM) is adapted and a relaxation technique is developed to allow variables to be updated flexibly.
Experimental results show the superior performance of the proposed algorithms compared with three sparsity- or TV-based algorithms over a range of noise levels.
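The analysis-model viewpoint the thesis builds on can be made concrete with a toy example: a signal that is not sparse itself, but whose product with an analysis operator is. The finite-difference operator below is a generic stand-in, not a learned dictionary.

```python
import numpy as np

n = 120
# piecewise-constant signal: dense in the sample domain, but its analysis
# coefficients under a finite-difference operator are sparse
x = np.concatenate([np.full(50, 1.0), np.full(40, -0.5), np.full(30, 2.0)])
Omega = np.eye(n)[1:] - np.eye(n)[:-1]     # (n-1) x n first-difference analysis operator
print(np.count_nonzero(x), np.count_nonzero(Omega @ x))   # 120 vs. 2
```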
APA, Harvard, Vancouver, ISO, and other styles
17

Diethe, T. R. "Sparse machine learning methods with applications in multivariate signal processing." Thesis, University College London (University of London), 2010. http://discovery.ucl.ac.uk/20450/.

Full text
Abstract:
This thesis details theoretical and empirical work that draws from two main subject areas: Machine Learning (ML) and Digital Signal Processing (DSP). A unified general framework is given for the application of sparse machine learning methods to multivariate signal processing. In particular, methods that enforce sparsity will be employed for reasons of computational efficiency, regularisation, and compressibility. The methods presented can be seen as modular building blocks that can be applied to a variety of applications. Application specific prior knowledge can be used in various ways, resulting in a flexible and powerful set of tools. The motivation for the methods is to be able to learn and generalise from a set of multivariate signals. In addition to testing on benchmark datasets, a series of empirical evaluations on real world datasets were carried out. These included: the classification of musical genre from polyphonic audio files; a study of how the sampling rate in a digital radar can be reduced through the use of Compressed Sensing (CS); analysis of human perception of different modulations of musical key from Electroencephalography (EEG) recordings; classification of genre of musical pieces to which a listener is attending from Magnetoencephalography (MEG) brain recordings. These applications demonstrate the efficacy of the framework and highlight interesting directions of future research.
APA, Harvard, Vancouver, ISO, and other styles
18

Crandall, Robert. "Nonlocal and Randomized Methods in Sparse Signal and Image Processing." Thesis, The University of Arizona, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10840330.

Full text
Abstract:

This thesis focuses on the topics of sparse and non-local signal and image processing. In particular, I present novel algorithms that exploit a combination of sparse and non-local data models to perform tasks such as compressed-sensing reconstruction, image compression, and image denoising. The contributions in this thesis are: (1) a fast, approximate minimum mean-squared error (MMSE) estimation algorithm for sparse signal reconstruction, called Randomized Iterative Hard Thresholding (RIHT). This algorithm has applications in compressed sensing, image denoising, and other sparse inverse problems. (2) An extension to the Block-Matching 3D (BM3D) denoising algorithm that matches blocks at different rotation angles. This algorithm improves on the performance of BM3D in terms of both visual quality and quantitative denoising accuracy. (3) A novel non-local, causal image prediction algorithm, and a corresponding codec implementation that achieves state-of-the-art lossless compression performance on 8-bit grayscale images. (4) A deep convolutional neural network (CNN) architecture that achieves state-of-the-art results in blind image denoising, and a novel non-local deep network architecture that further improves performance.

APA, Harvard, Vancouver, ISO, and other styles
19

Porter, Richard J. "Non-Gaussian and block based models for sparse signal recovery." Thesis, University of Bristol, 2016. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.702908.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Hargreaves, Brock Edward. "Sparse signal recovery : analysis and synthesis formulations with prior support information." Thesis, University of British Columbia, 2014. http://hdl.handle.net/2429/46448.

Full text
Abstract:
The synthesis model for signal recovery has been the model of choice for many years in compressive sensing. Various weighting schemes that use prior support information to adjust the objective function associated with the synthesis model have been shown to improve the accuracy of signal recovery. Generally, even with no prior knowledge of the support, iterative methods can build support estimates and incorporate them into the recovery, which has also been shown to increase the speed and accuracy of the recovery. However, when the original signal is sparse with respect to a redundant dictionary (rather than an orthonormal basis) there is a counterpart model to synthesis, namely the analysis model, which has been less popular but has recently attracted more attention. The analysis model is much less understood, and thus there are fewer theorems available in both the context of non-weighted and weighted signal recovery. In this thesis, we investigate weighting in both the analysis model and the synthesis model in weighted l1 minimization. Theoretical guarantees on reconstruction and various weighting strategies for each model are discussed. We give conditions for weighted synthesis recovery with frames which do not require strict incoherency conditions, based on recent results for regular synthesis with frames using optimal dual l1 analysis. A novel weighting technique is introduced in the analysis case which outperforms its traditional counterparts in the case of seismic wavefield reconstruction. We also introduce a weighted split Bregman algorithm for analysis and optimal dual analysis. We then investigate these techniques on seismic data and synthetically created test data using a variety of frames.
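The weighting idea common to both models can be sketched for the synthesis case as a weighted l1 linear program; a generic illustration with an assumed weight omega, not the thesis' algorithms or its split Bregman solver.

```python
import numpy as np
from scipy.optimize import linprog

def weighted_l1(A, y, support_est, omega=0.3):
    """min sum_i w_i |x_i|  s.t.  Ax = y, down-weighting an estimated support.

    omega < 1 rewards entries believed to be in the support; omega = 1
    recovers plain l1 minimization. A sketch of the weighting idea only.
    """
    n = A.shape[1]
    w = np.ones(n)
    w[list(support_est)] = omega
    # split x = u - v with u, v >= 0 to obtain a linear program
    res = linprog(np.concatenate([w, w]),
                  A_eq=np.hstack([A, -A]), b_eq=y,
                  bounds=(0, None), method="highs")
    return res.x[:n] - res.x[n:]
```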
APA, Harvard, Vancouver, ISO, and other styles
21

NETO, MARIO HENRIQUE ALVES SOUTO. "SPARSE STATISTICAL MODELLING WITH APPLICATIONS TO RENEWABLE ENERGY AND SIGNAL PROCESSING." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2014. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=24980@1.

Full text
Abstract:
Motivated by the challenges of processing the vast amount of available data, recent research in the flourishing field of high-dimensional statistics is bringing new techniques for modeling and drawing inferences from large amounts of data. Simultaneously, other fields like signal processing and optimization are also producing new methods to deal with large-scale problems. More particularly, this work is focused on the theories and methods based on l1-regularization. After a comprehensive review of the l1-norm as a tool for finding sparse solutions, we study the LASSO shrinkage method more deeply. In order to show how the LASSO can be used for a wide range of applications, we exhibit a case study on sparse signal processing. Based on this idea, we present the l1 level-slope filter. Experimental results are given for an application in the field of fiber optics communication. For the final part of the thesis, a new estimation method is proposed for high-dimensional models with periodic variance. The main idea of this novel methodology is to combine sparsity, induced by the l1-regularization, with the maximum likelihood criterion. Additionally, this novel methodology is used for building a monthly stochastic model for wind and hydro inflow. Simulations and forecasting results are presented for a real case study involving fifty Brazilian renewable power plants.
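For context, the LASSO at the heart of this work is routinely solved by proximal-gradient iterations. A minimal sketch of generic ISTA, not the l1 level-slope filter itself:

```python
import numpy as np

def ista(A, y, lam, n_iter=500):
    """Proximal-gradient (ISTA) solver for the LASSO:
    minimize 0.5 * ||Ax - y||_2^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L       # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft-thresholding
    return x
```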
APA, Harvard, Vancouver, ISO, and other styles
22

Malioutov, Dmitry M. 1981. "A sparse signal reconstruction perspective for source localization with sensor arrays." Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/87445.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2003.
Includes bibliographical references (p. 167-172).
by Dmitry M. Malioutov.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
23

Katireddy, Harshitha Reddy, and Sreemanth Sidda. "A Novel Shoeprint Enhancement method for Forensic Evidence Using Sparse Representation method." Thesis, Blekinge Tekniska Högskola, Institutionen för tillämpad signalbehandling, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-15620.

Full text
Abstract:
Shoeprints are often recovered at crime scenes and are the most abundant form of evidence there; in some cases they have proved to be as accurate as fingerprints. The basis for shoeprint impression evidence is determining the source of a shoeprint impression recovered from a crime scene. The shoeprint evidence collected is often noisy and unclear. To obtain a clear image, the shoeprint evidence should be enhanced by de-noising and improving the quality of the picture. In this thesis, we introduce a novel shoeprint enhancement algorithm based on sparse representation, obtaining a complete dictionary from a set of shoeprint patches which allows us to represent them as a sparse linear combination of dictionary atoms. In the proposed algorithm, we first pre-process the image with the SMQT method, and then a first-level Daubechies DWT is applied. The SVD of the image is computed, and the Inverse Discrete Wavelet Transform (IDWT) is applied. On the singular-value-decomposed image, l1-norm minimization sparse representation employed by the K-SVD algorithm is computed, where the image is divided into predefined shoeprint image patches of size 8 by 8. Shoeprint images from three different databases with different image quality are tested. The performance of the algorithm is assessed by comparing the original shoeprint image and the image obtained after the proposed algorithm, based on objective and subjective parameters such as PSNR, MSE, and MOS. The results show that the proposed method gives better performance in terms of contrast (variance) and brightness (mean). Finally, we conclude that the proposed algorithm enhances the image better than the existing DWT-SVD method.
APA, Harvard, Vancouver, ISO, and other styles
24

Samarasinghe, Kasun M. "Sparse Signal Reconstruction Modeling for MEG Source Localization Using Non-convex Regularizers." University of Cincinnati / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1439304367.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Davis, Philip. "Quantifying the Gains of Compressive Sensing for Telemetering Applications." International Foundation for Telemetering, 2011. http://hdl.handle.net/10150/595775.

Full text
Abstract:
ITC/USA 2011 Conference Proceedings / The Forty-Seventh Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2011 / Bally's Las Vegas, Las Vegas, Nevada
In this paper we study a new streaming Compressive Sensing (CS) technique that aims to replace high-speed Analog-to-Digital Converters (ADCs) for certain classes of signals and to reduce the artifacts that arise from block processing when conventional CS is applied to continuous signals. We compare the performance of both streaming and block processing methods on several types of signals and quantify the signal reconstruction quality when packet loss is applied to the transmitted sampled data.
APA, Harvard, Vancouver, ISO, and other styles
26

Seiler, Jürgen [Verfasser]. "Signal Extrapolation Using Sparse Representations and its Applications in Video Communication / Jürgen Seiler." München : Verlag Dr. Hut, 2011. http://d-nb.info/1018982744/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Shekaramiz, Mohammad. "Sparse Signal Recovery Based on Compressive Sensing and Exploration Using Multiple Mobile Sensors." DigitalCommons@USU, 2018. https://digitalcommons.usu.edu/etd/7384.

Full text
Abstract:
The work in this dissertation is focused on two areas within the general discipline of statistical signal processing. First, several new algorithms are developed and exhaustively tested for solving the inverse problem of compressive sensing (CS). CS is a recently developed sub-sampling technique for signal acquisition and reconstruction which is more efficient than traditional Nyquist sampling. It enables compressed data acquisition approaches that directly acquire just the important information of the signal of interest. Many natural signals are sparse or compressible in some domain, such as the pixel domain of images, time, frequency, and so forth. The notion of compressibility or sparsity here means that, in some domain, many coefficients of the signal of interest are either zero or of low amplitude while only a few coefficients dominate. Therefore, we may not need to take many direct or indirect samples from the signal or phenomenon to capture its important information. As a simple example, one can think of a system of linear equations with N unknowns. Traditional methods suggest solving N linearly independent equations to solve for the unknowns. However, if many of the variables are known to be zero or of low amplitude, then intuitively speaking, there will be no need to have N equations. Unfortunately, in many real-world problems, the number of non-zero (effective) variables is unknown. In these cases, CS is capable of solving for the unknowns in an efficient way. In other words, it enables us to collect the important information of the sparse signal with a small number of measurements. Then, given that the signal is sparse, extracting the important information of the signal is the challenge that needs to be addressed. Since most of the existing recovery algorithms in this area need some prior knowledge or parameter tuning, applying them to real-world problems with good performance is difficult. In this dissertation, several new CS algorithms are proposed for the recovery of sparse signals. The proposed algorithms mostly do not require any prior knowledge of the signal or its structure. In fact, these algorithms can learn the underlying structure of the signal based on the collected measurements and successfully reconstruct the signal with high probability. The other merit of the proposed algorithms is that they are generally flexible in incorporating any prior knowledge on the noise, the sparsity level, and so on. The second part of this study is devoted to the deployment of mobile sensors in circumstances where the number of sensors is inadequate to sample the entire region. In such cases, deciding where to deploy the sensors, so as to both explore new regions and refine knowledge in already visited areas, is of high importance. Here, a new framework is proposed to decide on the trajectories of the sensors as they collect measurements. The proposed framework has two main stages. The first stage performs interpolation/extrapolation to estimate the phenomenon of interest at unseen locations, and the second stage decides on an informative trajectory based on the collected and estimated data. This framework can be applied to various problems such as tuning the constellation of sensor-bearing satellites, robotics, or any type of adaptive sensor placement/configuration problem. Depending on the problem, some modifications of the constraints in the framework may be needed. As an application of this work, the proposed framework is applied to a surrogate problem related to the constellation adjustment of sensor-bearing satellites.
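None of the dissertation's Bayesian solvers is reproduced here, but the inverse problem described above is easy to make concrete: recover a k-sparse vector from m << N linear measurements. Below is a minimal sketch using the classical orthogonal matching pursuit baseline; all names and dimensions are illustrative, not the author's algorithms.

import numpy as np

def omp(A, y, k):
    # Orthogonal Matching Pursuit: greedily select k columns of A to explain y.
    r, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ r))))   # most correlated atom
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ x_s                       # update the residual
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                          # ambient size, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
print(np.linalg.norm(omp(A, A @ x_true, k) - x_true))     # ~0 in the noiseless case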
APA, Harvard, Vancouver, ISO, and other styles
28

Sudhakara, Murthy Prasad. "Sparse models and convex optimisation for convolutive blind source separation." Rennes 1, 2011. https://tel.archives-ouvertes.fr/tel-00586610.

Full text
Abstract:
Blind source separation from underdetermined mixtures is usually a two-step process: the estimation of the mixing filters, followed by that of the sources. An enabling assumption is that the sources are sparse and disjoint in the time-frequency domain. For convolutive mixtures, the solution is not straightforward due to the permutation and scaling ambiguities. The sparsity of the filters in the time domain is also an enabling factor for blind filter estimation approaches that are based on the cross-relation. However, such approaches are restricted to the single-source setting. In this thesis, we jointly exploit the sparsity of the sources and mixing filters for blind estimation of sparse filters from stereo convolutive mixtures of several sources. First, we show why the sparsity of the filters can help solve the permutation problem in convolutive source separation, in the absence of scaling. Then, we propose a two-stage estimation framework, which is primarily based on the time-frequency domain cross-relation and an ℓ1 minimisation formulation: a) a clustering step to group, for each source, the time-frequency points where only that source is active; b) a convex optimisation step which estimates the filters. The resulting algorithms are assessed on audio source separation and filter estimation problems.
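The time-domain cross-relation mentioned above can be checked in a few lines: when a single source s reaches two microphones through filters a1 and a2, convolving each observation with the other filter yields the same signal, and this is the identity the thesis lifts to the time-frequency domain and regularises with an ℓ1 penalty. A rough numerical sketch with illustrative sizes:

import numpy as np

rng = np.random.default_rng(1)
s = rng.standard_normal(512)                    # a single active source
a1, a2 = rng.standard_normal(8), rng.standard_normal(8)   # the two mixing filters
x1, x2 = np.convolve(a1, s), np.convolve(a2, s)           # stereo observations
# cross-relation: a2 * x1 == a1 * x2, since convolution commutes and associates
print(np.max(np.abs(np.convolve(a2, x1) - np.convolve(a1, x2))))   # ~0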
APA, Harvard, Vancouver, ISO, and other styles
29

Martinez, Juan Enrique Castorera. "Remote-Sensed LIDAR Using Random Sampling and Sparse Reconstruction." International Foundation for Telemetering, 2011. http://hdl.handle.net/10150/595760.

Full text
Abstract:
ITC/USA 2011 Conference Proceedings / The Forty-Seventh Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2011 / Bally's Las Vegas, Las Vegas, Nevada
In this paper, we propose a new, low-complexity approach for the design of laser radar (LIDAR) systems for use in applications in which the system is wirelessly transmitting its data from a remote location back to a command center for reconstruction and viewing. Specifically, the proposed system collects random samples in different portions of the scene, with the density of sampling controlled by the local scene complexity. The range samples are transmitted as they are acquired through a wireless communications link to a command center, and a constrained absolute-error optimization procedure of the type commonly used for compressive sensing/sampling is applied. The key difficulty in the proposed approach is estimating the local scene complexity without densely sampling the scene and thus increasing the complexity of the LIDAR front end. We show here, using simulated data, that the complexity of the scene can be accurately estimated from the return pulse shape using a finite-moments approach. Furthermore, we find that such complexity estimates correspond strongly to the surface reconstruction error that is achieved using the constrained optimization algorithm with a given number of samples.
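The paper does not spell out its finite-moments estimator, so the following is only a plausible sketch of the idea: summarize each return pulse by a few normalized central moments, which grow as the local surface gets more complex. The function name and moment orders are assumptions.

import numpy as np

def pulse_moments(t, p, orders=(2, 3, 4)):
    p = p / np.trapz(p, t)                   # treat the return pulse as a density
    mu = np.trapz(t * p, t)                  # mean arrival time
    return [np.trapz((t - mu) ** k * p, t) for k in orders]

t = np.linspace(0.0, 10.0, 1000)
flat = np.exp(-((t - 5.0) / 0.3) ** 2)                  # narrow pulse: flat surface
rough = flat + 0.5 * np.exp(-((t - 6.0) / 0.8) ** 2)    # spread pulse: complex scene
print(pulse_moments(t, flat))
print(pulse_moments(t, rough))               # noticeably larger spread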
APA, Harvard, Vancouver, ISO, and other styles
30

Asif, Muhammad Salman. "Dynamic compressive sensing: sparse recovery algorithms for streaming signals and video." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/49106.

Full text
Abstract:
This thesis presents compressive sensing algorithms that utilize system dynamics in the sparse signal recovery process. These dynamics may arise due to a time-varying signal, streaming measurements, or an adaptive signal transform. Compressive sensing theory has shown that under certain conditions, a sparse signal can be recovered from a small number of linear, incoherent measurements. The recovery algorithms, however, for the most part are static: they focus on finding the solution for a fixed set of measurements, assuming a fixed (sparse) structure of the signal. In this thesis, we present a suite of sparse recovery algorithms that cater to various dynamical settings. The main contributions of this research can be classified into the following two categories: 1) efficient algorithms for fast updating of L1-norm minimization problems in dynamical settings; 2) efficient modeling of the signal dynamics to improve the reconstruction quality; in particular, we use inter-frame motion in videos to improve their reconstruction from compressed measurements. Dynamic L1 updating: We present homotopy-based algorithms for quickly updating the solution for various L1 problems whenever the system changes slightly. Our objective is to avoid solving an L1-norm minimization program from scratch; instead, we use information from an already solved L1 problem to quickly update the solution for a modified system. Our proposed updating schemes can incorporate time-varying signals, streaming measurements, iterative reweighting, and data-adaptive transforms. Classical signal processing methods, such as recursive least squares and the Kalman filter, provide solutions for similar problems in the least squares framework, where each solution update requires a simple low-rank update. We use homotopy continuation for updating L1 problems, which requires a series of rank-one updates along the so-called homotopy path. Dynamic models in video: We present a compressive-sensing based framework for the recovery of a video sequence from incomplete, non-adaptive measurements. We use a linear dynamical system to describe the measurements and the temporal variations of the video sequence, where adjacent images are related to each other via inter-frame motion. Our goal is to recover a quality video sequence from the available set of compressed measurements, for which we exploit the spatial structure using sparse representations of individual images in a spatial transform, and the temporal structure, exhibited by dependencies among neighboring images, using inter-frame motion. We discuss two problems in this work: low-complexity video compression and accelerated dynamic MRI. Even though the processes for recording compressed measurements are quite different in these two problems, the procedure for reconstructing the videos is very similar.
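The homotopy updates themselves are beyond a short excerpt, but the payoff they target, reusing an existing L1 solution instead of re-solving from scratch, can be imitated with a plain proximal-gradient (ISTA) solver that is warm-started when a streaming measurement row arrives. A rough stand-in, not the thesis's algorithm; all parameters are illustrative.

import numpy as np

def soft(u, t):
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def ista(A, y, lam, x0, iters=200):
    # proximal-gradient iterations for min_x 0.5*||y - Ax||^2 + lam*||x||_1
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = x0.copy()
    for _ in range(iters):
        x = soft(x + A.T @ (y - A @ x) / L, lam / L)
    return x

rng = np.random.default_rng(0)
m, n = 80, 200
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n); x_true[:8] = rng.standard_normal(8)
y = A @ x_true
x = ista(A, y, lam=0.01, x0=np.zeros(n))     # cold start for the first solve
a_new = rng.standard_normal(n) / np.sqrt(m)  # a new streaming measurement arrives
A, y = np.vstack([A, a_new]), np.append(y, a_new @ x_true)
x = ista(A, y, lam=0.01, x0=x, iters=30)     # warm start: few iterations suffice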
APA, Harvard, Vancouver, ISO, and other styles
31

Axer, Steffen [Verfasser]. "Estimating Traffic Signal States by Exploiting Sparse Low-Frequency Floating Car Data / Steffen Axer." Aachen : Shaker, 2017. http://d-nb.info/1149278625/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Wang, Tianming. "Non-convex methods for spectrally sparse signal reconstruction via low-rank Hankel matrix completion." Diss., University of Iowa, 2018. https://ir.uiowa.edu/etd/6331.

Full text
Abstract:
Spectrally sparse signals arise in many applications of signal processing. A spectrally sparse signal is a mixture of a few undamped or damped complex sinusoids. An important problem in practice is to reconstruct such a signal from partial time-domain samples. Previous convex methods have the drawback that the computation and storage costs do not scale well with respect to the signal length. This common drawback restricts their applicability to large and high-dimensional signals. The reconstruction of a spectrally sparse signal from partial samples can be formulated as a low-rank Hankel matrix completion problem. We develop two fast and provable non-convex solvers, FIHT and PGD. FIHT is based on Riemannian optimization while PGD is based on Burer-Monteiro factorization with projected gradient descent. Suppose the underlying spectrally sparse signal is of model order r and length n. We prove that O(r^2log^2(n)) and O(r^2log(n)) random samples are sufficient for FIHT and PGD respectively to achieve exact recovery with overwhelming probability. In every iteration, the computation and storage costs of both methods are linear in the signal length n. Therefore they are suitable for handling spectrally sparse signals of large size, which may be prohibitive for previous convex methods. Extensive numerical experiments verify their recovery abilities as well as their computational efficiency, and also show that the algorithms are robust to noise and mis-specification of the model order. Comparing the two solvers, FIHT is faster for easier problems while PGD has a better recovery ability.
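The structural fact both solvers exploit is easy to verify numerically: a mixture of r damped complex sinusoids produces a Hankel matrix of rank exactly r. A small sketch with arbitrary parameters:

import numpy as np
from scipy.linalg import hankel

n, r = 127, 3
t = np.arange(n)
freqs, damps = [0.11, 0.27, 0.35], [0.0, 0.01, 0.005]
x = sum(np.exp((2j * np.pi * f - d) * t) for f, d in zip(freqs, damps))
H = hankel(x[: n // 2 + 1], x[n // 2 :])     # square Hankel matrix built from x
print(np.linalg.matrix_rank(H))              # prints 3, the model order r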
APA, Harvard, Vancouver, ISO, and other styles
33

Apostolopoulos, Theofanis. "Heuristics for computing sparse solutions for ill-posed inverse problems in signal and image recovery." Thesis, King's College London (University of London), 2016. https://kclpure.kcl.ac.uk/portal/en/theses/heuristics-for-computing-sparse-solutions-for-illposed-inverse-problems-in-signal-and-image-recovery(acfde268-5d4e-4c6a-8a15-f94b33b62c72).html.

Full text
Abstract:
For almost a century, the famous Shannon-Nyquist theorem has been very important in digital signal processing applications as the basis for the number of samples required to efficiently reconstruct any type of signal, such as speech and image data. However, signals and images are mainly stored and processed in huge files, which require more storage space, take longer to transmit, and demand a large computational cost for processing. For this purpose many compression techniques have been introduced, including the emerging field of Compressed Sensing (CS). CS is a novel and fast sampling and recovery process, which has attracted considerable research interest with several new application areas. By exploiting the signal and the measurement structure we are able to recover a signal from what was previously considered, according to the Shannon-Nyquist criterion, as highly under-sampled measurements. This reconstruction is accomplished by finding the sparsest solution for an ill-posed system of linear equations, which is an NP-hard combinatorial optimisation problem. This thesis focuses on the l0-norm based minimisation problem which arises from sparse signal or image recovery using the CS technique. A new, fast heuristic is proposed to directly minimise a continuous function of the l0 norm. This swarm-based stochastic method provides better sparse solutions for highly under-sampled and over-sampled cases, even in the presence of noise, with small error and lower time complexity, compared with several well-known competing approaches. The evaluation methodology includes different parameters of the l0-heuristic and is based on measuring recovery error and execution time under various sparsity levels, sample sizes, sampling matrices and transform domains. The mathematical background of CS, including the key aspects of sparsity and incoherence in measurements, is also provided, together with applications and further open research questions, such as weakly sparse signals in noisy environments.
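The thesis's exact objective is not reproduced here, but a standard continuous surrogate of the l0 norm, of the Gaussian type used in the smoothed-l0 literature, looks as follows; the swarm heuristic would then minimise such a function subject to the measurement constraints.

import numpy as np

def smoothed_l0(x, sigma):
    # tends to the number of nonzeros of x as sigma -> 0
    return np.sum(1.0 - np.exp(-x ** 2 / (2.0 * sigma ** 2)))

x = np.array([0.0, 0.0, 1.3, -0.7, 0.0])
print(smoothed_l0(x, 0.01))                  # ~2.0, the l0 norm of x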
APA, Harvard, Vancouver, ISO, and other styles
34

Thompson, Andrew J. "Quantitative analysis of algorithms for compressed signal recovery." Thesis, University of Edinburgh, 2013. http://hdl.handle.net/1842/9603.

Full text
Abstract:
Compressed Sensing (CS) is an emerging paradigm in which signals are recovered from undersampled nonadaptive linear measurements taken at a rate proportional to the signal's true information content as opposed to its ambient dimension. The resulting problem consists in finding a sparse solution to an underdetermined system of linear equations. It has now been established, both theoretically and empirically, that certain optimization algorithms are able to solve such problems. Iterative Hard Thresholding (IHT) (Blumensath and Davies, 2007), which is the focus of this thesis, is an established CS recovery algorithm which is known to be effective in practice, both in terms of recovery performance and computational efficiency. However, theoretical analysis of IHT to date suffers from two drawbacks: state-of-the-art worst-case recovery conditions have not yet been quantified in terms of the sparsity/undersampling trade-off, and also there is a need for average-case analysis in order to understand the behaviour of the algorithm in practice. In this thesis, we present a new recovery analysis of IHT, which considers the fixed points of the algorithm. In the context of arbitrary matrices, we derive a condition guaranteeing convergence of IHT to a fixed point, and a condition guaranteeing that all fixed points are 'close' to the underlying signal. If both conditions are satisfied, signal recovery is therefore guaranteed. Next, we analyse these conditions in the case of Gaussian measurement matrices, exploiting the realistic average-case assumption that the underlying signal and measurement matrix are independent. We obtain asymptotic phase transitions in a proportional-dimensional framework, quantifying the sparsity/undersampling trade-off for which recovery is guaranteed. By generalizing the notion of fixed points, we extend our analysis to the variable stepsize Normalised IHT (NIHT) (Blumensath and Davies, 2010). For both stepsize schemes, comparison with previous results within this framework shows a substantial quantitative improvement. We also extend our analysis to a related algorithm which exploits the assumption that the underlying signal exhibits tree-structured sparsity in a wavelet basis (Baraniuk et al., 2010). We obtain recovery conditions for Gaussian matrices in a simplified proportional-dimensional asymptotic, deriving bounds on the oversampling rate relative to the sparsity for which recovery is guaranteed. Our results, which are the first in the phase transition framework for tree-based CS, show a further significant improvement over results for the standard sparsity model. We also propose a dynamic programming algorithm which is guaranteed to compute an exact tree projection in low-order polynomial time.
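IHT itself is only a few lines: a gradient step on the least-squares misfit followed by hard thresholding to the k largest entries; the fixed points studied in the thesis are the fixed points of exactly this map. A minimal sketch, where the conservative constant step size is a simplification of the stepsize schemes analysed:

import numpy as np

def iht(A, y, k, iters=300, mu=None):
    # Iterative Hard Thresholding: x <- H_k(x + mu * A^T (y - A x))
    if mu is None:
        mu = 1.0 / np.linalg.norm(A, 2) ** 2   # conservative fixed step size
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x + mu * A.T @ (y - A @ x)         # gradient step
        idx = np.argpartition(np.abs(g), -k)[-k:]
        x = np.zeros_like(g)
        x[idx] = g[idx]                        # keep only the k largest entries
    return x

rng = np.random.default_rng(0)
m, n, k = 100, 300, 10
A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n); x0[:k] = rng.standard_normal(k)
print(np.linalg.norm(iht(A, A @ x0, k) - x0))  # near zero when recovery succeeds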
APA, Harvard, Vancouver, ISO, and other styles
35

Jaroń, Piotr, and Mateusz Kucharczyk. "Vision System Prototype for UAV Positioning and Sparse Obstacle Detection." Thesis, Blekinge Tekniska Högskola, Sektionen för ingenjörsvetenskap, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4663.

Full text
Abstract:
For the last few years, computer vision, due to its low exploitation cost and great capabilities, has been experiencing rapid growth. One of the research fields that benefits from it the most is aircraft positioning and collision avoidance. Light cameras with low energy consumption are an ideal solution for UAV (Unmanned Aerial Vehicle) navigation systems. With the new Swedish law, unique in Europe, that allows civil usage of UAVs flying at altitudes up to 120 meters, the need for reliable and cheap positioning systems became even more dire. In this thesis, two possible solutions for the positioning problem and one for collision avoidance were proposed and analyzed. The possibility of tracking the vehicle's position both from the ground and from the air was investigated. A camera setup for successful positioning and collision avoidance systems was defined, and preliminary results on the systems' performance were presented.
Vision systems are employed more and more often in the navigation of ground and air robots. Their greatest advantages are their low cost compared to other sensors, their ability to capture a large portion of the environment very quickly in one image frame, and their light weight, which is a great advantage for air drone navigation systems. In the thesis, the problem of UAV (Unmanned Aerial Vehicle) positioning is considered. Two different issues are tackled. The first is determining the vehicle's position using one down-facing or two front-facing cameras, and the other is sparse obstacle detection. Additionally, the camera calibration process and the camera setup for navigation are discussed. Error causes and types are analyzed.
APA, Harvard, Vancouver, ISO, and other styles
36

Shaban, Fahad. "Application of L1 reconstruction of sparse signals to ambiguity resolution in radar." Thesis, Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/47637.

Full text
Abstract:
The objective of the proposed research is to develop a new algorithm for range and Doppler ambiguity resolution in radar detection data using L1 minimization methods for sparse signals and to investigate the properties of such techniques. This novel approach to ambiguity resolution makes use of the sparse measurement structure of the post-detection data in multiple pulse repetition frequency radars and the resulting equivalence of the computationally intractable L0 minimization and the surrogate L1 minimization methods. The ambiguity resolution problem is cast as a linear system of equations which is then solved for the unique sparse solution in the absence of errors. It is shown that the new technique successfully resolves range and Doppler ambiguities and the recovery is exact in the ideal case of no errors in the system. The behavior of the technique is then investigated in the presence of real world data errors encountered in radar measurement and detection process. Examples of such errors include blind zone effects, collisions, false alarms and missed detections. It is shown that the mathematical model consisting of a linear system of equations developed for the ideal case can be adjusted to account for data errors. Empirical results show that the L1 minimization approach also works well in the presence of errors with minor extensions to the algorithm. Several examples are presented to demonstrate the successful implementation of the new technique for range and Doppler ambiguity resolution in pulse Doppler radars.
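The linear-system formulation can be sketched for the range-only case: each PRF folds the true range profile modulo its number of ambiguous bins, the folded detections are stacked into one underdetermined system, and a sparse nonnegative solution is sought by L1 minimization (here a linear program). All sizes and PRF counts below are illustrative, not the thesis's model:

import numpy as np
from scipy.optimize import linprog

n = 120                                       # unambiguous range bins
prfs = [7, 11, 13]                            # ambiguous bin counts per PRF
blocks = []
for N in prfs:
    M = np.zeros((N, n))
    M[np.arange(n) % N, np.arange(n)] = 1.0   # folding: bin i aliases to i mod N
    blocks.append(M)
A = np.vstack(blocks)
x_true = np.zeros(n); x_true[[17, 64, 101]] = [1.0, 0.8, 1.2]
y = A @ x_true                                # folded detections, no errors
# min 1^T x  s.t.  A x = y, x >= 0  (with nonnegative amplitudes, L1 is linear)
res = linprog(np.ones(n), A_eq=A, b_eq=y, bounds=(0, None), method="highs")
print(np.flatnonzero(res.x > 1e-6))           # the true bins [17, 64, 101] when
                                              # the PRF set disambiguates them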
APA, Harvard, Vancouver, ISO, and other styles
37

Hájek, Vojtěch. "Restaurace signálu s omezenou okamžitou hodnotou pro vícekanálový audio signál." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2019. http://www.nusl.cz/ntk/nusl-401987.

Full text
Abstract:
This master’s thesis deals with the restoration of clipped multichannel audio signals based on sparse representations. First, a general theory of clipping and theory of sparse representations of audio signals is described. A short overview of existing restoration methods is part of this thesis as well. Subsequently, two declipping algorithms are introduced and are also implemented in the Matlab environment as a part of the thesis. The first one, SPADE, is considered a state-of-the-art method for mono audio signals declipping and the second one, CASCADE, which is derived from SPADE, is designed for the restoration of multichannel signals. In the last part of the thesis, both algorithms are tested and the results are compared using the objective measures SDR and PEAQ, and also using the subjective listening test MUSHRA.
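The clipping model and the consistency constraints that SPADE-type declippers enforce fit in a few lines (arbitrary test signal; the sparsity-promoting part of the algorithms is omitted):

import numpy as np

theta = 0.6                                   # clipping level
x = np.sin(2 * np.pi * np.arange(256) / 32)   # clean signal, unknown in practice
y = np.clip(x, -theta, theta)                 # observed hard-clipped signal
reliable = np.abs(y) < theta                  # samples that survived clipping
# Any declipped estimate x_hat must satisfy:
#   x_hat[reliable] == y[reliable]
#   x_hat[y >= theta] >= theta   and   x_hat[y <= -theta] <= -theta
# SPADE/CASCADE then search for the sparsest time-frequency representation
# consistent with these constraints.
print(np.count_nonzero(~reliable), "of", y.size, "samples clipped")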
APA, Harvard, Vancouver, ISO, and other styles
38

Axer, Steffen [Verfasser], and Bernhard [Akademischer Betreuer] Friedrich. "Estimating Traffic Signal States by Exploiting Sparse Low-Frequency Floating Car Data / Steffen Axer ; Betreuer: Bernhard Friedrich." Braunschweig : Technische Universität Braunschweig, 2017. http://d-nb.info/1175817023/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Axer, Steffen [Verfasser], and Bernhard [Akademischer Betreuer] Friedrich. "Estimating Traffic Signal States by Exploiting Sparse Low-Frequency Floating Car Data / Steffen Axer ; Betreuer: Bernhard Friedrich." Braunschweig : Technische Universität Braunschweig, 2017. http://nbn-resolving.de/urn:nbn:de:gbv:084-2017083013170.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Balavoine, Aurele. "Mathematical analysis of a dynamical system for sparse recovery." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/51882.

Full text
Abstract:
This thesis presents the mathematical analysis of a continuous-time system for sparse signal recovery. Sparse recovery arises in Compressed Sensing (CS), where signals of large dimension must be recovered from a small number of linear measurements, and can be accomplished by solving a complex optimization program. While many solvers have been proposed and analyzed to solve such programs digitally, their high complexity currently prevents their use in real-time applications. On the contrary, a continuous-time neural network implemented in analog VLSI could lead to significant gains in both time and power consumption. The contributions of this thesis are threefold. First, convergence results for neural networks that solve a large class of nonsmooth optimization programs are presented. These results extend previous analysis by allowing the interconnection matrix to be singular and the activation function to have many constant regions and grow unbounded. The exponential convergence rate of the networks is demonstrated and an analytic expression for the convergence speed is given. Second, these results are specialized to the L1-minimization problem, which is the most famous approach to solving the sparse recovery problem. The analysis relies on standard techniques in CS and proves that the network takes an efficient path toward the solution for parameters that match results obtained for digital solvers. Third, the convergence rate and accuracy of both the continuous-time system and its discrete-time equivalent are derived in the case where the underlying sparse signal is time-varying and the measurements are streaming. Such a study is of great interest for practical applications that need to operate in real time, when the data are streaming at high rates or the computational resources are limited. In conclusion, while existing analysis concentrated on discrete-time algorithms for the recovery of static signals, this thesis provides convergence rate and accuracy results for the recovery of static signals using a continuous-time solver, and for the recovery of time-varying signals with both a discrete-time and a continuous-time solver.
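The specific network is not reproduced here, but soft-thresholding dynamics of the locally competitive (LCA) type belong to the class analysed, and a forward-Euler simulation shows the idea; step size and parameters are illustrative.

import numpy as np

def soft(u, lam):
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def lca(A, y, lam, dt=0.05, steps=2000):
    # Euler integration of  u' = -u + A^T y - (A^T A - I) soft(u, lam)
    n = A.shape[1]
    G = A.T @ A - np.eye(n)                  # lateral inhibition between nodes
    b = A.T @ y                              # constant feed-forward drive
    u = np.zeros(n)
    for _ in range(steps):
        u += dt * (-u + b - G @ soft(u, lam))
    return soft(u, lam)                      # settles at an L1-minimization solution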
APA, Harvard, Vancouver, ISO, and other styles
41

Lindahl, Fred. "Detection of Sparse and Weak Effects in High-Dimensional Supervised Learning Problems, Applied to Human Microbiome Data." Thesis, KTH, Matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-288503.

Full text
Abstract:
This project studies the signal detection and identification problem in high-dimensional noisy data and the possibility of using it on microbiome data. An extensive simulation study was performed on generated data as well as on a microbiome dataset collected from patients with Parkinson's disease, using Donoho and Jin's Higher Criticism, Jager and Wellner's phi-divergence-based goodness-of-fit tests, and Stepanova and Pavlenko's CsCsHM statistic. We present some novel approaches based on established theory that perform better than existing methods and show that it is possible to use the signal identification framework to detect differentially abundant features in microbiome data. Although the novel approaches produce good results, they lack substantial mathematical foundations and should be avoided if theoretical rigour is needed. We also conclude that while we have found it possible to use signal identification methods to find abundant features in microbiome data, further refinement is necessary before they can be properly used in research.
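Of the statistics used, Higher Criticism is the quickest to write down: it scans the sorted p-values for the largest standardized exceedance over the uniform null. A minimal sketch, where searching over the smallest half of the p-values is the usual convention:

import numpy as np

def higher_criticism(pvals, alpha0=0.5):
    # Donoho-Jin HC*: large values indicate sparse, weak signals are present
    n = len(pvals)
    p = np.sort(pvals)
    i = np.arange(1, n + 1)
    hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1 - p) + 1e-12)
    return np.max(hc[: int(alpha0 * n)])

rng = np.random.default_rng(0)
null = rng.uniform(size=10_000)                        # global null: all U(0,1)
mixed = null.copy()
mixed[:50] = rng.uniform(0.0, 1e-3, size=50)           # a few genuine effects
print(higher_criticism(null), higher_criticism(mixed)) # the latter is far larger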
APA, Harvard, Vancouver, ISO, and other styles
42

Andersson, Viktor. "Semantic Segmentation : Using Convolutional Neural Networks and Sparse dictionaries." Thesis, Linköpings universitet, Datorseende, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-139367.

Full text
Abstract:
The two main bottlenecks in using deep neural networks are data dependency and training time. This thesis proposes a novel method for weight initialization of the convolutional layers in a convolutional neural network, introducing the usage of sparse dictionaries. A sparse dictionary optimized on domain-specific data can be seen as a set of intelligent feature-extracting filters. This thesis investigates the effect of using such filters as kernels in the convolutional layers of the neural network: how do they affect the training time and final performance? The dataset used here is the Cityscapes dataset, a library of 25000 labeled road-scene images. The sparse dictionary was acquired using the K-SVD method. The filters were added to two different networks whose performance was tested individually; one of the architectures is much deeper than the other. The results have been presented for both networks and show that filter initialization is an important aspect which should be taken into consideration when training deep networks for semantic segmentation.
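K-SVD itself is not shipped with scikit-learn, but its mini-batch dictionary learner is a close stand-in for sketching the initialization idea: learn atoms on domain patches and drop them into the first convolutional layer's kernel tensor. Everything below (the random stand-in images, all sizes) is illustrative:

import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

rng = np.random.default_rng(0)
images = rng.random((10, 64, 64))      # stand-in for domain images (e.g. road scenes)
patches = np.concatenate([extract_patches_2d(im, (7, 7), max_patches=500)
                          for im in images]).reshape(-1, 49)
patches -= patches.mean(axis=1, keepdims=True)           # remove the patch DC
dico = MiniBatchDictionaryLearning(n_components=64).fit(patches)
conv1_weights = dico.components_.reshape(64, 1, 7, 7)    # kernels for a 7x7 conv layer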
APA, Harvard, Vancouver, ISO, and other styles
43

Šiška, Jakub. "Restaurace zvukových signálů poškozených kvantizací." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2020. http://www.nusl.cz/ntk/nusl-413249.

Full text
Abstract:
This master’s thesis deals with the restoration of audio signals damaged by quantization. The theoretical part starts with a description of quantization and dequantization in general; a few existing methods for the dequantization of audio signals and the theory of sparse representations of signals are also presented. The next part introduces algorithms suitable for dequantization, specifically Douglas–Rachford, Chambolle–Pock and SPADEQ, and the following chapter describes the implementation of these algorithms in a MATLAB application. In the last part of the thesis, the reconstructed signals are tested and the results are evaluated by the objective measures SDR, PEMO-Q and PEAQ, and by the subjective listening test MUSHRA.
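The quantization-consistency constraint that the Douglas–Rachford and Chambolle–Pock formulations enforce has a one-line projection; a sketch under a mid-tread uniform quantizer assumption:

import numpy as np

delta = 0.25                                  # quantization step
x = np.sin(2 * np.pi * np.arange(256) / 50)   # clean signal, unknown in practice
q = delta * np.round(x / delta)               # observed mid-tread quantized signal
# Consistency: any estimate must fall back into the observed quantization cells.
project = lambda z: np.clip(z, q - delta / 2, q + delta / 2)
# The algorithms alternate this projection with a sparsity-promoting step
# on the time-frequency coefficients of the estimate.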
APA, Harvard, Vancouver, ISO, and other styles
44

Barbier, Jean. "Statistical physics and approximate message-passing algorithms for sparse linear estimation problems in signal processing and coding theory." Sorbonne Paris Cité, 2015. http://www.theses.fr/2015USPCC130.

Full text
Abstract:
This thesis is interested in the application of statistical physics methods and inference to signal processing and coding theory, more precisely, to sparse linear estimation problems. The main tools are essentially graphical models and the approximate message-passing algorithm, together with the cavity method (referred to as the state evolution analysis in the signal processing context) for its theoretical analysis. We will also use the replica method of the statistical physics of disordered systems, which allows us to associate to the studied problems a cost function referred to as the potential, or free entropy, in physics. It allows us to predict the different phases of typical complexity of the problem as a function of external parameters such as the noise level or the number of measurements one has about the signal: the inference can be typically easy, hard or impossible. We will see that the hard phase corresponds to a regime of coexistence of the actual solution together with another unwanted solution of the message-passing equations. In this phase, the unwanted solution is a metastable state and not the true equilibrium solution. This phenomenon can be linked to supercooled water blocked in the liquid state below its freezing critical temperature. Thanks to this understanding of the blocking phenomenon of the algorithm, we will use a method that overcomes the metastability by mimicking the strategy adopted by nature itself for supercooled water: nucleation and spatial coupling. In supercooled water, a weak localized perturbation is enough to create a crystal nucleus that will propagate through the whole medium thanks to the physical couplings between nearby atoms. The same process will help the algorithm to find the signal, thanks to the introduction of a nucleus containing local information about the signal. It will then spread as a "reconstruction wave" similar to the crystal in the water. After an introduction to statistical inference and sparse linear estimation, we will introduce the necessary tools. Then we will move to applications of these notions. They will be divided into two parts. The signal processing part will focus essentially on the compressed sensing problem, where we seek to infer a sparse signal from a small number of linear projections of it that can be noisy. We will study in detail the influence of structured operators instead of the purely random ones used originally in compressed sensing. These allow a substantial gain in computational complexity and necessary memory allocation, which are necessary conditions in order to work with very large signals. We will see that the combined use of such operators with spatial coupling allows the implementation of a highly optimized algorithm able to reach near-optimal performance. We will also study the algorithm's behavior for the reconstruction of approximately sparse signals, a fundamental question for the application of compressed sensing to real-life problems. A direct application will be studied via the reconstruction of images measured by fluorescence microscopy. The reconstruction of "natural" images will be considered as well. In coding theory, we will look at the message-passing decoding performance for two distinct real noisy channel models. A first scheme, where the signal to infer is the noise itself, will be presented. The second one, sparse superposition codes for the additive white Gaussian noise channel, is the first example of an error correction scheme directly interpreted as a structured compressed sensing problem. Here we will apply all the tools developed in this thesis, finally obtaining a very promising decoder that allows decoding at very high transmission rates, very close to the fundamental channel limit.
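The approximate message-passing iteration at the core of this work is compact: a matched-filter step, a scalar soft-threshold denoiser, and the Onsager correction that distinguishes AMP from naive iterative thresholding. A minimal sketch with a simple threshold schedule, not the seeded/spatially coupled variant:

import numpy as np

def amp(A, y, iters=30):
    m, n = A.shape
    x, z = np.zeros(n), y.copy()
    for _ in range(iters):
        r = x + A.T @ z                      # effective AWGN observation of x
        tau = np.sqrt(np.mean(z ** 2))       # empirical noise-level estimate
        x = np.sign(r) * np.maximum(np.abs(r) - tau, 0.0)
        z = y - A @ x + (z / m) * np.count_nonzero(x)   # Onsager correction
    return x

rng = np.random.default_rng(0)
m, n, k = 250, 500, 20
A = rng.standard_normal((m, n)) / np.sqrt(m)  # i.i.d. operator, as in basic CS
x0 = np.zeros(n); x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
print(np.linalg.norm(amp(A, A @ x0) - x0))    # small when AMP converges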
APA, Harvard, Vancouver, ISO, and other styles
45

bi, xiaofei. "Compressed Sampling for High Frequency Receivers Applications." Thesis, Högskolan i Gävle, Avdelningen för elektronik, matematik och naturvetenskap, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-10877.

Full text
Abstract:
In the digital signal processing field, the Shannon sampling theorem must be satisfied in traditional signal sampling in order to recover the signal without distortion. However, in some practical applications this becomes an obstacle, because storage and transmission costs increase dramatically with the sampling frequency. Therefore, how to reduce the number of samples taken in analog-to-digital conversion (ADC) for wideband signals, and how to compress large data volumes effectively, have become major subjects of study. Recently, a novel technique, so-called "compressed sampling" (CS), has been proposed to solve this problem. This method captures and represents compressible signals at a sampling rate significantly lower than the Nyquist rate.   This paper not only surveys the theory of compressed sampling, but also simulates CS in Matlab. The error between the recovered signal and the original signal in simulation is around -200 dB. Attempts were also made to apply CS in practice; the error between the recovered signal and the original one in experiment is around -40 dB, which means that CS is realized to a certain extent. Furthermore, some related applications and suggestions for further work are discussed.
APA, Harvard, Vancouver, ISO, and other styles
46

Srinivasa, Christopher. "Graph Theory for the Discovery of Non-Parametric Audio Objects." Thèse, Université d'Ottawa / University of Ottawa, 2011. http://hdl.handle.net/10393/20126.

Full text
Abstract:
A novel framework based on cluster co-occurrence and graph theory for structure discovery is applied to audio to find new types of audio objects which enable the compression of an input signal. These new objects differ from those found in current object coding schemes, as their shape is not restricted by any a priori psychoacoustic knowledge. The framework is novel from an application perspective, as it marks the first time that graph theory is applied to audio, and with regard to theoretical developments, as it involves new extensions to the areas of unsupervised learning algorithms and frequent subgraph mining methods. Tests are performed using a corpus of audio files spanning a wide range of sounds. Results show that the framework discovers new types of audio objects which yield average overall and relative compression gains of 15.90% and 23.53%, respectively, while maintaining very good average audio quality with imperceptible changes.
APA, Harvard, Vancouver, ISO, and other styles
47

Wahl, Joel. "Image inpainting using sparse reconstruction methods with applications to the processing of dislocations in digital holography." Thesis, Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-63984.

Full text
Abstract:
This report is a master's thesis, written by an engineering physics and electrical engineering student at Luleå University of Technology. The aim of this project was to remove dislocations from wrapped phase maps using sparse reconstructive methods. Dislocations are errors that can appear in phase maps due to improper filtering or inadequate sampling, and they make it impossible to correctly unwrap the phase map. The report contains a mathematical description of a sparse reconstructive method. The method is based on KSVDbox, which was created by R. Rubinstein and is free to download and use. KSVDbox is a MATLAB implementation of the K-SVD dictionary learning algorithm with Orthogonal Matching Pursuit, together with a sparse reconstruction algorithm. A guide for adapting the toolbox for inpainting is included, with a couple of examples on natural images which support the suggested adaptation. For experimental purposes, a set of simulated wrapped phase maps with and without dislocations was created. These simulated phase maps are based on work by P. Picart, and the MATLAB implementation used to generate the test images can be found in the appendix of this report, so that anyone interested can easily reproduce them. Finally, the report outlines five experiments designed to test KSVDbox for the processing of dislocations, each using a different dictionary. The experiments perform inpainting with: 1. A dictionary based on the Discrete Cosine Transform. 2. An adaptive dictionary, where the dictionary learning algorithm has been shown what the area in the phase map that was damaged by dislocations should look like. 3. An adaptive dictionary, where the dictionary learning algorithm has been allowed to train on the damaged phase map itself, such that areas with dislocations are ignored. 4. An adaptive dictionary, where training is done on a separate image designed to contain general phase patterns. 5. An adaptive dictionary, resulting from concatenating the dictionaries used in experiments 3 and 4. The first three experiments are complemented with experiments on a natural image for comparison purposes. The results show that sparse reconstructive methods, under the scheme used in this work, are unsuitable for the processing of dislocations in phase maps. This is most likely because the reconstructive method has difficulty achieving a high-contrast reconstruction, and there is nothing in the algorithm that causes the inpainting from any one direction to match the inpainting from other directions.
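The DCT-dictionary experiment (experiment 1 above) reduces to a small computation per patch: keep the dictionary rows corresponding to observed pixels, run OMP on them, and evaluate the resulting sparse code at every pixel. A sketch on a synthetic patch, with all sizes illustrative:

import numpy as np
from scipy.fftpack import dct
from sklearn.linear_model import orthogonal_mp

p = 8
D1 = dct(np.eye(p), norm="ortho")                 # 1-D DCT basis
D = np.kron(D1, D1)                               # 64 x 64 separable 2-D DCT dictionary
rng = np.random.default_rng(0)
xx, yy = np.meshgrid(np.arange(p), np.arange(p))
patch = np.cos(0.4 * xx) + 0.1 * yy               # smooth synthetic patch
mask = rng.uniform(size=p * p) > 0.3              # ~30% of pixels treated as missing
coef = orthogonal_mp(D[mask], patch.ravel()[mask], n_nonzero_coefs=10)
filled = (D @ coef).reshape(p, p)                 # inpainted estimate of the patch
print(np.abs(filled - patch).max())               # small where the DCT model fits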
APA, Harvard, Vancouver, ISO, and other styles
48

Chan, wai tim Stefen. "Apprentissage supervisé d’une représentation multi-couches à base de dictionnaires pour la classification d’images et de vidéos." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAT089/document.

Full text
Abstract:
In recent years, numerous works have been published on dictionary learning and sparse coding. They were initially used in image reconstruction and image restoration tasks. More recently, researchers became interested in the use of dictionaries for classification tasks because of their capability to represent underlying patterns in images. Good results have been obtained under specific conditions: centered objects of interest, homogeneous sizes and points of view. However, without these constraints, performance drops. In this thesis, we are interested in finding good dictionaries for classification. The learning methods classically used for dictionaries rely on unsupervised learning; here, we study how to perform supervised dictionary learning. In order to push the performance further, we introduce a multilayer architecture for dictionaries. The proposed architecture is based on the local description of an input image and its transformation through a succession of encoding and processing steps, and it outputs a vector of features effective for classification. The learning method we developed is based on the backpropagation algorithm, which allows a joint learning of the different dictionaries and an optimization solely with respect to the classification cost. The proposed architecture has been tested on the MNIST, CIFAR-10 and STL-10 datasets with good results compared to other dictionary-based methods, and it can be extended to video analysis.
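The supervised multilayer idea can be caricatured in a few lines of PyTorch: each layer applies a learnable dictionary followed by soft-thresholding, and backpropagating a classification loss trains all dictionaries jointly against that single cost. The actual architecture includes local description and further processing steps; this sketch only shows the trainable encode-threshold layers, with all sizes illustrative.

import torch
import torch.nn as nn

def soft_threshold(z, theta):
    return torch.sign(z) * torch.relu(torch.abs(z) - theta)

class DictLayer(nn.Module):
    # One encoding layer: analysis with a learnable dictionary, then thresholding.
    def __init__(self, in_dim, n_atoms):
        super().__init__()
        self.D = nn.Linear(in_dim, n_atoms, bias=False)
        self.theta = nn.Parameter(torch.tensor(0.1))
    def forward(self, x):
        return soft_threshold(self.D(x), self.theta)

model = nn.Sequential(DictLayer(64, 128), DictLayer(128, 256), nn.Linear(256, 10))
logits = model(torch.randn(32, 64))          # a batch of 32 local descriptors
loss = nn.functional.cross_entropy(logits, torch.randint(10, (32,)))
loss.backward()                              # gradients reach every dictionary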
APA, Harvard, Vancouver, ISO, and other styles
49

Su, Hai. "Nuclei/Cell Detection in Microscopic Skeletal Muscle Fiber Images and Histopathological Brain Tumor Images Using Sparse Optimizations." UKnowledge, 2014. http://uknowledge.uky.edu/cs_etds/24.

Full text
Abstract:
Nuclei/cell detection is usually a prerequisite procedure in many computer-aided biomedical image analysis tasks. In this thesis we propose two automatic nuclei/cell detection frameworks, one for nuclei detection in skeletal muscle fiber images and the other for brain tumor histopathological images. For skeletal muscle fiber images, the major challenges include: i) shape and size variations of the nuclei, ii) overlapping nuclear clumps, and iii) a series of z-stack images with out-of-focus regions. We propose a novel automatic detection algorithm consisting of the following components: 1) The original z-stack images are first converted into one all-in-focus image. 2) A sufficient number of hypothetical ellipses are then generated for each nuclear contour. 3) Next, a set of representative training samples and discriminative features are selected by a two-stage sparse model. 4) A classifier is trained using the refined training data. 5) Final nuclei detection is obtained by mean-shift clustering based on inner distance. The proposed method was tested on a set of images containing over 1500 nuclei, and the results outperform the current state-of-the-art approaches. For brain tumor histopathological images, the major challenges are to handle significant variations in cell appearance and to split touching cells. The proposed automatic cell detection consists of: 1) sparse reconstruction for splitting touching cells, and 2) adaptive dictionary learning for handling cell appearance variations. The proposed method was extensively tested on a data set with over 2000 cells, and the results outperform other state-of-the-art algorithms with an F1 score of 0.96.
APA, Harvard, Vancouver, ISO, and other styles
50

Benaddi, Tarik. "Sparse graph-based coding schemes for continuous phase modulations." PhD thesis, Toulouse, INPT, 2015. http://oatao.univ-toulouse.fr/16037/1/Benaddi_Tarik.pdf.

Full text
Abstract:
The use of continuous phase modulation (CPM) is interesting when the channel exhibits a strong non-linearity and when the spectral support is limited; particularly for the uplink, where the satellite carries one amplifier per carrier, and for downlinks, where the terminal equipment operates very close to the saturation region. Numerous studies have been conducted on this issue, but the proposed solutions use iterative CPM demodulation/decoding concatenated with convolutional or block error-correcting codes. The use of LDPC codes has not yet been introduced; in particular, no work, to our knowledge, has been done on the optimization of sparse graph-based codes adapted to the context described here. In this study, we propose to perform the asymptotic analysis and the design of turbo-CPM systems based on the optimization of sparse graph-based codes. Moreover, an analysis of the corresponding receiver is carried out.
APA, Harvard, Vancouver, ISO, and other styles