Dissertations / Theses on the topic 'Structured Sparse Signal Estimation'

Consult the top 23 dissertations / theses for your research on the topic 'Structured Sparse Signal Estimation.'


1

Meriaux, Bruno. "Contributions aux traitements robustes pour les systèmes multi-capteurs." Electronic Thesis or Diss., université Paris-Saclay, 2020. http://www.theses.fr/2020UPASG009.

Abstract:
One of the objectives of statistical signal processing is the extraction of useful information from a set of data and a statistical model. For example, most methods for detecting/localizing targets in radar require an estimate of the covariance matrix of the data. With the emergence of high-resolution systems, the Gaussian model is no longer suitable and leads to performance degradation. In addition, prior information, such as the structure of the covariance matrix, can be obtained from a preliminary study of the system; taking it into account improves the performance of the processing methods. First, we introduce new robust structured estimators of the covariance matrix, based on the family of elliptical distributions and the class of M-estimators. We analyze their asymptotic performance and conduct a sensitivity analysis that accounts for possible mismatches in the statistical model. Second, we propose a reformulation of the target detection problem using sparse subspace clustering techniques. We then study some theoretical properties of the resulting optimization problem and apply this methodology to a scenario of target detection in the presence of jammers.
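As a concrete instance of the M-estimator class for elliptical distributions discussed here, the sketch below implements Tyler's M-estimator of scatter via its classical fixed-point iteration. It is our own illustrative example (function name and defaults are arbitrary), not code from the thesis, and this unstructured iteration omits the structural constraints the thesis imposes.

```python
import numpy as np

def tyler_estimator(X, n_iter=50, tol=1e-6):
    """Tyler's M-estimator of scatter via fixed-point iteration.

    X : (n, p) array of n samples from an elliptical distribution.
    Returns a (p, p) scatter matrix, normalized to trace p.
    """
    n, p = X.shape
    sigma = np.eye(p)
    for _ in range(n_iter):
        inv = np.linalg.inv(sigma)
        # Per-sample weights 1 / (x_i^T Sigma^{-1} x_i)
        w = 1.0 / np.einsum('ij,jk,ik->i', X, inv, X)
        sigma_new = (p / n) * (X * w[:, None]).T @ X
        sigma_new *= p / np.trace(sigma_new)  # fix the scale ambiguity
        if np.linalg.norm(sigma_new - sigma, 'fro') < tol:
            sigma = sigma_new
            break
        sigma = sigma_new
    return sigma
```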
2

Zachariah, Dave. "Estimation for Sensor Fusion and Sparse Signal Processing." Doctoral thesis, KTH, Signalbehandling, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-121283.

Abstract:
Progressive developments in computing and sensor technologies during the past decades have enabled the formulation of increasingly advanced problems in statistical inference and signal processing. The thesis is concerned with statistical estimation methods, and is divided into three parts with a focus on two different areas: sensor fusion and sparse signal processing. The first part introduces the well-established Bayesian, Fisherian and least-squares estimation frameworks, and derives new estimators. Specifically, the Bayesian framework is applied to two different classes of estimation problems: scenarios in which (i) the signal covariances themselves are subject to uncertainties, and (ii) distance bounds are used as side information. Applications include localization, tracking and channel estimation. The second part is concerned with the extraction of useful information from multiple sensors by exploiting their joint properties. Two sensor configurations are considered: (i) a monocular camera and an inertial measurement unit, and (ii) an array of passive receivers. New estimators are developed with applications that include inertial navigation, source localization and multiple waveform estimation. The third part is concerned with signals that have sparse representations. Two problems are considered: (i) spectral estimation of signals with power concentrated in a small number of frequencies, and (ii) estimation of sparse signals observed through only a few samples, including scenarios in which the measurements are linearly underdetermined. New estimators are developed with applications that include spectral analysis, magnetic resonance imaging and array processing.


3

Koep, Niklas. "Quantized compressive sampling for structured signal estimation." Doctoral thesis, supervised by Rudolf Mathar and Holger Rauhut. Aachen: Universitätsbibliothek der RWTH Aachen, 2019. http://d-nb.info/1195446799/34.

4

Farouj, Younes. "Structured anisotropic sparsity priors for non-parametric function estimation." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSEI123/document.

Abstract:
The problem of estimating a multivariate function from corrupted observations arises throughout many areas of engineering. In medical signal and image processing in particular, this task has attracted special attention and even triggered new concepts that have found applications in many other fields, mainly because medical data analysis is often carried out in challenging conditions: one has to deal with noise, low contrast and undesirable transformations introduced by the acquisition systems. On the other hand, the concept of sparsity has had a tremendous impact on data reconstruction and restoration over the last two decades. Sparsity stipulates that some signals and images have representations involving only a few non-zero coefficients, which turns out to hold in many practical problems. This dissertation introduces new constructions of sparsity priors for wavelets and total variation. These constructions harness a notion of generalized anisotropy that groups variables into subsets with similar behaviour; this behaviour can be related to the regularity of the unknown function, the physical meaning of the variables, or the observation model. We use these constructions for non-parametric estimation of multivariate functions. In the case of wavelet thresholding, we show the optimality of the procedure over the usual functional spaces before presenting applications to the denoising of image sequences, spectral and hyperspectral data, incompressible flows, and ultrasound images. Afterwards, we study the problem of retrieving activity patterns from functional Magnetic Resonance Imaging data without priors on the timing, durations, or atlas-based spatial structure of the activation. We model this challenge as a spatio-temporal deconvolution problem, propose the corresponding variational formulation involving a space-time structured total-variation prior, and adapt the generalized forward-backward splitting algorithm to solve it.
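As a minimal illustration of wavelet-thresholding estimation (our own sketch, not code from the thesis), the following denoises a 1-D signal by soft-thresholding its wavelet coefficients. The PyWavelets package, the 'db4' wavelet, and the classical universal threshold with a MAD noise estimate are our assumed choices.

```python
import numpy as np
import pywt  # PyWavelets, an assumed dependency

def wavelet_denoise(y, wavelet='db4', level=4):
    """Denoise a 1-D signal by soft-thresholding its wavelet coefficients."""
    coeffs = pywt.wavedec(y, wavelet, level=level)
    # Estimate the noise level from the finest detail coefficients (MAD rule)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(len(y)))  # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode='soft')
                            for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)
```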
5

Barbier, Jean. "Statistical physics and approximate message-passing algorithms for sparse linear estimation problems in signal processing and coding theory." Sorbonne Paris Cité, 2015. http://www.theses.fr/2015USPCC130.

Abstract:
This thesis applies methods from the statistical physics of disordered systems and from inference to signal processing and coding theory, more precisely to sparse linear estimation problems. The main tools are graphical models and the approximate message-passing algorithm, together with the cavity method (referred to as state evolution analysis in the signal processing context) for its theoretical study. We also use the replica method of statistical physics, which associates to the studied problems a cost function referred to as the potential, or free entropy, in physics. It predicts the different phases of typical complexity of the problem as a function of external parameters such as the noise level or the number of measurements one has about the signal: inference can be typically easy, hard, or impossible. We will see that the hard phase corresponds to a regime where the sought solution coexists with another, unwanted solution of the message-passing equations; in this phase the latter is a metastable state, not the true thermodynamic equilibrium. The phenomenon can be linked to supercooled water, blocked in the liquid state below its freezing temperature. Exploiting this understanding of why the algorithm gets blocked, we use a method that overcomes the metastability by mimicking the strategy adopted by nature itself for supercooled water: nucleation and spatial coupling. In supercooled water, a weak localized perturbation is enough to create a crystal nucleus that propagates through the whole medium thanks to the physical couplings between nearby atoms. The same process helps the algorithm find the signal: a nucleus containing local information about the signal is introduced and then spreads as a "reconstruction wave" analogous to the crystal front in water.
After an introduction to statistical inference and sparse linear estimation, we introduce the necessary tools and then turn to applications, divided in two parts. The signal processing part focuses on compressed sensing, where one seeks to infer a sparse signal from a small number of possibly noisy linear projections. We study in detail the influence of structured operators instead of the purely random matrices originally used in compressed sensing. These allow substantial gains in computational complexity and memory allocation, necessary conditions for working with very large signals. We show that the combined use of such operators with spatial coupling yields a highly optimized algorithm reaching near-optimal performance. We also study the behavior of the algorithm on approximately sparse signals, a fundamental question for applying compressed sensing to real-world problems; a direct application is the reconstruction of images measured by fluorescence microscopy, and the reconstruction of "natural" images is considered as well. In coding theory, we study the message-passing decoding performance for two distinct real noisy channel models: a first scheme where the signal to infer is the noise itself, which can then be subtracted from the received signal, and a second one, sparse superposition codes for the additive white Gaussian noise channel, the first example of an error-correction scheme directly interpretable as a structured compressed sensing problem. Here we apply all the tools developed in this thesis to obtain a very promising decoder that operates at transmission rates very close to the fundamental channel limit.
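Since the abstract leans on approximate message passing, a minimal textbook AMP iteration for noisy compressed sensing (y = Ax + w with sparse x) is sketched below. The soft-threshold denoiser, the residual-based threshold rule, and all parameter choices are our assumptions; the spatially coupled variants studied in the thesis are not shown.

```python
import numpy as np

def soft(u, t):
    """Soft-thresholding operator."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def amp(y, A, n_iter=30, alpha=1.5):
    """Basic AMP for y = A x + w with sparse x (soft-threshold denoiser)."""
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(n_iter):
        tau = alpha * np.linalg.norm(z) / np.sqrt(m)  # threshold from residual
        r = x + A.T @ z                               # effective observation
        x_new = soft(r, tau)
        # Onsager correction: (1/delta) * z * <eta'>, with delta = m/n
        onsager = z * (np.count_nonzero(x_new) / m)
        z = y - A @ x_new + onsager
        x = x_new
    return x
```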
6

Cho, Myung. "Convex and non-convex optimizations for recovering structured data: algorithms and analysis." Diss., University of Iowa, 2017. https://ir.uiowa.edu/etd/5922.

Abstract:
Optimization theories and algorithms are used to efficiently find optimal solutions under constraints. In the era of "Big Data", the amount of data is skyrocketing, and this overwhelms conventional techniques for solving large-scale and distributed optimization problems. By taking advantage of structural information in data representations, this thesis offers convex and non-convex optimization solutions to various large-scale problems in super-resolution, sparse signal processing, hypothesis testing, machine learning, and treatment planning for brachytherapy.

Super-resolution: Super-resolution aims to recover a signal expressed as a sum of a few Dirac delta functions in the time domain from measurements in the frequency domain. The challenge is that the possible locations of the delta functions lie in the continuous domain [0,1). To enhance recovery performance, we considered deterministic and probabilistic prior information on the locations of the delta functions and provided novel semidefinite programming formulations under this information. We also proposed block iterative reweighted methods to improve recovery performance without prior information. We further considered phaseless measurements, motivated by applications in optical microscopy and X-ray crystallography. By using the lifting method and introducing squared atomic norm minimization, we can achieve super-resolution using only low-frequency magnitude information. Finally, we proposed non-convex algorithms using structured matrix completion.

Sparse signal processing: L1 minimization is well known for promoting sparse structures in recovered signals. The Null Space Condition (NSC) for L1 minimization is a necessary and sufficient condition on sensing matrices such that a sparse signal can be uniquely recovered via L1 minimization. However, verifying NSC is a non-convex problem and known to be NP-hard. We proposed enumeration-based polynomial-time algorithms to provide performance bounds on NSC, and efficient algorithms to verify NSC precisely using the branch-and-bound method.

Hypothesis testing: Recovering the statistical structure of random variables is important in applications such as cognitive radio. Our goal is to distinguish two different types of random variables among n >> 1 random variables. Distinguishing them via experiments on each random variable one by one takes considerable time and effort, so we proposed hypothesis testing using mixed measurements to reduce sample complexity, and designed efficient algorithms to solve large-scale problems.

Machine learning: When feature data are stored in a tree-structured network with communication delays, quickly finding an optimal solution to the regularized loss minimization is challenging. In this scenario, we studied a communication-efficient stochastic dual coordinate ascent and its convergence analysis.

Treatment planning: In Rotating-Shield Brachytherapy (RSBT) for cancer treatment, there is a compelling need to obtain optimal treatment plans quickly to enable clinical usage. However, due to the degrees of freedom in RSBT, finding an optimal treatment plan is difficult. We designed a first-order dose optimization method based on the alternating direction method of multipliers, and reduced execution time by a factor of about 18 compared to previous work.
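As background for the L1-minimization discussion above, a minimal proximal-gradient (ISTA) solver for the LASSO form min_x 0.5*||Ax - y||^2 + lam*||x||_1 is sketched below. ISTA is a standard algorithm we supply for illustration, not code from the dissertation.

```python
import numpy as np

def ista(A, y, lam, n_iter=200):
    """ISTA for min_x 0.5*||A x - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)              # gradient of the smooth part
        u = x - g / L
        # prox of (lam/L)*||.||_1: soft-thresholding
        x = np.sign(u) * np.maximum(np.abs(u) - lam / L, 0.0)
    return x
```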
7

Lasserre, Marie. "Estimation non-ambigüe de cibles grâce à une représentation parcimonieuse Bayésienne d'un signal radar large bande." Thesis, Toulouse, ISAE, 2017. http://www.theses.fr/2017ESAE0028/document.

Abstract:
The work conducted during this PhD falls within the general context of radar target detection using a non-conventional wideband waveform. More precisely, the use of a low-PRF wideband waveform has been proposed in the past as an alternative to the classical staggered-PRF processing used to mitigate velocity ambiguities, which limits dwell time. Increasing the instantaneous bandwidth improves range resolution; fast-moving targets are then likely to migrate during the coherent processing interval, and this range-velocity coupling can be exploited to resolve velocity ambiguities. This thesis aims at developing, for a low-PRF wideband waveform, processing schemes that account for target migration and can resolve velocity ambiguities in realistic scenarios. The work builds on a sparse-representation algorithm able to unambiguously estimate migrating targets within a Bayesian framework. This algorithm, however, is derived under certain hypotheses and must be robustified for more realistic scenarios. First, the algorithm is robustified to targets that are not aligned with the analysis grid, and then extended to take a possible diffuse clutter component into account. It is also reworked to accurately estimate scenes with a high dynamic range, where strong targets potentially mask weak ones. The resulting algorithms are validated on both synthetic data and experimental data recorded by the PARSAX radar at the Delft University of Technology, The Netherlands.
8

Wirfält, Petter. "Exploiting Prior Information in Parametric Estimation Problems for Multi-Channel Signal Processing Applications." Doctoral thesis, KTH, Signalbehandling, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-134034.

Abstract:
This thesis addresses a number of problems all related to parameter estimation in sensor array processing. The unifying theme is that some of these parameters are known before the measurements are acquired. We thus study how to improve the estimation of the unknown parameters by incorporating the knowledge of the known parameters; exploiting this knowledge successfully has the potential to dramatically improve the accuracy of the estimates. For covariance matrix estimation, we exploit the fact that the true covariance matrix is Kronecker and Toeplitz structured, and devise a method to ascertain that the estimates possess this structure. Additionally, we show that our proposed estimator performs better than the state of the art when the number of samples is low, and that it is also efficient in the sense that the estimates attain the Cramér-Rao lower bound (CRB). In the direction of arrival (DOA) scenario, there are different types of prior information: first, we study the case when the location of some of the emitters in the scene is known. We then turn to cases with additional prior information, i.e. when it is known that some (or all) of the source signals are uncorrelated. As it turns out, knowledge of some DOAs combined with this latter form of prior knowledge is especially beneficial, giving estimators that are dramatically more accurate than the state of the art. We also derive the corresponding CRBs, and show that under quite mild assumptions the estimators are efficient. Finally, we investigate the frequency estimation scenario, where the data is a one-dimensional temporal sequence which we model as a spatial multi-sensor response. The line-frequency estimation problem is studied when some of the frequencies are known; through experimental data we show that our approach can be beneficial. The second frequency estimation paper explores the analysis of pulse spin-locking data sequences, which are encountered in nuclear resonance experiments. By introducing a novel modeling technique for such data, we develop a method for estimating the interesting parameters of the model. The technique is significantly faster than previously available methods, and provides accurate estimation results.
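To make the structured-covariance idea concrete, the toy sketch below (ours, and deliberately simplified: it enforces only the Toeplitz part of the Kronecker-Toeplitz structure, for a real symmetric sample covariance) projects an estimate onto the set of Toeplitz matrices by averaging its diagonals.

```python
import numpy as np

def toeplitz_project(S):
    """Least-squares projection of a real symmetric covariance estimate S
    onto the set of Toeplitz matrices: average each diagonal."""
    p = S.shape[0]
    T = np.zeros_like(S)
    for k in range(p):
        d = np.diagonal(S, offset=k).mean()
        T += d * np.eye(p, k=k)
        if k > 0:
            T += d * np.eye(p, k=-k)  # symmetric counterpart
    return T

# Usage: S = X.T @ X / n from samples X (n, p), then T = toeplitz_project(S)
```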

9

Bousabaa, Sofiane. "Acoustic Green’s Function Estimation using Numerical Simulations and Application to External Aeroacoustic Beamforming." Thesis, Sorbonne université, 2018. http://www.theses.fr/2018SORUS228.

Abstract:
Acoustic imaging techniques aim at characterizing the different sources of noise on an aircraft from microphone-array measurements. These techniques require knowledge of the acoustic Green's function of the medium, which is known analytically only for relatively simple configurations; using approximate Green's functions leads to errors in source identification. The main aim of this thesis is to set up a numerical method for estimating the Green's function for aeroacoustic imaging. The method must have a minimal computational cost and provide an estimate accurate enough to be used on realistic industrial configurations. The proposed methodology takes advantage of the sparsity of the Green's function in the time domain. This leads to a system-identification problem that can be solved with sparsity-based linear regression algorithms. The method is first validated on complex 3D numerical test cases typical of those encountered in industry. For configurations involving a large number of focus points, reverse-flow reciprocity simplifies the estimation problem considerably. The methodology is finally applied to data from a high-lift wing tested in the ONERA CEPRA19 open-section anechoic wind tunnel, demonstrating the applicability of the method to realistic industrial configurations.
10

Raguet, Hugo. "A Signal Processing Approach to Voltage-Sensitive Dye Optical Imaging." Thesis, Paris 9, 2014. http://www.theses.fr/2014PA090031/document.

Abstract:
Voltage-sensitive dye optical imaging is a promising recording modality for cortical activity, but its practical potential is limited by many artefacts and interferences in the acquisitions. Inspired by existing models in the literature, we propose a generative model of the signal based on an additive mixture of components, each constrained within a union of linear spaces determined by its biophysical origin. Motivated by the resulting component-separation problem, which is an underdetermined linear inverse problem, we develop: (1) convex, spatially structured regularizations, enforcing in particular sparsity of the solutions; (2) a new first-order proximal algorithm for efficiently minimizing the resulting functional; and (3) statistical methods for automatic parameter selection, based on Stein's unbiased risk estimate. We study these methods in a general framework and discuss their potential applications in various fields of applied mathematics, in particular for large-scale inverse problems or regressions. We subsequently develop a software package for noisy component separation, in an integrated environment adapted to voltage-sensitive dye optical imaging. Finally, we evaluate this software on different data sets, including synthetic and real data, showing encouraging perspectives for the observation of complex cortical dynamics.
11

Korats, Gundars. "Estimation de sources corticales : du montage laplacian aux solutions parcimonieuses." Thesis, Université de Lorraine, 2016. http://www.theses.fr/2016LORR0027/document.

Abstract:
Cortical source imaging plays an important role in understanding functional and pathological brain mechanisms. It links the activation of certain cortical areas to a given cognitive stimulus and allows one to study the co-activation of the underlying functional networks. Among the available acquisition modalities, electroencephalographic measurements (EEG) have the great advantage of providing a time resolution on the order of the millisecond, at the scale of the dynamics of the studied processes, while being a non-invasive technique often used in clinical routine. However, identifying the activated sources from EEG recordings remains an extremely difficult task because of the low spatial resolution of this modality, the strong filtering effect of the cranial bones, and errors inherent to the propagation model. In this work, different approaches for estimating cortical activity from surface EEG have been explored. The simplest cortical imaging methods are based only on the geometrical characteristics of the head: the computational load is greatly reduced and the models are easy to implement, but such approaches do not provide accurate information about the neural generators and their spatio-temporal properties. To overcome these limitations, more sophisticated techniques can build a realistic propagation model and thus reach a better source reconstruction through its inversion. This inverse problem, however, is severely ill-posed, and constraints have to be imposed to reduce the solution space. We first reconsider the cortical source imaging problem relying mostly on the observations provided by the EEG measurements, when no anatomical model is available. The developed methods are based on simple but universal considerations about the head geometry and the physiological propagation of the sources: full-rank matrix operators are applied to the data, similarly to surface Laplacian methods, under the assumption that the surface measurements can be explained by a mixture of linear radial basis functions produced by the underlying sources. In the second part of the thesis, we relax the full-rank constraint by adopting a distributed dipole model constellating the cortical surface. The inversion is constrained by a sparsity hypothesis, based on the physiological assumption that only a few cortical sources are active simultaneously; this hypothesis is particularly valid for epileptic sources or cognitive tasks. To apply this regularization, we consider the spatial and temporal domains simultaneously and propose two combined dictionaries of spatio-temporal atoms: the first based on a principal component analysis of the data, the second on a wavelet decomposition, which is more robust to noise and well suited to the non-stationary nature of these electrophysiological data. All of the proposed methods have been tested on simulated data and compared to conventional approaches from the literature; the obtained performance is satisfactory and shows good robustness to noise. We have also validated our approach on real data, namely interictal spikes from epileptic patients assessed by neurologists of the University Hospital of Nancy affiliated with the project. The estimated locations are consistent with the epileptogenic zone identified by intracerebral exploration based on stereo-EEG measurements.
12

Asif, Muhammad Salman. "Primal dual pursuit: a homotopy based algorithm for the Dantzig selector." Thesis, Atlanta, Ga.: Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/24693.

Abstract:
Thesis (M. S.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2009.
Committee Chair: Romberg, Justin; Committee Member: McClellan, James; Committee Member: Mersereau, Russell
13

Schmidt, Aurora C. "Scalable Sensor Network Field Reconstruction with Robust Basis Pursuit." Research Showcase @ CMU, 2013. http://repository.cmu.edu/dissertations/240.

Abstract:
We study a scalable approach to information fusion for large sensor networks. The algorithm, field inversion by consensus and compressed sensing (FICCS), is a distributed method for detection, localization, and estimation of a propagating field generated by an unknown number of point sources. The approach combines results in the areas of distributed average consensus and compressed sensing to form low-dimensional linear projections of all sensor readings throughout the network, allowing each node to reconstruct a global estimate of the field. Compressed sensing is applied to continuous source localization by quantizing the potential locations of sources, transforming the model of sensor observations into a finite discretized linear model. We study the effects of structured modeling errors induced by spatial quantization and the robustness of ℓ1 penalty methods for field inversion. We develop a perturbations method to analyze the effects of spatial quantization error in compressed sensing and provide a model-robust version of noise-aware basis pursuit with an upper bound on the sparse reconstruction error. Numerical simulations illustrate system design considerations by measuring the performance of decentralized field reconstruction, detection performance of point phenomena, comparing trade-offs of quantization parameters, and studying various sparse estimators. The method is extended to time-varying systems using a recursive sparse estimator that incorporates priors into ℓ1-penalized least squares. This thesis presents the advantages of inter-sensor measurement mixing as a means of efficiently spreading information throughout a network, while identifying sparse estimation as an enabling technology for scalable distributed field reconstruction systems.
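For reference, the generic noise-aware basis pursuit that the thesis makes model-robust can be written as a small convex program. The sketch below (using CVXPY as an assumed tool) shows only this baseline formulation, without the robust upper-bounded variant developed in the thesis.

```python
import cvxpy as cp

def basis_pursuit_denoise(A, y, eps):
    """Noise-aware basis pursuit: min ||x||_1  s.t.  ||A x - y||_2 <= eps."""
    x = cp.Variable(A.shape[1])
    prob = cp.Problem(cp.Minimize(cp.norm1(x)),
                      [cp.norm(A @ x - y, 2) <= eps])
    prob.solve()
    return x.value
```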
14

Moscu, Mircea. "Inférence distribuée de topologie de graphe à partir de flots de données." Thesis, Université Côte d'Azur, 2020. http://www.theses.fr/2020COAZ4081.

Abstract:
The second decade of the current millennium can be summarized in one short phrase: the advent of data. There has been a surge in the number of data sources: from audio-video streaming, social networks and the Internet of Things, to smartwatches, industrial equipment and personal vehicles, to name a few. More often than not, these sources form networks in order to exchange information. As a direct consequence, the field of graph signal processing has been thriving and evolving, its aim being to process and make sense of the surrounding data deluge. In this context, the main goal of this thesis is to develop methods and algorithms capable of using data streams, in a distributed fashion, to infer the underlying networks that link these streams. The estimated network topologies can then be used with tools developed for graph signal processing to process and analyze data supported by graphs. After a brief introduction followed by motivating examples, we first develop and propose an online, distributed and adaptive algorithm for graph topology inference from data streams that are linearly dependent. An analysis of the method follows, establishing relations between performance and the input parameters of the algorithm; we then run a set of experiments to validate the analysis and compare the performance with that of another method from the literature. The next contribution is an algorithm endowed with the same online, distributed and adaptive capacities, but adapted to inferring links between data that interact non-linearly. We propose a simple yet effective additive model which uses reproducing-kernel machinery to model these non-linearities. The results of its analysis are convincing, while experiments run on biomedical data yield estimated networks whose behavior is predicted by the medical literature. Finally, a third algorithm is proposed, which improves the non-linear model by freeing it from the constraints induced by additivity. The newly proposed model is as general as possible, and imposes link sparsity in a natural, intuitive manner based on the concept of partial derivatives. We analyze this algorithm as well, establishing stability conditions and relations between its parameters and its performance. A set of experiments shows how the general model better captures non-linear links in the data, while the estimated networks behave coherently with previous estimates.
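To give a flavor of online topology inference in the linear case, the sketch below is our deliberately simplified stand-in (not the algorithm proposed in the thesis): each node adapts the weights of its incoming links by LMS from streaming measurements, and a hard threshold sparsifies the estimated adjacency.

```python
import numpy as np

def lms_topology(X, mu=0.01, thresh=0.05):
    """Estimate a (directed) adjacency matrix from streaming node signals.

    X : (T, N) array; row t holds the N node measurements at time t.
    Node i models x_i[t] ~ w_i^T x[t-1] and adapts w_i by LMS.
    """
    T, N = X.shape
    W = np.zeros((N, N))                 # W[i, j]: influence of node j on node i
    for t in range(1, T):
        x_prev, x_now = X[t - 1], X[t]
        for i in range(N):               # each node updates its own row locally
            err = x_now[i] - W[i] @ x_prev
            W[i] += mu * err * x_prev    # LMS step
    W[np.abs(W) < thresh] = 0.0          # sparsify: keep only significant links
    return W
```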
15

Elvira, Clément. "Modèles bayésiens pour l’identification de représentations antiparcimonieuses et l’analyse en composantes principales bayésienne non paramétrique." Thesis, Ecole centrale de Lille, 2017. http://www.theses.fr/2017ECLI0016/document.

Abstract:
This thesis studies two Bayesian models, one parametric and one nonparametric, for signal representation, with different objectives. The first infers a higher-dimensional representation of a signal for the sake of robustness, by forcing the information to be spread uniformly over all components of the representation. These so-called anti-sparse representations are obtained by solving a linear inverse problem with an infinite-norm penalty. We propose a Bayesian formulation of anti-sparse coding involving a new probability distribution, referred to as the democratic prior, which penalizes large amplitudes. A Gibbs sampler and two proximal MCMC samplers are proposed to approximate the Bayesian estimators; the resulting unsupervised method is called BAC-1. Numerical experiments illustrate the performance of the approach for crest-factor reduction, and the results are compared with state-of-the-art methods. The second model identifies a relevant lower-dimensional subspace for modeling and model selection. Principal component analysis is very popular for dimension reduction; selecting the number of significant components is essential, but often based on practical heuristics depending on the application, and few works have proposed a probabilistic approach to infer it. This work introduces BNP-PCA, a Bayesian nonparametric principal component analysis. The model couples a uniform prior on the manifold of orthonormal bases with an Indian buffet process prior to promote a parsimonious use of principal components, so that no tuning is necessary. Inference is carried out using MCMC methods. The estimators of the latent subspace dimension are studied theoretically and empirically, and the flexibility of BNP-PCA is demonstrated on two applications.
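The anti-sparse coding problem above has a deterministic counterpart that can be stated as a small convex program. The sketch below (using CVXPY as an assumed tool) illustrates the infinite-norm penalty that the democratic prior mirrors; it is not the thesis's BAC-1 sampler.

```python
import cvxpy as cp

def democratic_representation(A, y):
    """Anti-sparse coding: min ||c||_inf  s.t.  y = A c  (A is n x m, m > n).
    The l_inf objective spreads |c_i| as evenly as possible over components."""
    c = cp.Variable(A.shape[1])
    prob = cp.Problem(cp.Minimize(cp.norm(c, 'inf')), [A @ c == y])
    prob.solve()
    return c.value
```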
16

Masood, Mudassir. "Distribution Agnostic Structured Sparsity Recovery: Algorithms and Applications." Diss., 2015. http://hdl.handle.net/10754/553050.

Abstract:
Compressed sensing has been a very active area of research, and several elegant algorithms have been developed for the recovery of sparse signals in the past few years. However, most of these algorithms are either computationally expensive or make assumptions that are not suitable for all real-world problems. Recently, focus has shifted to Bayesian-based approaches that are able to perform sparse signal recovery at much lower complexity while invoking constraints and/or a priori information about the data. While Bayesian approaches have their advantages, these methods must have access to a priori statistics. Usually, these statistics are unknown and are often difficult or even impossible to predict. An effective workaround is to assume a distribution, typically Gaussian, as it makes many signal processing problems mathematically tractable. Seemingly attractive, this assumption necessitates the estimation of the associated parameters, which can be hard, if not impossible. In this thesis, we focus on this aspect of Bayesian recovery and present a framework to address the challenges mentioned above. The proposed framework allows Bayesian recovery of sparse signals but at the same time is agnostic to the distribution of the unknown sparse signal components. The algorithms based on this framework are agnostic to signal statistics and utilize a priori statistics of the additive noise and the sparsity rate of the signal, which are shown to be easily estimated from data if not available. We propose several algorithms based on this framework that utilize the structure present in signals for improved recovery. In addition to an algorithm that considers just the sparsity structure of sparse signals, tools that target additional structure of the recovery problem are proposed, including several algorithms for a) block-sparse signal estimation, b) joint reconstruction of several common-support sparse signals, and c) distributed estimation of sparse signals. Extensive experiments are conducted to demonstrate the power and robustness of our proposed sparse signal estimation algorithms. Specifically, we target the problems of a) channel estimation in massive MIMO, and b) narrowband interference mitigation in SC-FDMA. We model these as sparse recovery problems and demonstrate how they can be solved naturally using the proposed algorithms.
APA, Harvard, Vancouver, ISO, and other styles
17

Kuo, Han-wen. "Deconvolution Problems for Structured Sparse Signal." Thesis, 2021. https://doi.org/10.7916/d8-azkj-5x53.

Full text
Abstract:
This dissertation studies deconvolution problems in which structured sparse signals arise in nature, science, and engineering. We discuss the intrinsic solutions to the problem of short-and-sparse deconvolution, how these solutions shape the optimization landscape, and how to design an efficient and practical algorithm based on these analytical findings. To utilize the information in structured sparse signals efficiently when sample acquisition is expensive, we also propose a sensing method and study its sample limit, together with algorithms for signal recovery from limited samples.
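A generic alternating-minimization sketch of short-and-sparse deconvolution under a circular-convolution assumption (an illustrative baseline, not the dissertation's algorithm; lam, iteration counts, and initialization are assumed choices): y is modeled as the convolution of a short unit-norm kernel a with a sparse signal x, and the two are updated in turn by proximal/projected gradient steps.

    import numpy as np

    def cconv(a, x, n):
        return np.real(np.fft.ifft(np.fft.fft(a, n) * np.fft.fft(x, n)))

    def ccorr(a, r, n):
        # adjoint of circular convolution with a
        return np.real(np.fft.ifft(np.conj(np.fft.fft(a, n)) * np.fft.fft(r, n)))

    def soft(z, t):
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    def sas_deconv(y, p, lam=0.2, n_iter=300, seed=0):
        n = len(y)
        rng = np.random.default_rng(seed)
        a = rng.standard_normal(p); a /= np.linalg.norm(a)
        x = np.zeros(n)
        for _ in range(n_iter):
            # x-step: one ISTA step on 0.5*||y - a(*)x||^2 + lam*||x||_1
            La = np.max(np.abs(np.fft.fft(a, n))) ** 2
            g = ccorr(a, cconv(a, x, n) - y, n)
            x = soft(x - g / La, lam / La)
            # a-step: projected gradient step back onto the unit sphere
            Lx = np.max(np.abs(np.fft.fft(x, n))) ** 2 + 1e-12
            ga = ccorr(x, cconv(a, x, n) - y, n)[:p]
            a = a - ga / Lx
            a /= np.linalg.norm(a)
        return a, x

The sphere constraint on a reflects the scale ambiguity intrinsic to the problem: (a, x) and (c*a, x/c) produce the same observation, which is part of what structures the optimization landscape discussed above.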
APA, Harvard, Vancouver, ISO, and other styles
18

Liu, Chun-Lin. "Sparse Array Signal Processing: New Array Geometries, Parameter Estimation, and Theoretical Analysis." Thesis, 2018. https://thesis.library.caltech.edu/10970/1/Liu_Chun-Lin_2018.pdf.

Full text
Abstract:

Array signal processing focuses on an array of sensors receiving the incoming waveforms in the environment, from which source information, such as directions of arrival (DOA), signal power, amplitude, polarization, and velocity, can be estimated. This topic finds ubiquitous applications in radar, astronomy, tomography, imaging, and communications. In these applications, sparse arrays have recently attracted considerable attention, since they are capable of resolving O(N²) uncorrelated source directions with N physical sensors. This is unlike uniform linear arrays (ULA), which identify at most N−1 uncorrelated sources with N sensors. These sparse arrays include minimum redundancy arrays (MRA), nested arrays, and coprime arrays. All these arrays have an O(N²)-long central ULA segment in the difference coarray, which is defined as the set of differences between sensor locations. This O(N²) property makes it possible to resolve O(N²) uncorrelated sources using only N physical sensors.
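To make the coarray notion concrete, a small sketch (the nested-array parameters are an illustrative choice): six physical sensors whose pairwise differences fill a contiguous set of 23 lags, matching the O(N²) behaviour described above.

    import numpy as np

    def nested_array(n1, n2):
        # two-level nested array: dense ULA 1..n1, sparse ULA (n1+1)*(1..n2)
        return np.concatenate([np.arange(1, n1 + 1),
                               (n1 + 1) * np.arange(1, n2 + 1)])

    def difference_coarray(pos):
        return np.unique(pos[:, None] - pos[None, :])

    pos = nested_array(3, 3)
    print(pos)                      # [ 1  2  3  4  8 12]
    print(difference_coarray(pos))  # contiguous integers -11..11 (23 lags)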

The main contribution of this thesis is to provide a new direction for array geometry and performance analysis of sparse arrays in the presence of nonidealities. The first part of this thesis focuses on designing novel array geometries that are robust to the effects of mutual coupling. It is known that mutual coupling between sensors has an adverse effect on DOA estimation. While there are methods to counteract this through appropriate modeling and calibration, they are usually computationally expensive and sensitive to model mismatch. On the other hand, sparse arrays, such as MRA, nested arrays, and coprime arrays, have reduced mutual coupling compared to ULA, but each has its own disadvantages. This thesis introduces a new array called the super nested array, which has many of the good properties of the nested array and at the same time achieves reduced mutual coupling. Many theoretical properties are proved, and simulations are included to demonstrate the superior performance of super nested arrays in the presence of mutual coupling.

Two-dimensional planar sparse arrays with large difference coarrays have also been known for a long time. These include billboard arrays, open box arrays (OBA), and 2D nested arrays. However, all of them suffer from considerable mutual coupling. This thesis proposes new planar sparse arrays with the same large difference coarrays as the OBA, but with reduced mutual coupling. The new arrays include half open box arrays (HOBA), half open box arrays with two layers (HOBA-2), and hourglass arrays. Among these, simulations show that hourglass arrays have the best estimation performance in the presence of mutual coupling.

The second part of this thesis analyzes the performance of sparse arrays from a theoretical perspective. We first study the Cramér-Rao bound (CRB) for sparse arrays, which places a lower bound on the variance of any unbiased DOA estimator. While there exist landmark papers on the study of the CRB in the context of array processing, the closed-form expressions available in the literature are not applicable to sparse arrays, for which the number of identifiable sources can exceed the number of sensors. This thesis derives a new expression for the CRB to fill this gap. Based on the proposed CRB expression, it is possible to prove the previously known experimental observation that, when there are more sources than sensors, the CRB saturates to a constant value as the SNR tends to infinity. It is also possible to precisely specify the relation between the number of sensors and the number of uncorrelated sources such that these sources can be resolved.

Recently, it has been shown that correlation subspaces, which reveal the structure of the covariance matrix, help to improve some existing DOA estimators. However, the bases, the dimension, and other theoretical properties of correlation subspaces remain to be investigated. This thesis proposes generalized correlation subspaces in one and multiple dimensions. This leads to new insights into correlation subspaces and DOA estimation with prior knowledge. First, it is shown that the bases and the dimension of correlation subspaces are fundamentally related to difference coarrays, which were previously found to be important in the study of sparse arrays. Furthermore, generalized correlation subspaces can handle certain forms of prior knowledge about source directions. These results allow one to derive a broad class of DOA estimators with improved performance.

It is empirically known that the coarray structure is susceptible to sensor failures, and the reliability of sparse arrays remains a significant but challenging topic for investigation. This thesis advances a general theory for quantifying such robustness, by studying the effect of sensor failure on the difference coarray. We first present the (k-)essentialness property, which characterizes the combinations of the faulty sensors that shrink the difference coarray. Based on this, the notion of (k-)fragility is proposed to quantify the reliability of sparse arrays with faulty sensors, along with comprehensive studies of their properties. These novel concepts provide quite a few insights into the interplay between the array geometry and its robustness. For instance, for the same number of sensors, it can be proved that ULA is more robust than the coprime array, and the coprime array is more robust than the nested array. Rigorous development of these ideas leads to expressions for the probability of coarray failure, as a function of the probability of sensor failure.
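A brute-force sketch of the 1-essentialness check (illustrative only; the thesis derives such properties analytically rather than by enumeration): a set of sensors is essential if deleting it shrinks the difference coarray.

    import numpy as np
    from itertools import combinations

    def coarray(pos):
        p = np.asarray(pos)
        return set(np.unique(p[:, None] - p[None, :]).tolist())

    def essential_subsets(pos, k):
        # size-k subsets of sensors whose failure shrinks the difference coarray
        full = coarray(pos)
        return [s for s in combinations(pos, k)
                if coarray([q for q in pos if q not in s]) != full]

    nested = [1, 2, 3, 4, 8, 12]        # the nested array from the earlier sketch
    print(essential_subsets(nested, 1))  # all six singletons: every sensor is essential

That every single sensor of this nested array is essential illustrates the fragility ordering stated above, with ULA at the robust end and the nested array at the fragile end.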

The thesis concludes with some remarks on future directions and open problems.

APA, Harvard, Vancouver, ISO, and other styles
19

Pototskaia, Vlada. "Application of AAK theory for sparse approximation." Doctoral thesis, 2017. http://hdl.handle.net/11858/00-1735-0000-0023-3F4B-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Sana, Furrukh. "Efficient Techniques of Sparse Signal Analysis for Enhanced Recovery of Information in Biomedical Engineering and Geosciences." Diss., 2016. http://hdl.handle.net/10754/621865.

Full text
Abstract:
Sparse signals are abundant among both natural and man-made signals. Sparsity implies that the signal essentially resides in a small-dimensional subspace, and it can be exploited to improve the recovery of the signal from limited and noisy observations. Traditional estimation algorithms generally lack the ability to take advantage of signal sparsity. This dissertation considers several problems in the areas of biomedical engineering and geosciences with the aim of enhancing the recovery of information by exploiting the underlying sparsity in the problem. The objective is to overcome the fundamental bottlenecks, both in terms of estimation accuracy and required computational resources. In the first part of the dissertation, we present a high-precision technique for the monitoring of human respiratory movements by exploiting the sparsity of wireless ultra-wideband signals. The proposed technique provides a novel methodology for overcoming the Nyquist sampling constraint and enables robust performance in the presence of noise and interference. We also present a comprehensive framework for the important problem of extracting fetal electrocardiogram (ECG) signals from abdominal ECG recordings of pregnant women. The multiple measurement vector approach utilized for this purpose provides an efficient mechanism for exploiting the common structure of ECG signals, when represented in sparse transform domains, and allows leveraging information from multiple ECG electrodes under a joint estimation formulation. In the second part of the dissertation, we adopt sparse signal processing principles for improved information recovery in large-scale subsurface reservoir characterization problems. We propose multiple new algorithms for sparse representation of the subsurface geological structures, for incorporation of useful prior information in the estimation process, and for reducing the computational complexity of the problem. The techniques presented here enable significantly enhanced imaging of the subsurface earth and result in substantial savings in convergence time, leading to optimized placement of oil wells. This dissertation demonstrates through detailed experimental analysis that the sparse estimation approach not only enables enhanced information recovery in a variety of application areas, but also greatly helps in reducing the computational complexities associated with these problems.
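As a generic baseline for the multiple-measurement-vector idea used in the fetal-ECG problem (a sketch, not the dissertation's estimator; the greedy rule and names are assumptions): simultaneous OMP selects atoms that explain all electrode channels jointly, exploiting their shared support.

    import numpy as np

    def somp(Y, A, k):
        # Simultaneous OMP: recover row-sparse X with Y ~ A X,
        # the support being shared by all columns (channels) of Y.
        n = A.shape[1]
        R, S = Y.copy(), []
        for _ in range(k):
            # atom most correlated with the residual across all channels at once
            j = int(np.argmax(np.linalg.norm(A.conj().T @ R, axis=1)))
            S.append(j)
            Xs, *_ = np.linalg.lstsq(A[:, S], Y, rcond=None)
            R = Y - A[:, S] @ Xs
        X = np.zeros((n, Y.shape[1]), dtype=Y.dtype)
        X[S] = Xs
        return X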
APA, Harvard, Vancouver, ISO, and other styles
21

Al-Rabah, Abdullatif R. "Bayesian Recovery of Clipped OFDM Signals: A Receiver-based Approach." Thesis, 2013. http://hdl.handle.net/10754/291094.

Full text
Abstract:
Recently, orthogonal frequency-division multiplexing (OFDM) has been adopted for high-speed wireless communications due to its robustness against multipath fading. However, one of the fundamental drawbacks of OFDM systems is their high peak-to-average power ratio (PAPR). Several techniques have been proposed for PAPR reduction, most of which require transmitter-based (pre-compensation) processing. Receiver-based alternatives, on the other hand, would save power and reduce transmitter complexity. With this in mind, a possible approach is to limit the amplitude of the OFDM signal to a predetermined threshold, which is equivalent to adding a sparse clipping signal; this clipping signal is then estimated at the receiver to recover the original signal. In this work, we propose a Bayesian receiver-based low-complexity clipping signal recovery method for PAPR reduction. The method is able to i) effectively reduce the PAPR via a simple clipping scheme at the transmitter side, ii) use a Bayesian recovery algorithm to reconstruct the clipping signal at the receiver side by measuring part of the subcarriers, iii) perform well in the absence of statistical information about the signal (e.g., clipping level) and the noise (e.g., noise variance), and iv) remain energy efficient due to its low complexity. Specifically, the proposed recovery technique is implemented in a data-aided fashion: it collects clipping information by measuring reliable data subcarriers, thus making full use of the spectrum for data transmission without the need for tone reservation. The study is extended to discuss how to improve the recovery of the clipping signal by utilizing features of practical OFDM systems, i.e., oversampling and the presence of multiple receivers. Simulation results demonstrate the superiority of the proposed technique over other recovery algorithms. The overall objective is to show that the receiver-based Bayesian technique is an effective and practical alternative to state-of-the-art PAPR reduction techniques.
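A small sketch of the clipping model itself (the threshold factor and sizes are illustrative assumptions): limiting an OFDM symbol to amplitude tau is exactly equivalent to adding a sparse signal c, which is what the receiver then estimates.

    import numpy as np

    rng = np.random.default_rng(1)
    N = 256
    # random QPSK subcarriers
    sym = (2 * rng.integers(0, 2, N) - 1 + 1j * (2 * rng.integers(0, 2, N) - 1)) / np.sqrt(2)
    x = np.fft.ifft(sym) * np.sqrt(N)              # time-domain OFDM symbol
    tau = 1.6 * np.sqrt(np.mean(np.abs(x) ** 2))   # clipping threshold
    mag = np.maximum(np.abs(x), 1e-12)
    xc = np.where(np.abs(x) > tau, tau * x / mag, x)
    c = xc - x                                     # additive clipping signal
    print(np.count_nonzero(np.abs(c) > 1e-12), "of", N, "samples clipped: c is sparse")

Only the few samples exceeding tau are touched, so c has a handful of nonzeros, making sparse-recovery machinery a natural fit at the receiver.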
APA, Harvard, Vancouver, ISO, and other styles
22

Prasad, Ranjitha. "Sparse Bayesian Learning For Joint Channel Estimation And Data Detection In OFDM Systems." Thesis, 2015. http://etd.iisc.ernet.in/2005/3997.

Full text
Abstract:
Bayesian approaches for sparse signal recovery have enjoyed a long-standing history in the signal processing and machine learning literature. Among the Bayesian techniques, the expectation-maximization based Sparse Bayesian Learning (SBL) approach is an iterative procedure, guaranteed to converge to a local optimum, which uses a parameterized prior that encourages sparsity under an evidence maximization framework. SBL has been successfully employed in a wide range of applications, from image processing to communications. In this thesis, we propose novel, efficient, low-complexity SBL-based algorithms that exploit structured sparsity in the presence of fully/partially known measurement matrices. We apply the proposed algorithms to the problem of channel estimation and data detection in Orthogonal Frequency Division Multiplexing (OFDM) systems. Further, we derive Cramér-Rao type lower bounds (CRB) for the single and multiple measurement vector SBL problems of estimating compressible vectors and their prior distribution parameters. The main contributions of the thesis are as follows:

• We derive Hybrid, Bayesian, and Marginalized Cramér-Rao lower bounds for the problem of estimating compressible vectors drawn from a Student-t prior distribution. The derived CRBs encompass the deterministic or random nature of the unknown parameters of the prior distribution and the regression noise variance. We use these bounds to uncover the relationship between compressibility and the Mean Square Error (MSE) of the estimates, and demonstrate through simulations the dependence of the MSE performance of SBL-based estimators on the compressibility of the vector.

• OFDM is a well-known multi-carrier modulation technique that provides high spectral efficiency and resilience to multipath distortion of the wireless channel. It is well known that the impulse response of a wideband wireless channel is approximately sparse, in the sense that it has a small number of significant components relative to the channel delay spread. In this thesis, we consider the estimation of the unknown channel coefficients and their support in SISO-OFDM systems using an SBL framework. We propose novel pilot-only and joint channel estimation and data detection algorithms in block-fading and time-varying scenarios. In the latter case, we use a first-order auto-regressive model for the time variations, and propose recursive, low-complexity Kalman-filtering based algorithms for channel estimation. Monte Carlo simulations illustrate the efficacy of the proposed techniques in terms of MSE and coded bit error rate performance.

• Multiple Input Multiple Output (MIMO) combined with OFDM harnesses the inherent advantages of OFDM along with the diversity and multiplexing advantages of a MIMO system. The impulse responses of the wireless channels between the Nt transmit and Nr receive antennas of a MIMO-OFDM system are group approximately sparse (ga-sparse), i.e., the NtNr channels have a small number of significant paths relative to the channel delay spread, and the time lags of the significant paths between transmit and receive antenna pairs coincide. Often, wireless channels are also group approximately cluster-sparse (ga-csparse), i.e., every ga-sparse channel consists of clusters, where a few clusters have all strong components while most clusters have all weak components. We cast the problem of estimating the ga-sparse and ga-csparse block-fading and time-varying channels in a multiple-measurement SBL framework, and propose a bouquet of novel algorithms for MIMO-OFDM systems that generalize the algorithms proposed in the context of SISO-OFDM systems. The efficacy of the proposed techniques is demonstrated in terms of MSE and coded bit error rate performance.
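For concreteness, a minimal EM-SBL sketch for a single real-valued measurement vector (a textbook-style baseline under an assumed known noise variance; the thesis's joint, Kalman, and MIMO variants build on these same updates):

    import numpy as np

    def sbl_em(y, Phi, sigma2, n_iter=50):
        # EM-based SBL: prior x_i ~ N(0, gamma_i), evidence maximization over gamma
        m, n = Phi.shape
        gamma = np.ones(n)
        for _ in range(n_iter):
            G = np.diag(gamma)
            Sy = sigma2 * np.eye(m) + Phi @ G @ Phi.T   # marginal covariance of y
            K = G @ Phi.T @ np.linalg.inv(Sy)
            mu = K @ y                                  # posterior mean of x
            Sigma = G - K @ Phi @ G                     # posterior covariance of x
            gamma = mu ** 2 + np.diag(Sigma)            # EM update of hyperparameters
        return mu, gamma

Hyperparameters gamma_i of inactive coefficients are driven toward zero across iterations, which is how the parameterized prior ends up encouraging sparsity.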
APA, Harvard, Vancouver, ISO, and other styles
23

Lopes, Bruno Miguel de Carvalho. "Channel estimation with TCH codes for machine-type communications." Master's thesis, 2017. http://hdl.handle.net/10071/15456.

Full text
Abstract:
TCH codes possess several properties that allow them to be used efficiently in various applications. One of these applications is channel estimation, and this dissertation studies the performance of TCH codes for channel estimation in an Orthogonal Frequency Division Multiplexing system, in the context of Machine-Type Communications. Bit error rate performance results were obtained from simulations that evaluated the impact of two different pilot techniques (data-multiplexed and implicit pilots), of different pilot power levels, and of different modulations (QPSK and 64-QAM). Pilots based on TCH codes are also compared with other conventional pilots. Results show that TCH codes deliver very positive and reliable performance. Joint timing synchronization and channel estimation is also performed using different sparsity-based approaches, namely Orthogonal Matching Pursuit, L1-regularized, and Iterative Reweighted L1. TCH codes are compared against different sequence types, namely Zadoff-Chu sequences and pseudorandom codewords, and variations in the pilot size, the channel length, and the observation window size are considered in order to understand their effects. Results ultimately illustrate that TCH codes can be used effectively in joint channel estimation and synchronization, withstanding harsher simulation conditions better than their counterparts. It is also shown that compressed sensing can be successfully utilized in joint synchronization and channel estimation, an area where its use has not been widely explored.
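As one concrete instance of the L1-regularized approach mentioned above (a generic sketch, not the dissertation's implementation; how the pilot matrix A is built from TCH codewords, and the choice of lam, are assumptions): ISTA with a complex soft threshold estimates a sparse channel impulse response h from pilot observations y ~ A h.

    import numpy as np

    def ista_sparse_channel(y, A, lam=0.05, n_iter=300):
        # L1-regularized least squares (ISTA) with a complex soft threshold
        step = 1.0 / np.linalg.norm(A, 2) ** 2
        h = np.zeros(A.shape[1], dtype=complex)
        for _ in range(n_iter):
            z = h - step * (A.conj().T @ (A @ h - y))
            mag = np.maximum(np.abs(z), 1e-12)
            h = z * np.maximum(1.0 - step * lam / mag, 0.0)  # shrink magnitudes
        return h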
APA, Harvard, Vancouver, ISO, and other styles