A ready-made bibliography on the topic "Multiple Sparse Bayesian Learning"

Create an accurate citation in APA, MLA, Chicago, Harvard, and many other styles


Consult the lists of current articles, books, theses, abstracts, and other scholarly sources on the topic "Multiple Sparse Bayesian Learning".

Next to every entry in the bibliography you will find an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic citation for the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a ".pdf" file and read its abstract online, whenever these details are available in the metadata.

Doctoral dissertations on the topic "Multiple Sparse Bayesian Learning"

1

Higson, Edward John. "Bayesian methods and machine learning in astrophysics." Thesis, University of Cambridge, 2019. https://www.repository.cam.ac.uk/handle/1810/289728.

Full text of the source
Abstract:
This thesis is concerned with methods for Bayesian inference and their applications in astrophysics. We principally discuss two related themes: advances in nested sampling (Chapters 3 to 5), and Bayesian sparse reconstruction of signals from noisy data (Chapters 6 and 7). Nested sampling is a popular method for Bayesian computation which is widely used in astrophysics. Following the introduction and background material in Chapters 1 and 2, Chapter 3 analyses the sampling errors in nested sampling parameter estimation and presents a method for estimating them numerically for a single nested sampling calculation. Chapter 4 introduces diagnostic tests for detecting when software has not performed the nested sampling algorithm accurately, for example due to missing a mode in a multimodal posterior. The uncertainty estimates and diagnostics in Chapters 3 and 4 are implemented in the nestcheck software package, and both chapters describe an astronomical application of the techniques introduced. Chapter 5 describes dynamic nested sampling: a generalisation of the nested sampling algorithm which can produce large improvements in computational efficiency compared to standard nested sampling. We have implemented dynamic nested sampling in the dyPolyChord and perfectns software packages. Chapter 6 presents a principled Bayesian framework for signal reconstruction, in which the signal is modelled by basis functions whose number (and form, if required) is determined by the data themselves. This approach is based on a Bayesian interpretation of conventional sparse reconstruction and regularisation techniques, in which sparsity is imposed through priors via Bayesian model selection. We demonstrate our method for noisy 1- and 2-dimensional signals, including examples of processing astronomical images. The numerical implementation uses dynamic nested sampling, and uncertainties are calculated using the methods introduced in Chapters 3 and 4.
Chapter 7 applies our Bayesian sparse reconstruction framework to artificial neural networks, where it allows the optimum network architecture to be determined by treating the number of nodes and hidden layers as parameters. We conclude by suggesting possible areas of future research in Chapter 8.
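The nested sampling loop at the core of this thesis can be illustrated with a minimal sketch. This is a toy one-dimensional problem with an invented Gaussian likelihood, a uniform prior, and arbitrary live-point and iteration counts — not the nestcheck or dyPolyChord implementations, which use far more sophisticated replacement moves.

```python
import numpy as np

rng = np.random.default_rng(0)

def loglike(theta):
    # Toy Gaussian likelihood centred at 0.5 (width 0.05), normalised over the U(0,1) prior
    return -0.5 * ((theta - 0.5) / 0.05) ** 2 - np.log(0.05 * np.sqrt(2 * np.pi))

n_live = 200
live = rng.uniform(size=n_live)      # live points drawn from the prior
live_logl = loglike(live)

log_z, log_x = -np.inf, 0.0          # running log-evidence, log prior volume remaining
for _ in range(1200):
    worst = np.argmin(live_logl)
    log_x_new = log_x - 1.0 / n_live                   # expected volume shrinkage per step
    log_w = np.log(np.exp(log_x) - np.exp(log_x_new))  # weight of the dead point
    log_z = np.logaddexp(log_z, live_logl[worst] + log_w)
    # Replace the worst point with a prior draw above the likelihood threshold
    # (rejection sampling; real samplers use slice or MCMC moves here)
    while True:
        cand = rng.uniform()
        if loglike(cand) > live_logl[worst]:
            break
    live[worst], live_logl[worst] = cand, loglike(cand)
    log_x = log_x_new

# Add the contribution of the remaining live points
log_z = np.logaddexp(log_z, np.log(np.mean(np.exp(live_logl))) + log_x)
# The likelihood integrates to ~1 over the prior, so log Z should come out near 0
```

The expected shrinkage `1/n_live` per iteration is what gives nested sampling its handle on the prior volume; the sampling error in log Z scales roughly as the square root of (information gain / number of live points), which is what Chapter 3's uncertainty estimates quantify.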
Citation styles: APA, Harvard, Vancouver, ISO, etc.
2

Parisi, Simone. "Reinforcement Learning with Sparse and Multiple Rewards." Thesis advisors: Jan Peters, Joschka Boedeker. Darmstadt: Universitäts- und Landesbibliothek Darmstadt, 2020. http://d-nb.info/1203301545/34.

Full text of the source
Citation styles: APA, Harvard, Vancouver, ISO, etc.
3

Tandon, Prateek. "Bayesian Aggregation of Evidence for Detection and Characterization of Patterns in Multiple Noisy Observations." Research Showcase @ CMU, 2015. http://repository.cmu.edu/dissertations/658.

Full text of the source
Abstract:
Effective use of Machine Learning to support extracting maximal information from limited sensor data is one of the important research challenges in robotic sensing. This thesis develops techniques for detecting and characterizing patterns in noisy sensor data. Our Bayesian Aggregation (BA) algorithmic framework can leverage data fusion from multiple low Signal-To-Noise Ratio (SNR) sensor observations to boost the capability to detect and characterize the properties of a signal generating source or process of interest. We illustrate our research with application to the nuclear threat detection domain. Developed algorithms are applied to the problem of processing the large amounts of gamma ray spectroscopy data that can be produced in real-time by mobile radiation sensors. The thesis experimentally shows BA's capability to boost sensor performance in detecting radiation sources of interest, even if the source is faint, partially occluded, or enveloped in the noisy and variable radiation background characteristic of urban scenes. In addition, BA provides simultaneous inference of source parameters such as the source intensity or source type while detecting it. The thesis demonstrates this capability and also develops techniques to efficiently optimize these parameters over large possible setting spaces. Methods developed in this thesis are demonstrated both in simulation and in a radiation-sensing backpack that applies robotic localization techniques to enable indoor surveillance of radiation sources. The thesis further improves the BA algorithm's capability to be robust under various detection scenarios. First, we augment BA with appropriate statistical models to improve estimation of signal components in low photon count detection, where the sensor may receive limited photon counts from either source and/or background.
Second, we develop methods for online sensor reliability monitoring to create algorithms that are resilient to possible sensor faults in a data pipeline containing one or multiple sensors. Finally, we develop Retrospective BA, a variant of BA that allows reinterpretation of past sensor data in light of new information about percepts. These retrospective capabilities include the use of Hidden Markov Models in BA to allow automatic correction of a sensor pipeline when a sensor malfunction may occur, an Anomaly-Match search strategy to efficiently optimize source hypotheses, and prototyping of a Multi-Modal Augmented PCA to more flexibly model background and nuisance source fluctuations in a dynamic environment.
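The basic idea of aggregating evidence across many low-SNR observations can be sketched in a few lines: a grid-based posterior over source intensity built from repeated Poisson counts. The background and source rates and the observation count are invented for illustration; this is not the thesis's BA framework.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical scenario: 20 short Poisson counts, background rate 5, source rate 3
background, true_source = 5.0, 3.0
counts = rng.poisson(background + true_source, size=20)

# Aggregate evidence: multiply per-observation Poisson likelihoods (sum of logs)
# over a grid of candidate source intensities, under a flat prior on the grid
grid = np.linspace(0.0, 10.0, 101)
log_post = np.zeros_like(grid)
for c in counts:
    rate = background + grid
    log_post += c * np.log(rate) - rate  # Poisson log-likelihood up to a constant
log_post -= log_post.max()
post = np.exp(log_post)
post /= post.sum()

map_intensity = grid[np.argmax(post)]    # aggregated MAP estimate of the source rate
```

Each individual count is nearly uninformative (the source adds only ~3 counts on a background of ~5), but the aggregated posterior concentrates near the true rate — the same fusion effect the thesis exploits for faint sources.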
Citation styles: APA, Harvard, Vancouver, ISO, etc.
4

Ticlavilca, Andres M. "Multivariate Bayesian Machine Learning Regression for Operation and Management of Multiple Reservoir, Irrigation Canal, and River Systems." DigitalCommons@USU, 2010. https://digitalcommons.usu.edu/etd/600.

Full text of the source
Abstract:
The principal objective of this dissertation is to develop Bayesian machine learning models for multiple reservoir, irrigation canal, and river system operation and management. These types of models are derived from the emerging area of machine learning theory; they are characterized by their ability to capture the underlying physics of the system simply by examination of the measured system inputs and outputs. They can be used to provide probabilistic predictions of system behavior using only historical data. The models were developed in the form of a multivariate relevance vector machine (MVRVM) that is based on a sparse Bayesian learning machine approach for regression. Using this Bayesian approach, a predictive confidence interval is obtained from the model that captures the uncertainty of both the model and the data. The models were applied to the multiple reservoir, canal and river system located in the regulated Lower Sevier River Basin in Utah. The models were developed to perform predictions of multi-time-ahead releases of multiple reservoirs, diversions of multiple canals, and streamflow and water loss/gain in a river system. This research represents the first attempt to use a multivariate Bayesian learning regression approach to develop simultaneous multi-step-ahead predictions with predictive confidence intervals for multiple outputs in a regulated river basin system. These predictions will be of potential value to reservoir and canal operators in identifying the best decisions for operation and management of irrigation water supply systems.
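Sparse Bayesian learning regression of the kind underlying the MVRVM can be sketched with scikit-learn's ARDRegression on synthetic data. The inputs and the three "relevant" features below are invented for illustration; this is not the author's MVRVM code, which handles multiple outputs jointly.

```python
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(0)
# Synthetic stand-in for a reservoir-release regression: 100 samples,
# 20 candidate inputs, of which only 3 actually drive the output
X = rng.normal(size=(100, 20))
w_true = np.zeros(20)
w_true[[0, 5, 9]] = [2.0, -1.5, 1.0]
y = X @ w_true + 0.1 * rng.normal(size=100)

model = ARDRegression()          # sparse Bayesian learning via automatic relevance determination
model.fit(X, y)
y_mean, y_std = model.predict(X[:5], return_std=True)  # predictive mean and uncertainty
n_active = int(np.sum(np.abs(model.coef_) > 0.5))      # only the relevant inputs survive
```

The per-coefficient precision priors drive irrelevant weights to zero, and `return_std=True` yields the predictive uncertainty that, in the dissertation, becomes the confidence interval around each forecast release.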
Citation styles: APA, Harvard, Vancouver, ISO, etc.
5

Jin, Junyang. "Novel methods for biological network inference : an application to circadian Ca2+ signaling network." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/285323.

Full text of the source
Abstract:
Biological processes involve complex biochemical interactions among a large number of species such as cells, RNA, proteins, and metabolites. Learning these interactions is essential to intervening artificially in biological processes in order to, for example, improve crop yield, develop new therapies, and predict how cells or organisms respond to genetic or environmental perturbations. For a biological process, two pieces of information are of most interest. For a particular species, the first step is to learn which other species are regulating it. This reveals topology and causality. The second step involves learning the precise mechanisms of how this regulation occurs. This step reveals the dynamics of the system. Applying this process to all species leads to the complete dynamical network. Systems biology is making considerable efforts to learn biological networks at low experimental costs. The main goal of this thesis is to develop advanced methods to build models for biological networks, taking the circadian system of Arabidopsis thaliana as a case study. A variety of network inference approaches have been proposed in the literature to study dynamic biological networks. However, many successful methods either require prior knowledge of the system or focus more on topology. This thesis presents novel methods that identify both network topology and dynamics, and do not depend on prior knowledge. Hence, the proposed methods are applicable to general biological networks. These methods are initially developed for linear systems, and, at the cost of higher computational complexity, can also be applied to nonlinear systems. Overall, we propose four methods with increasing computational complexity: one-to-one, combined group and element sparse Bayesian learning (GESBL), the kernel method, and the reversible jump Markov chain Monte Carlo method (RJMCMC).
All methods are tested with challenging dynamical network simulations (including feedback, random networks, and different levels of noise and numbers of samples), and with realistic models of the circadian system of Arabidopsis thaliana. These simulations show that, while the one-to-one method scales to the whole genome, the kernel method and RJMCMC method are superior for smaller networks. They are robust to tuning variables and able to provide stable performance. The simulations also indicate the advantage of GESBL and RJMCMC over the state of the art. We envision that the estimated models can benefit a wide range of research. For example, they can locate biological compounds responsible for human disease through mathematical analysis and help predict the effectiveness of new treatments.
Citation styles: APA, Harvard, Vancouver, ISO, etc.
6

Yazdani, Akram. "Statistical Approaches in Genome-Wide Association Studies." Doctoral thesis, Università degli studi di Padova, 2014. http://hdl.handle.net/11577/3423743.

Full text of the source
Abstract:
Genome-wide association studies (GWAS) typically contain hundreds of thousands of single-nucleotide polymorphisms (SNPs) genotyped for a small number of samples. The aim of these studies is to identify regions harboring SNPs or to predict the outcomes of interest. Since the number of predictors in a GWAS far exceeds the number of samples, it is impossible to analyze the data with classical statistical methods. In current GWAS, the most widely applied methods are based on single-marker analysis, which assesses the association of each SNP with the complex traits independently. Because of the low power of this analysis for detecting true associations, simultaneous analysis has recently received more attention. The new statistical methods for simultaneous analysis in high-dimensional settings are limited by the disparity between the number of predictors and the number of samples. Therefore, reducing the dimensionality of the set of SNPs is required. This thesis reviews single-marker analysis and simultaneous analysis with a focus on Bayesian methods. It addresses the weaknesses of these approaches with reference to recent literature and illustrative simulation studies. To bypass these problems, we first attempt to reduce the dimension of the set of SNPs with a random projection technique. Since this method does not improve the predictive performance of the model, we present a new two-stage approach that is a hybrid method of single and simultaneous analyses. This fully Bayesian approach selects the most promising SNPs in the first stage by evaluating the impact of each marker independently. In the second stage, we develop a hierarchical Bayesian model to analyze the impact of the selected markers simultaneously. The model, which accounts for related samples, places a local-global shrinkage prior on marker effects in order to shrink small effects to zero while keeping large effects relatively large.
The prior specification on marker effects, which is a hierarchical representation of the generalized double Pareto distribution, improves the predictive performance. Finally, we present the results of a real SNP-data analysis using both the single-marker study and the new two-stage approach.
Citation styles: APA, Harvard, Vancouver, ISO, etc.
7

Deshpande, Hrishikesh. "Dictionary learning for pattern classification in medical imaging." Thesis, Rennes 1, 2016. http://www.theses.fr/2016REN1S032/document.

Full text of the source
Abstract:
Most natural signals can be approximated by a linear combination of a few atoms in a dictionary. Such sparse representations of signals and dictionary learning (DL) methods have received special attention over the past few years. While standard DL approaches are effective in applications such as image denoising or compression, several discriminative DL methods have been proposed to achieve better image classification.
In this thesis, we have shown that the dictionary size for each class is an important factor in pattern recognition applications where variability differs between classes, for both standard and discriminative DL methods. We validated the proposition of using different dictionary sizes, based on the complexity of the class data, in a computer vision application (lip detection in face images), followed by a more complex medical imaging application: the classification of multiple sclerosis (MS) lesions in MR images. Class-specific dictionaries are learned for the lesions and for individual healthy brain tissues, and the size of the dictionary for each class is adapted according to the complexity of the underlying data. The algorithm is validated using 52 multi-sequence MR images acquired from 13 MS patients.
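The idea of class-specific dictionaries whose sizes track class complexity can be sketched with scikit-learn. The two synthetic classes of different intrinsic dimension and the dictionary sizes 4 and 16 are invented for illustration; this is not the thesis's lesion-classification pipeline.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
# Class 0 lives on a 2-dimensional subspace of R^16, class 1 on an 8-dimensional one
basis0 = rng.normal(size=(2, 16))
basis1 = rng.normal(size=(8, 16))

def sample(basis, n):
    # Random combinations of the class basis plus a little noise
    return rng.normal(size=(n, basis.shape[0])) @ basis + 0.01 * rng.normal(size=(n, 16))

# Class-specific dictionaries with sizes adapted to each class's complexity
dl0 = MiniBatchDictionaryLearning(n_components=4, transform_n_nonzero_coefs=2,
                                  random_state=0).fit(sample(basis0, 300))
dl1 = MiniBatchDictionaryLearning(n_components=16, transform_n_nonzero_coefs=2,
                                  random_state=0).fit(sample(basis1, 300))

def recon_error(dl, X):
    codes = dl.transform(X)  # sparse codes (OMP is the default transform)
    return np.linalg.norm(X - codes @ dl.components_, axis=1).mean()

# Classify by reconstruction error: class-0 samples should be reconstructed
# far better by their own dictionary than by the other class's dictionary
test0 = sample(basis0, 20)
err_own, err_other = recon_error(dl0, test0), recon_error(dl1, test0)
```

Classification then amounts to assigning each sample to the class whose dictionary yields the smaller sparse-coding residual, with the size of each dictionary chosen to match the variability of that class's data.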
Citation styles: APA, Harvard, Vancouver, ISO, etc.
8

Chen, Cong. "High-Dimensional Generative Models for 3D Perception." Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/103948.

Full text of the source
Abstract:
Modern robotics and automation systems require high-level reasoning capability in representing, identifying, and interpreting the three-dimensional data of the real world. Understanding the world's geometric structure from visual data is known as 3D perception. The necessity of analyzing irregular and complex 3D data has led to the development of high-dimensional frameworks for data learning. Here, we design several sparse learning-based approaches for high-dimensional data that effectively tackle multiple perception problems, including data filtering, data recovery, and data retrieval. The frameworks offer generative solutions for analyzing complex and irregular data structures without prior knowledge of the data. The first part of the dissertation proposes a novel method that simultaneously filters point cloud noise and outliers and completes missing data, utilizing a unified framework consisting of a novel tensor data representation, an adaptive feature encoder, and a generative Bayesian network. In the next section, a novel multi-level generative chaotic Recurrent Neural Network (RNN) is proposed using a sparse tensor structure for image restoration. In the last part of the dissertation, we discuss detection followed by localization, where we extract features from sparse tensors for data retrieval.
Doctor of Philosophy
The development of automation systems and robotics has brought the modern world unrivaled affluence and convenience. However, current automated tasks are mainly simple repetitive motions; tasks that require advanced visual cognition remain an unsolved problem for automation. Many high-level cognition-based tasks require accurate visual perception of the environment and of dynamic objects from the data received by the optical sensor. The capability to represent, identify, and interpret complex visual data in order to understand the geometric structure of the world is 3D perception. To tackle existing 3D perception challenges, this dissertation proposes a set of generative learning-based frameworks on sparse tensor data for various high-dimensional robotics perception applications: underwater point cloud filtering, image restoration, deformation detection, and localization. Underwater point cloud data are relevant for many applications, such as environmental monitoring and geological exploration. The data collected with sonar sensors are, however, subject to different types of defects, including holes, noisy measurements, and outliers. In the first chapter, we propose a generative model for point cloud data recovery using Variational Bayesian (VB) based sparse tensor factorization methods to tackle these three defects simultaneously. In the second part of the dissertation, we propose an image restoration technique to tackle missing data, which is essential for many perception applications. An efficient generative chaotic RNN framework is introduced for recovering the sparse tensor from a single corrupted image for various types of missing data. In the last chapter, a multi-level CNN for high-dimensional tensor feature extraction for underwater vehicle localization is proposed.
Citation styles: APA, Harvard, Vancouver, ISO, etc.
9

Subramanian, Harshavardhan. "Combining scientific computing and machine learning techniques to model longitudinal outcomes in clinical trials." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-176427.

Full text of the source
Abstract:
Scientific machine learning (SciML) is a new branch of AI research at the intersection of scientific computing (Sci) and machine learning (ML). It deals with the efficient amalgamation of data-driven algorithms and scientific computing to discover the dynamics of a time-evolving process. The output of such algorithms is represented in the form of governing equations (e.g., ordinary differential equations, ODEs), which one can then solve for any time point and thus obtain a rigorous prediction. In this thesis, we present a methodology for incorporating the SciML approach in the context of clinical trials to predict idiopathic pulmonary fibrosis (IPF) disease progression in the form of a governing equation. Our proposed methodology also quantifies the uncertainties associated with the model by fitting a 95% high density interval (HDI) for the ODE parameters and a 95% posterior prediction interval for posterior predicted samples. We have also investigated the possibility of predicting later outcomes by using the observations collected in the early phase of the study. We were successful in combining ML techniques, statistical methodologies, and scientific computing tools such as bootstrap sampling, cubic spline interpolation, Bayesian inference, and sparse identification of nonlinear dynamics (SINDy) to discover the dynamics behind the efficacy outcome, as well as in quantifying the uncertainty of the parameters of the governing equation in the form of 95% HDI intervals. We compared the resulting model with the existing disease progression model described by the Weibull function. Based on the mean squared error (MSE) criterion between our ODE-approximated values and the population means of the respective datasets, we achieved minimum MSEs of 0.133, 0.089, 0.213, and 0.057.
After comparing these MSE values with those obtained using the Weibull function, our ODE model reduced the error relative to the Weibull baseline by 7.5% and 8.1% for the third and pooled datasets, respectively, whereas for the first and second datasets the Weibull model reduced errors by 1.5% and 1.2%, respectively. Comparing overall performance in terms of MSE, our proposed model approximates the population means better in all cases except the first and second datasets, where the error margin is very small. In terms of interpretation, our dynamical system model contains mechanistic elements that can explain the decay/acceleration rate of the efficacy endpoint, which are missing in the Weibull model. However, our approach was limited in its ability to accurately predict final outcomes from a model derived from the 24-, 36-, and 48-week observations; by contrast, the Weibull model possesses no predictive capability at all. The extrapolated trend based on 60 weeks of data was found to be close to the population mean and to the ODE model built on 72 weeks of data. Finally, we highlight potential questions for future work.
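The SINDy step mentioned in this abstract can be sketched with sequential thresholded least squares on a toy ODE. The system (exponential decay with an invented coefficient) and the candidate library are illustrative only, not the trial data or the thesis's pipeline.

```python
import numpy as np

# Simulate dx/dt = -0.5*x (exponential decay) and recover it with
# sequential thresholded least squares, the core sparse regression of SINDy
t = np.linspace(0, 10, 200)
x = 3.0 * np.exp(-0.5 * t)
dx = np.gradient(x, t)                                   # numerical derivative

library = np.column_stack([np.ones_like(x), x, x ** 2])  # candidate terms: 1, x, x^2
names = ["1", "x", "x^2"]

coef = np.linalg.lstsq(library, dx, rcond=None)[0]
for _ in range(10):
    small = np.abs(coef) < 0.1        # threshold out weak terms
    coef[small] = 0.0
    big = ~small
    if big.any():                     # refit on the surviving terms only
        coef[big] = np.linalg.lstsq(library[:, big], dx, rcond=None)[0]

model = dict(zip(names, np.round(coef, 3)))  # the x-coefficient lands near -0.5
```

The thresholding prunes the constant and quadratic terms, leaving a single governing term; the same idea, applied to the trial's efficacy endpoint, yields the interpretable ODE the thesis compares against the Weibull baseline.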
Citation styles: APA, Harvard, Vancouver, ISO, etc.
10

Francisco, André Biasin Segalla. "Esparsidade estruturada em reconstrução de fontes de EEG." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/43/43134/tde-13052018-112615/.

Full text of the source
Abstract:
Functional Neuroimaging is an area of neuroscience which aims at developing several techniques to map the activity of the nervous system and has been under constant development over the last decades due to its high importance in clinical applications and research. Commonly applied techniques such as functional magnetic resonance imaging (fMRI) and positron emission tomography (PET) have great spatial resolution (~ mm), but a limited temporal resolution (~ s), which poses a great challenge to our understanding of the dynamics of higher cognitive functions, whose oscillations can occur on much finer temporal scales (~ ms). This limitation occurs because these techniques rely on measurements of slow biological responses which are correlated in a complicated manner with the actual electrical activity. The two major candidates that overcome this shortcoming are Electro- and Magnetoencephalography (EEG/MEG), which are non-invasive techniques that measure the electric and magnetic fields on the scalp, respectively, generated by the electrical brain sources. Both have millisecond temporal resolution, but typically low spatial resolution (~ cm) due to the highly ill-posed nature of the electromagnetic inverse problem. There has been a huge effort over the last decades to improve their spatial resolution by incorporating relevant information into the problem from other imaging modalities and/or biologically inspired constraints, allied with the development of sophisticated mathematical methods and algorithms. In this work we focus on EEG, although all techniques presented here can be equally applied to MEG because of their identical mathematical form.
In particular, we explore sparsity as a useful mathematical constraint in a Bayesian framework called Sparse Bayesian Learning (SBL), which enables meaningful unique solutions to the source reconstruction problem. Moreover, we investigate how to incorporate different structures as degrees of freedom into this framework, which is an application of structured sparsity, and show that it is a promising way to improve the source reconstruction accuracy of electromagnetic imaging methods.
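A minimal sketch of the SBL update loop for a linear inverse problem follows. The 10-sensor, 30-source "leadfield" and the two active sources are made up for illustration, and the updates are the textbook ARD fixed-point iterations, not the thesis's structured-sparsity method.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sensors, n_sources = 10, 30
Phi = rng.normal(size=(n_sensors, n_sources))   # made-up leadfield matrix
w_true = np.zeros(n_sources)
w_true[[3, 17]] = [1.0, -1.5]                   # two active sources
y = Phi @ w_true + 0.01 * rng.normal(size=n_sensors)

alpha = np.ones(n_sources)   # per-source precision hyperparameters (ARD prior)
beta = 100.0                 # noise precision
for _ in range(100):
    Sigma = np.linalg.inv(np.diag(alpha) + beta * Phi.T @ Phi)
    mu = beta * Sigma @ Phi.T @ y                # posterior mean of source amplitudes
    gamma = 1.0 - alpha * np.diag(Sigma)         # "well-determinedness" of each source
    alpha = gamma / (mu ** 2 + 1e-12)            # re-estimate source precisions
    beta = (n_sensors - gamma.sum()) / (np.sum((y - Phi @ mu) ** 2) + 1e-12)
    alpha = np.minimum(alpha, 1e12)              # cap precisions of pruned sources

active = np.flatnonzero(np.abs(mu) > 0.1)        # recovered support
```

Although the system is underdetermined (10 measurements, 30 unknowns), driving most precisions to the cap prunes inactive sources and leaves a sparse, well-determined solution — the mechanism that makes SBL attractive for the EEG inverse problem.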
Citation styles: APA, Harvard, Vancouver, ISO, etc.
More sources
