Dissertations / Theses on the topic 'Paramètre de régularisation'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 21 dissertations / theses for your research on the topic 'Paramètre de régularisation.'
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Gérard, Anthony. "Bruit de raie des ventilateurs axiaux : Estimation des sources aéroacoustiques par modèles inverse et Méthodes de contrôle." Phd thesis, Université de Poitiers, 2006. http://tel.archives-ouvertes.fr/tel-00162200.
Vaiter, Samuel. "Régularisations de Faible Complexité pour les Problèmes Inverses." Phd thesis, Université Paris Dauphine - Paris IX, 2014. http://tel.archives-ouvertes.fr/tel-01026398.
Desbat, Laurent. "Critères de choix des paramètres de régularisation : application à la déconvolution." Grenoble 1, 1990. http://www.theses.fr/1990GRE10058.
Tran, Duc Toan. "Reconstruction de sollicitations dynamiques par méthodes inverses." Thesis, Lyon 1, 2014. http://www.theses.fr/2014LYO10146.
In engineering, knowing the load applied to a structure makes it possible to solve the direct problem, whose results are the displacement and strain fields in the structure, and hence to perform dimensioning. Sometimes, however, this load must be identified a posteriori, and it is not always possible to measure it directly: for example, the loading location may not be known in advance, or a sensor cannot be placed there without being damaged or requiring too much space. We must then rely on indirect measurements of displacement, strain or acceleration, and we are led to solve inverse problems, which are generally ill-posed. One (or more) additional conditions must then be added to obtain a unique and stable solution: this is the regularization of the problem. These techniques are well known and largely rely on the singular value decomposition of the transfer matrix. However, they require an additional parameter that weights the added condition, and determining this parameter is difficult. Since few studies had compared the usual regularization methods (Tikhonov and truncation of the (G)SVD) in combination with the various criteria for determining the regularization parameter and the various possible responses, we carried out such a study in order to draw conclusions on the optimal methodology. It turns out that measuring the acceleration, combined with a criterion involving the derivative of the signal to be reconstructed, generally gives the best results, with the GCV criterion used to determine the regularization parameter. These methods assume that the location of the loading area is known. We were also interested in deducing this loading area by trying to reconstruct loads that are identically zero. This identification is easy when there are few loads to identify compared to the number of available measurements, but becomes difficult when there are no more measurements than loads to identify. Finally, we turned to load identification on a structure that plasticizes. We tried to reconstruct the load while assuming that the structure remains linear-elastic, even though it had plasticized: we used the double-load method and performed simulations with the ls-dyna software. The reconstructed load then shows a static component reflecting the residual strain in the structure; in this case, the response used to identify the load is a strain measured in a non-plasticized zone.
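As an illustration of the kind of workflow this abstract describes, here is a minimal numerical sketch of Tikhonov regularization with the regularization parameter selected by GCV; the transfer matrix G, the measurement vector y and all sizes are illustrative assumptions, not the thesis data.

```python
# Hypothetical sketch: Tikhonov-regularized reconstruction of a load from
# indirect measurements y = G f + noise, with the regularization parameter
# chosen by Generalized Cross-Validation (GCV). Names and sizes are illustrative.
import numpy as np

def tikhonov_gcv(G, y, lambdas):
    """Return the Tikhonov solution whose parameter minimizes the GCV score."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    Uty = U.T @ y
    best = None
    for lam in lambdas:
        filt = s**2 / (s**2 + lam**2)          # Tikhonov filter factors
        f_hat = Vt.T @ (filt * Uty / s)        # regularized solution
        residual = np.linalg.norm(G @ f_hat - y)**2
        trace = len(y) - filt.sum()            # effective residual degrees of freedom
        gcv = residual / trace**2              # GCV score
        if best is None or gcv < best[0]:
            best = (gcv, lam, f_hat)
    return best[1], best[2]

# Toy usage on a mildly ill-conditioned system
rng = np.random.default_rng(0)
G = rng.standard_normal((50, 20)) @ np.diag(np.logspace(0, -4, 20))
f_true = rng.standard_normal(20)
y = G @ f_true + 1e-3 * rng.standard_normal(50)
lam_opt, f_est = tikhonov_gcv(G, y, np.logspace(-6, 0, 60))
```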
Dejean-Viellard, Catherine. "Etude des techniques de régularisation en radiothérapie conformationnelle avec modulation d'intensité et évaluation quantitative des distributions de dose optimales." Toulouse 3, 2003. http://www.theses.fr/2003TOU30195.
Nus, Ludivine. "Méthodes rapides de traitement d’images hyperspectrales. Application à la caractérisation en temps réel du matériau bois." Thesis, Université de Lorraine, 2019. http://www.theses.fr/2019LORR0163/document.
This PhD dissertation addresses the problem of on-line unmixing of hyperspectral images acquired by a pushbroom imaging system, for the real-time characterization of wood. The first part of this work proposes an on-line mixing model based on non-negative matrix factorization. Based on this model, three algorithms for on-line sequential unmixing, using respectively multiplicative update rules, the Nesterov optimal gradient and ADMM (Alternating Direction Method of Multipliers), are developed. These algorithms are specially designed to perform the unmixing in real time, at the acquisition rate of the pushbroom imager. In order to regularize the (generally ill-posed) estimation problem, two types of constraints on the endmembers are used: a minimum dispersion constraint and a minimum volume constraint. A method for the unsupervised estimation of the regularization parameter is also proposed, by reformulating the on-line hyperspectral unmixing problem as a bi-objective optimization. In the second part of this manuscript, we propose an approach for handling the variation in the number of sources, i.e. the rank of the decomposition, during the processing. The previously developed on-line algorithms are thus modified by introducing a hyperspectral library learning stage as well as sparsity constraints allowing only the active sources to be selected. Finally, the third part of this work consists in applying these approaches to the detection and classification of wood singularities.
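A minimal sketch of plain NMF with multiplicative updates, the building block mentioned above; it is an offline toy version, not the on-line algorithm of the thesis, and the matrix names (X for pixel spectra, S for abundances, A for endmembers) are assumptions for illustration.

```python
# Minimal sketch (not the author's algorithm): plain NMF with multiplicative
# updates, X ≈ S A, where rows of X are pixel spectra, S holds the abundances
# and A the endmember spectra. All names and sizes are illustrative.
import numpy as np

def nmf_multiplicative(X, rank, n_iter=200, eps=1e-12):
    rng = np.random.default_rng(0)
    S = rng.random((X.shape[0], rank))        # abundances (non-negative)
    A = rng.random((rank, X.shape[1]))        # endmembers (non-negative)
    for _ in range(n_iter):
        A *= (S.T @ X) / (S.T @ S @ A + eps)  # multiplicative update, keeps A >= 0
        S *= (X @ A.T) / (S @ A @ A.T + eps)  # multiplicative update, keeps S >= 0
    return S, A

X = np.abs(np.random.default_rng(1).random((100, 64)))  # 100 pixels, 64 spectral bands
S, A = nmf_multiplicative(X, rank=3)
```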
Seyed, Aghamiry Seyed Hossein. "Imagerie sismique multi-paramètre par reconstruction de champs d'ondes : apport de la méthode des multiplicateurs de Lagrange avec directions alternées (ADMM) et des régularisations hybrides." Thesis, Université Côte d'Azur (ComUE), 2019. http://www.theses.fr/2019AZUR4090.
Full Waveform Inversion (FWI) is a PDE-constrained optimization problem which reconstructs subsurface parameters from sparse measurements of seismic wavefields. FWI generally relies on local optimization techniques and a reduced-space approach where the wavefields are eliminated from the variables. In this setting, two bottlenecks of FWI are nonlinearity and ill-posedness. One source of nonlinearity is cycle skipping, which drives the inversion to spurious minima when the starting subsurface model is not kinematically accurate enough. Ill-posedness can result from incomplete subsurface illumination, noise and parameter cross-talk. This thesis aims to mitigate these pathologies with new optimization and regularization strategies. I first improve the wavefield reconstruction inversion (WRI) method. WRI extends the FWI search space by computing wavefields with a relaxation of the wave equation so as to match the data from inaccurate parameters. Then, the parameters are updated by minimizing wave-equation errors with either alternating optimization or variable projection. In the former case, WRI breaks FWI down into two linear subproblems thanks to the bilinearity of the wave equation. WRI was initially implemented with a penalty method, which requires a tedious adaptation of the penalty parameter over the iterations. Here, I replace the penalty method by the alternating-direction method of multipliers (ADMM). I show with numerical examples how ADMM reconciles the search-space extension and the accuracy of the solution at the convergence point with fixed penalty parameters, thanks to the dual-ascent update of the Lagrange multipliers. The second contribution is the implementation of bound constraints and non-smooth Total Variation (TV) regularization in ADMM-based WRI. Following the Split Bregman method, suitable auxiliary variables allow the ℓ1 and ℓ2 subproblems to be decoupled, the former being solved efficiently with proximity operators. Then, I combine Tikhonov and TV regularizations by infimal convolution to account for the different statistical properties of the subsurface (smoothness and blockiness). Next, I show the ability of sparsity-promoting regularization to reconstruct the model when ultra-long-offset, sparse, fixed-spread acquisitions such as those carried out with OBN are used. This thesis continues with the extension of ADMM-based WRI to multiparameter reconstruction in vertical transversely isotropic (VTI) acoustic media. I first show that the bilinearity of the wave equation is satisfied for the elastodynamic equations, and I discuss the joint reconstruction of the vertical wavespeed and epsilon in VTI media. Second, I develop ADMM-based WRI for attenuation imaging, where I update the wavefield, the squared slowness and the attenuation in an alternating mode, since the viscoacoustic wave equation can be approximated, with a high degree of accuracy, as a multilinear equation. This alternating solving provides the flexibility needed to tailor the regularization to each parameter class and to invert large data sets. Then, I overcome some limitations of ADMM-based WRI when a crude initial model is used. In this case, the reconstructed wavefields are accurate only near the receivers, and the inaccurate phase of the wavefields may be the leading factor that drives the inversion towards spurious minimizers. To mitigate the role of the phase during the early iterations, I update the parameters with phase retrieval, a process which reconstructs a signal from the magnitude of linear measurements.
This approach, combined with efficient regularizations, leads to a more accurate reconstruction of the shallow structure, which is decisive in driving ADMM-based WRI toward good solutions at higher frequencies. The last part of this PhD is devoted to time-domain WRI, where a challenge is to perform accurate wavefield reconstruction at an acceptable computational cost.
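A generic ADMM sketch, here applied to an l1-regularized least-squares toy problem rather than to wavefield reconstruction, simply to illustrate the splitting into subproblems with a fixed penalty rho and a dual-ascent update of the multipliers evoked in the abstract; all names are illustrative.

```python
# Generic illustrative sketch of ADMM (not the wavefield-reconstruction code):
# solve min_x 0.5*||Ax - b||^2 + mu*||x||_1 by splitting x and z, with a fixed
# penalty rho and a dual-ascent update of the multipliers u.
import numpy as np

def admm_lasso(A, b, mu=0.1, rho=1.0, n_iter=200):
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    AtA = A.T @ A; Atb = A.T @ b
    Q = np.linalg.inv(AtA + rho * np.eye(n))       # cached solve for the x-subproblem
    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    for _ in range(n_iter):
        x = Q @ (Atb + rho * (z - u))              # quadratic subproblem
        z = soft(x + u, mu / rho)                  # proximal (soft-thresholding) step
        u += x - z                                 # dual ascent on the multipliers
    return z

A = np.random.default_rng(2).standard_normal((80, 40))
x_true = np.zeros(40); x_true[[3, 17, 29]] = [1.5, -2.0, 0.7]
b = A @ x_true + 0.01 * np.random.default_rng(3).standard_normal(80)
x_hat = admm_lasso(A, b, mu=0.5)
```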
Marteau, Clément. "Recherche d'inégalités oracles pour des problèmes inverses." Phd thesis, Université de Provence - Aix-Marseille I, 2007. http://tel.archives-ouvertes.fr/tel-00384095.
An oracle inequality makes it possible to compare, without any assumption on the target function $f$ and from a non-asymptotic point of view, the performance of $f^{\star}$ with that of the best estimator in $\Lambda$ chosen with knowledge of $f$. With the aim of obtaining such inequalities, this thesis is organized around two objectives: a better understanding of inverse problems when the operator is
poorly known, and the extension of the risk hull minimization (RHM) algorithm to a wider range of applications.
Complete knowledge of the operator A is indeed an implicit assumption in most existing methods. It is nevertheless reasonable to think that it may be partially, or even totally, unknown. We therefore first generalize the penalized blockwise Stein method and the RHM algorithm to this situation. The latter, introduced by L. Cavalier and Y. Golubev, considerably improves the performance of the traditional unbiased risk estimation method. However, this procedure only applies to projection estimators, which in practice often perform worse than Tikhonov estimators or iterative procedures, which are in some sense much finer. In the last part, we therefore extend the use of the risk hull to a much wider class of estimators.
Tardivel, Patrick. "Représentation parcimonieuse et procédures de tests multiples : application à la métabolomique." Thesis, Toulouse 3, 2017. http://www.theses.fr/2017TOU30316/document.
Let Y be a Gaussian vector distributed according to N(m, σ²Id_n) and let X be an n x p matrix, with Y observed, m unknown, and σ and X known. In the linear model, m is assumed to be a linear combination of the columns of X. In small dimension, when n ≥ p and ker(X) = {0}, there exists a unique parameter β* such that m = Xβ*; we can then write Y = Xβ* + ε. In this small-dimensional Gaussian linear model framework, we construct a new multiple testing procedure controlling the FWER to test the null hypotheses β*_i = 0 for i in {1, ..., p}. This procedure is applied in metabolomics through the freeware ASICS, available online; ASICS allows metabolites to be identified and quantified via the analysis of NMR spectra. In high dimension, when n < p, we have ker(X) ≠ {0} and consequently the parameter β* described above is no longer unique. In the noiseless case, when σ = 0 and thus Y = m, we show that the solutions of the linear system Y = Xβ having a minimal number of non-zero components are obtained via ℓα minimization with α small enough.
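A hedged sketch of FWER-controlled multiple testing in the low-dimensional Gaussian linear model described above, using a simple Bonferroni correction rather than the specific procedure of the thesis; the data and the known noise level sigma are toy assumptions.

```python
# Hedged sketch (not the ASICS procedure itself): testing the hypotheses
# beta*_i = 0 in the low-dimensional Gaussian model Y = X beta* + eps with
# known sigma, controlling the FWER with a Bonferroni correction.
import numpy as np
from scipy import stats

def fwer_bonferroni_tests(X, Y, sigma, alpha=0.05):
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta_hat = XtX_inv @ X.T @ Y                      # least-squares estimate
    se = sigma * np.sqrt(np.diag(XtX_inv))            # exact standard errors (sigma known)
    pvals = 2 * stats.norm.sf(np.abs(beta_hat) / se)  # two-sided z-tests
    return beta_hat, pvals, pvals < alpha / p         # Bonferroni: reject if p < alpha/p

rng = np.random.default_rng(4)
X = rng.standard_normal((200, 10))
beta_star = np.zeros(10); beta_star[[1, 4]] = [2.0, -1.5]
Y = X @ beta_star + 0.5 * rng.standard_normal(200)
_, pvals, rejected = fwer_bonferroni_tests(X, Y, sigma=0.5)
```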
Pascal, Barbara. "Estimation régularisée d'attributs fractals par minimisation convexe pour la segmentation de textures : formulations variationnelles conjointes, algorithmes proximaux rapides et sélection non supervisée des paramètres de régularisation; Applications à l'étude du frottement solide et de la microfluidique des écoulements multiphasiques." Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEN042.
In this doctoral thesis, several scale-free texture segmentation procedures based on two fractal attributes, the Hölder exponent, measuring the local regularity of a texture, and the local variance, are proposed. A piecewise homogeneous fractal texture model is built, along with a synthesis procedure providing images composed of the aggregation of fractal texture patches with known attributes and segmentation. This synthesis procedure is used to evaluate the performance of the proposed methods. A first method, based on Total Variation regularization of a noisy estimate of the local regularity, is illustrated and refined thanks to a post-processing step consisting in an iterative thresholding and resulting in a segmentation. After evidencing the limitations of this first approach, two segmentation methods, with either "free" or "co-located" contours, are built, jointly taking into account the local regularity and the local variance. These two procedures are formulated as convex non-smooth functional minimization problems. We show that the two functionals, with "free" and "co-located" penalizations, are both strongly convex, and compute their respective strong convexity moduli. Several minimization schemes are derived, and their convergence speeds are compared. The segmentation performance of the different methods is evaluated over a large amount of synthetic data in configurations of increasing difficulty, as well as on real-world images, and compared to state-of-the-art procedures, including convolutional neural networks. An application to the segmentation of images from a multiphasic flow through porous medium experiment is presented. Finally, a strategy for the automated selection of the hyperparameters of the "free" and "co-located" functionals is built, inspired by the SURE estimator of the quadratic risk.
Bashtova, Kateryna. "Modélisation et identification de paramètres pour les empreintes des faisceaux de haute énergie." Thesis, Université Côte d'Azur (ComUE), 2016. http://www.theses.fr/2016AZUR4112/document.
Technological progress demands ever more sophisticated and precise techniques for the treatment of materials. We study the machining of material with high-energy beams: the abrasive waterjet, the focused ion beam and the laser. Although the physics governing the interaction of the energy beam with the material is very different for each application, the same approach can be used for the mathematical modeling of these processes. The evolution of the material surface under the energy beam impact is modeled by a PDE. This equation contains a set of unknown parameters, the calibration parameters of the model. The unknown parameters can be identified by minimizing a cost function, i.e. a function that describes the difference between the result of the modeling and the corresponding experimental data. As the modeled surface is a solution of the PDE problem, this minimization is an example of a PDE-constrained optimization problem. The identification problem is regularized using Tikhonov regularization. The gradient of the cost function is obtained both by a variational approach and by means of automatic differentiation. Once the cost function and its gradient are computed, the minimization is performed with the L-BFGS minimizer. For the abrasive waterjet application, the problem of non-uniqueness of the numerical solution is solved, and the impact of secondary effects not included in the model is avoided as well. The calibration procedure is validated on both synthetic and experimental data. For the laser application, we present a simple criterion that allows the thermal and non-thermal laser ablation regimes to be distinguished.
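A small illustrative sketch of the calibration workflow described above: minimizing a Tikhonov-regularized misfit with L-BFGS; the forward model used here is a trivial stand-in for the PDE surface-evolution model, and all parameter names are assumptions.

```python
# Illustrative sketch only: calibrating model parameters theta by minimizing a
# Tikhonov-regularized misfit between a (here trivial, stand-in) forward model
# and experimental data with L-BFGS.
import numpy as np
from scipy.optimize import minimize

def forward_model(theta, t):
    # Stand-in for the PDE surface-evolution model (purely illustrative).
    return theta[0] * np.exp(-theta[1] * t)

t = np.linspace(0, 5, 50)
data = forward_model(np.array([2.0, 0.8]), t) + 0.02 * np.random.default_rng(5).standard_normal(50)
theta_prior = np.array([1.0, 1.0])
alpha = 1e-2                                          # Tikhonov weight

def cost(theta):
    misfit = forward_model(theta, t) - data
    return 0.5 * misfit @ misfit + 0.5 * alpha * np.sum((theta - theta_prior) ** 2)

res = minimize(cost, x0=theta_prior, method="L-BFGS-B")  # gradient approximated numerically here
theta_hat = res.x
```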
Chaari, Lotfi. "Problèmes de reconstruction en Imagerie par Résonance Magnétique parallèle à l'aide de représentations en ondelettes." Phd thesis, Université Paris-Est, 2010. http://tel.archives-ouvertes.fr/tel-00575136.
Full textAcheli, Dalila. "Application de la méthode des sentinelles à quelques problèmes inverses." Compiègne, 1997. http://www.theses.fr/1997COMP1034.
The aim of our work is the application of the sentinels method to solve two inverse problems. It concerns the estimation of the parameters of two given models using measurements taken on the process. As the framework is nonlinear, the estimation of the parameters is performed iteratively by the tangent sentinels method. The first problem is environmental: the parameters to be estimated are the coordinates of the trajectory of a pollution source in a river, the pollution phenomenon being governed by a system of PDEs. The second problem comes from the medical field: the aim is to estimate the kinetic parameters of an enzymatic reaction, the model considered being a system of differential equations. First, we show the existence and uniqueness of the solution of the system. Then, we study the stability of the solution using a Lyapunov function. Under certain hypotheses, the global identifiability of the parameters is shown, based on differential algebra and Taylor expansions. We also give a detailed study of the sensitivity of the observations with respect to the model parameters. In order to check the efficiency of this method, tests were done on noisy data. The method becomes deficient as the noise on the measurements becomes important; in that case the inverse problem is ill-posed, since a perturbation of the data implies an important change in the solution. Among the techniques used to improve the conditioning of the problem, the Gauss-Newton iterative regularization technique remains inefficient. We therefore propose a new regularization approach, called the "iterative regularized Tikhonov method". Tests were conducted on different types of pharmacokinetic experiments; they show that this approach is robust with respect to measurement noise and allows good parameter identification.
Zhang, Mo. "Vers une méthode de restauration aveugle d’images hyperspectrales." Thesis, Rennes 1, 2018. http://www.theses.fr/2018REN1S132.
We propose in this thesis manuscript to develop a blind restoration method for single-component blurred and noisy images where no prior knowledge is required. The manuscript is composed of three chapters. The first chapter focuses on state-of-the-art work: the optimization approaches for solving the restoration problem are discussed first, and then the main restoration methods, so-called semi-blind because they require a minimum of a priori knowledge, are analysed; five of these methods are selected for evaluation. The second chapter is devoted to comparing the performance of the methods selected in the previous chapter. The main objective criteria for evaluating the quality of the restored images are presented; of these criteria, the l1 norm of the estimation error is selected. The comparative study, conducted on a database of monochromatic images artificially degraded by two blurring functions with different support sizes and three levels of noise, revealed the two most relevant methods. The first one is based on a single-scale alternating approach where both the PSF and the image are estimated alternately. The second one uses a multi-scale hybrid approach, which first alternately estimates the PSF and a latent image, and then, in a sequential step, restores the image. In the comparative study performed, the benefit goes to the latter. The performance of both methods is used as a reference against which the newly designed method is then compared. The third chapter deals with the developed method. We have sought to make the hybrid approach retained in the previous chapter as blind as possible while improving the quality of estimation of both the PSF and the restored image. The contributions cover a number of points. A first series concerns the redefinition of the scales, the initialization of the latent image at each scale level, the evolution of the parameters for selecting the relevant contours supporting the PSF estimation, and finally the definition of a blind stopping criterion. A second series of contributions concentrates on the blind estimation of the two regularization parameters involved, in order to avoid having to fix them empirically; each parameter is associated with a separate cost function, either for the PSF estimation or for the estimation of a latent image. In the sequential step that follows, we refine the estimate of the support of the PSF obtained in the previous alternating step, before exploiting it in the image restoration process. At this level, the only a priori knowledge necessary is an upper bound on the support of the PSF. The evaluations performed on monochromatic and hyperspectral images artificially degraded by several motion-type blurs with different support sizes show a clear improvement in the quality of restoration obtained by the newly designed method in comparison with the two best state-of-the-art methods retained.
Song, Yingying. "Amélioration de la résolution spatiale d’une image hyperspectrale par déconvolution et séparation-déconvolution conjointes." Thesis, Université de Lorraine, 2018. http://www.theses.fr/2018LORR0207/document.
A hyperspectral image is a 3D data cube in which every pixel provides local spectral information about a scene of interest across a large number of contiguous bands. The observed images may suffer from degradation due to the measuring device, resulting in a convolution or blurring of the images. Hyperspectral image deconvolution (HID) consists in removing the blurring to improve the spatial resolution of the images as much as possible. A Tikhonov-like HID criterion with a non-negativity constraint is considered here. This method uses separable spatial and spectral regularization terms whose strengths are controlled by two regularization parameters. The first part of this thesis proposes the maximum curvature criterion (MCC) and the minimum distance criterion (MDC) to automatically estimate these regularization parameters, by formulating the deconvolution problem as a multi-objective optimization problem. The second part of this thesis proposes the sliding-block regularized LMS (SBR-LMS) algorithm for the online deconvolution of hyperspectral images as provided by whiskbroom and pushbroom scanning systems. The proposed algorithm accounts for the non-causality of the convolution kernel and includes non-quadratic regularization terms while maintaining a linear complexity compatible with real-time processing in industrial applications. The third part of this thesis proposes joint unmixing-deconvolution methods based on the Tikhonov criterion in both offline and online contexts; the non-negativity constraint is added to improve their performance.
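A small sketch of Tikhonov-regularized image deconvolution in the Fourier domain, assuming periodic boundaries and a known blur kernel; the two separable regularization terms of the thesis are collapsed into a single weight mu here, so this is only a simplified illustration.

```python
# Simplified sketch (assumptions: periodic boundaries, known blur kernel h) of
# Tikhonov-regularized deconvolution of a single band in the Fourier domain.
import numpy as np

def tikhonov_deconvolve(y, h, mu=1e-2):
    """Deconvolve a 2-D image y blurred by kernel h (both real arrays)."""
    H = np.fft.fft2(h, s=y.shape)                       # transfer function of the blur
    Y = np.fft.fft2(y)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + mu)          # Tikhonov (identity prior) filter
    return np.real(np.fft.ifft2(X))

rng = np.random.default_rng(6)
x = rng.random((64, 64))
h = np.outer(np.hanning(7), np.hanning(7)); h /= h.sum()  # toy blur kernel
y = np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(h, s=x.shape)))
x_hat = tikhonov_deconvolve(y, h, mu=1e-3)
```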
Bennani, Youssef. "Caractérisation de la diversité d'une population à partir de mesures quantifiées d'un modèle non-linéaire. Application à la plongée hyperbare." Thesis, Nice, 2015. http://www.theses.fr/2015NICE4128/document.
This thesis proposes a new method for nonparametric density estimation from censored data, where the censoring regions can have arbitrary shape and are elements of partitions of the parametric domain. This study has been motivated by the need to estimate the distribution of the parameters of a biophysical model of decompression, in order to be able to predict the risk of decompression sickness. In this context, the observations correspond to quantified counts of bubbles circulating in the blood of a set of divers having explored a variety of diving profiles (depth, duration); the biophysical model predicts the gas volume produced along a given diving profile for a diver with known biophysical parameters. In a first step, we point out the limitations of the classical nonparametric maximum-likelihood estimator. We propose several methods for its calculation and show that it suffers from several problems: in particular, it concentrates the probability mass in a few regions only, which makes it inappropriate for describing a natural population. We then propose a new approach relying both on the maximum-entropy principle, in order to ensure a suitable regularity of the solution, and on the maximum-likelihood criterion, to guarantee a good fit to the data. It consists in searching for the probability law with maximum entropy whose maximum deviation from the empirical averages is set by maximizing the data likelihood. Several examples illustrate the superiority of our solution compared to the classical nonparametric maximum-likelihood estimator, in particular concerning generalisation performance.
Ndiaye, Eugene. "Safe optimization algorithms for variable selection and hyperparameter tuning." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLT004/document.
Massive and automatic data processing requires the development of techniques able to filter out the most important information. Among these methods, those with sparse structures have been shown to improve the statistical and computational efficiency of estimators in a high-dimensional context. They can often be expressed as a solution of a regularized empirical risk minimization and generally lead to non-differentiable optimization problems in the form of a sum of a smooth term, measuring the quality of the fit, and a non-smooth term, penalizing complex solutions. Although it has considerable advantages, such a way of including prior information unfortunately introduces many numerical difficulties, both for solving the underlying optimization problem and for calibrating the level of regularization. Solving these issues has been at the heart of this thesis. A recently introduced technique, called "screening rules", proposes to ignore some variables during the optimization process by taking advantage of the expected sparsity of the solutions. These elimination rules are said to be safe when the procedure is guaranteed not to reject any variable wrongly. In this work, we propose a unified framework for identifying important structures in these convex optimization problems and we introduce the "Gap Safe screening rules". They allow significant gains in computational time thanks to the dimensionality reduction induced by this method. In addition, they can easily be inserted into iterative algorithms and apply to a large number of problems. To find a good compromise between minimizing risk and introducing a learning bias, (exact) homotopy continuation algorithms offer the possibility of tracking the curve of the solutions as a function of the regularization parameters. However, they exhibit numerical instabilities due to several matrix inversions and are often expensive in large dimension. Another weakness is that a worst-case analysis shows that their exact complexity is exponential in the dimension of the model parameter. Allowing approximate solutions makes it possible to circumvent the aforementioned drawbacks by approximating the curve of the solutions. In this thesis, we revisit the approximation techniques for regularization paths given a predefined tolerance and we propose an in-depth analysis of their complexity with respect to the regularity of the loss functions involved. We propose optimal algorithms as well as various strategies for exploring the parameter space. We also provide a calibration method (for the regularization parameter) that enjoys global convergence guarantees for the minimization of the empirical risk on the validation data. Among sparse regularization methods, the Lasso is one of the most celebrated and studied. Its statistical theory suggests choosing the level of regularization according to the amount of variance in the observations, which is difficult to use in practice because the variance of the model is often an unknown quantity. In such cases, it is possible to jointly optimize the regression parameter and the level of noise. These concomitant estimates appeared in the literature under the names of Scaled Lasso and Square-Root Lasso, and provide theoretical results as sharp as those of the Lasso while being independent of the actual noise level of the observations. Although they represent important advances, these methods are numerically unstable and the currently available algorithms are expensive in computation time.
We illustrate these difficulties and propose modifications based on smoothing techniques to increase the stability of these estimators and to introduce a faster algorithm.
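A hedged sketch of a Gap Safe screening test for the Lasso, as a minimal illustration of the safe elimination rules discussed above; it is not the thesis implementation, and the data are toy assumptions.

```python
# Hedged sketch of a Gap Safe screening test for the Lasso
# min_beta 0.5*||y - X beta||^2 + lam*||beta||_1 : features whose test value
# stays below 1 are guaranteed inactive and can be discarded.
import numpy as np

def gap_safe_screen(X, y, beta, lam):
    residual = y - X @ beta
    # Dual-feasible point obtained by rescaling the residual
    theta = residual / max(lam, np.max(np.abs(X.T @ residual)))
    primal = 0.5 * residual @ residual + lam * np.sum(np.abs(beta))
    dual = 0.5 * y @ y - 0.5 * lam**2 * np.sum((theta - y / lam) ** 2)
    gap = primal - dual                                 # duality gap at (beta, theta)
    radius = np.sqrt(2 * max(gap, 0.0)) / lam           # Gap Safe sphere radius
    scores = np.abs(X.T @ theta) + radius * np.linalg.norm(X, axis=0)
    return scores < 1.0          # True -> coefficient certified to be zero

rng = np.random.default_rng(7)
X = rng.standard_normal((100, 300))
y = X[:, :5] @ rng.standard_normal(5) + 0.1 * rng.standard_normal(100)
screened_out = gap_safe_screen(X, y, beta=np.zeros(300), lam=0.5 * np.max(np.abs(X.T @ y)))
```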
Li, Xuhong. "Regularization schemes for transfer learning with convolutional networks." Thesis, Compiègne, 2019. http://www.theses.fr/2019COMP2497/document.
Transfer learning with deep convolutional neural networks significantly reduces the computation and data overhead of the training process and boosts the performance on the target task, compared to training from scratch. However, transfer learning with a deep network may cause the model to forget the knowledge acquired when learning the source task, leading to so-called catastrophic forgetting. Since the efficiency of transfer learning derives from the knowledge acquired on the source task, this knowledge should be preserved during transfer. This thesis addresses this forgetting problem by proposing two regularization schemes that preserve the knowledge during transfer. First we investigate several forms of parameter regularization, all of which explicitly promote the similarity of the final solution with the initial model, based on the L1, L2 and Group-Lasso penalties. We also propose variants that use Fisher information as a metric for measuring the importance of parameters. We validate these parameter regularization approaches on various tasks. The second regularization scheme is based on the theory of optimal transport, which makes it possible to estimate the dissimilarity between two distributions. We benefit from optimal transport to penalize the deviations of high-level representations between the source and target task, with the same objective of preserving knowledge during transfer learning. With a mild increase in computation time during training, this novel regularization approach improves the performance of the target tasks and yields higher accuracy on image classification tasks compared to parameter regularization approaches.
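A small numpy sketch of the parameter-regularization idea discussed above: when fine-tuning on a target task, penalize the distance to the source-task weights rather than to zero, so that source knowledge is preserved; the linear model and all names are illustrative assumptions.

```python
# Small numpy sketch (not the thesis code) of the "-SP" idea: penalize the
# distance to the source-task weights w_src instead of the distance to zero.
import numpy as np

def finetune_l2sp(X, y, w_src, lam=0.1, lr=0.01, n_iter=500):
    w = w_src.copy()                                  # start from the pre-trained weights
    for _ in range(n_iter):
        pred = X @ w
        grad = X.T @ (pred - y) / len(y) + lam * (w - w_src)   # ridge toward w_src, not 0
        w -= lr * grad                                # plain gradient descent step
    return w

rng = np.random.default_rng(8)
X = rng.standard_normal((200, 20))
w_src = rng.standard_normal(20)                       # weights learned on the source task
y = X @ (w_src + 0.3 * rng.standard_normal(20))       # target task close to the source one
w_tgt = finetune_l2sp(X, y, w_src)
```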
Kervazo, Christophe. "Optimization framework for large-scale sparse blind source separation." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS354/document.
During the last decades, Blind Source Separation (BSS) has become a key analysis tool to study multi-valued data. The objective of this thesis is, however, to focus on large-scale settings, for which most classical algorithms fail. More specifically, it is subdivided into four sub-problems rooted in the large-scale sparse BSS issue: i) introduce a mathematically sound, robust sparse BSS algorithm which does not require any relaunch (despite a difficult hyper-parameter choice); ii) introduce a method able to maintain high-quality separations even when a large number of sources needs to be estimated; iii) make a classical sparse BSS algorithm scalable to large-scale datasets; and iv) extend sparse BSS to the non-linear problem. The proposed methods are extensively tested on both simulated and realistic experiments to demonstrate their quality. In-depth interpretations of the results are proposed.
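A toy sketch of one sparse-BSS ingredient, alternating least-squares updates with soft thresholding as a sparsity prior, loosely in the spirit of the algorithms discussed above; it is not the thesis implementation and all sizes are assumptions.

```python
# Toy sketch of sparse BSS: alternate least-squares estimation of the sources S
# and the mixing matrix A for X = A S, with soft thresholding enforcing sparsity.
import numpy as np

def sparse_bss(X, n_sources, n_iter=100, thr=0.1):
    rng = np.random.default_rng(0)
    A = rng.standard_normal((X.shape[0], n_sources))
    for _ in range(n_iter):
        S = np.linalg.pinv(A) @ X                            # least-squares source update
        S = np.sign(S) * np.maximum(np.abs(S) - thr, 0.0)    # soft threshold: sparsity prior
        A = X @ np.linalg.pinv(S)                            # least-squares mixing update
        A /= np.linalg.norm(A, axis=0, keepdims=True) + 1e-12  # fix the scale indeterminacy
    return A, S

rng = np.random.default_rng(10)
S_true = rng.standard_normal((3, 500)) * (rng.random((3, 500)) < 0.1)  # sparse sources
A_true = rng.standard_normal((8, 3))
X = A_true @ S_true + 0.01 * rng.standard_normal((8, 500))
A_est, S_est = sparse_bss(X, n_sources=3)
```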
Essebbar, Abderrahman. "Séparation paramétrique des ondes en sismique." Phd thesis, Grenoble INPG, 1992. http://tel.archives-ouvertes.fr/tel-00785644.
Nassif, Roula. "Estimation distribuée adaptative sur les réseaux multitâches." Thesis, Université Côte d'Azur (ComUE), 2016. http://www.theses.fr/2016AZUR4118/document.
Distributed adaptive learning allows a collection of interconnected agents to perform parameter estimation tasks from streaming data by relying solely on local computations and interactions with immediate neighbors. Most prior literature on distributed inference is concerned with single-task problems, where agents with separable objective functions need to agree on a common parameter vector. However, many network applications require more complex models and more flexible algorithms than single-task implementations, since their agents need to estimate and track multiple objectives simultaneously. Networks of this kind, where agents need to infer multiple parameter vectors, are referred to as multitask networks. Although agents may generally have distinct though related tasks to perform, they may still be able to capitalize on inductive transfer between them to improve their estimation accuracy. This thesis is intended to bring forth advances on distributed inference over multitask networks. First, we present the well-known diffusion LMS strategies to solve single-task estimation problems and we assess their performance when they are run in multitask environments in the presence of noisy communication links. An improved strategy, allowing the agents to adapt their cooperation to neighbors sharing the same objective, is presented in order to attain improved learning and estimation over networks. Next, we consider the multitask diffusion LMS strategy which has been proposed to solve multitask estimation problems where the network is decomposed into clusters of agents seeking different
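An illustrative adapt-then-combine (ATC) diffusion LMS sketch for a single-task network, the baseline strategy mentioned in the abstract; the network topology, data stream and step size are toy assumptions.

```python
# Illustrative adapt-then-combine (ATC) diffusion LMS sketch for a single-task
# network: every agent adapts with its own data, then averages the estimates
# of its neighbors.
import numpy as np

def diffusion_lms(neighbors, data_stream, p, mu=0.01, n_iter=2000):
    N = len(neighbors)
    W = np.zeros((N, p))                              # one estimate per agent
    A = np.zeros((N, N))                              # uniform combination weights
    for k, nb in enumerate(neighbors):
        A[nb, k] = 1.0 / len(nb)
    for _ in range(n_iter):
        psi = np.zeros_like(W)
        for k in range(N):
            x, d = data_stream(k)                     # regressor and measurement of agent k
            psi[k] = W[k] + mu * x * (d - x @ W[k])   # local LMS adaptation step
        W = A.T @ psi                                 # combination with the neighbors
    return W

rng = np.random.default_rng(9)
w_star = rng.standard_normal(4)
def stream(k, rng=rng, w=w_star):
    x = rng.standard_normal(4)
    return x, x @ w + 0.05 * rng.standard_normal()
neighbors = [[0, 1], [0, 1, 2], [1, 2]]               # 3 agents on a line graph (self-loops included)
W_est = diffusion_lms(neighbors, stream, p=4)
```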