Dissertations / Theses on the topic 'Algorithme du gradient stochastique'
Mercier, Quentin. "Optimisation multicritère sous incertitudes : un algorithme de descente stochastique." Thesis, Université Côte d'Azur (ComUE), 2018. http://www.theses.fr/2018AZUR4076/document.
This thesis deals with unconstrained multiobjective optimization when the objectives are written as expectations of random functions. The randomness is modelled through random variables, and we consider that it does not impact the optimization variables of the problem. A descent algorithm is proposed which gives the Pareto solutions without having to estimate the expectations. Using convex analysis results, it is possible to construct a common descent vector, i.e. a descent vector for all the objectives simultaneously, for a given draw of the random variables. An iterative sequence is then built: at each step, one descends along this common descent vector computed at the current point for a single independent draw of the random variables. This construction avoids the costly estimation of the expectations at each step of the algorithm. It is then possible to prove mean-square and almost-sure convergence of the sequence towards Pareto solutions of the problem and, at the same time, to obtain a convergence rate when the step-size sequence is well chosen. After proposing some numerical enhancements of the algorithm, it is tested on multiple test cases against two classical algorithms of the literature. The results of the three algorithms are then compared using two measures devised for multiobjective optimization and analysed through performance profiles. Methods are then proposed to handle two types of constraints and are illustrated on mechanical structure optimization problems. The first method consists in penalising the objective functions using exact penalty functions when the constraint is deterministic. When the constraint is expressed as a probability, the constraint is replaced by an additional objective. The probability is then reformulated as the expectation of an indicator function, and this new problem is solved using the algorithm proposed in the thesis without having to estimate the probability during the optimization process.
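The common-descent-vector construction can be illustrated for two objectives: the minimal-norm element of the convex hull of the two sampled gradients is a descent direction for both objectives whenever it is non-zero, and for two gradients it has a closed form. The sketch below is not the thesis' implementation; the quadratic objectives, noise model, and step-size schedule are made-up illustrations of the idea of using one fresh random draw per iteration.

```python
import numpy as np

def common_descent_direction(g1, g2):
    """Minimal-norm element of the convex hull of {g1, g2}.
    It is a descent direction for both objectives when non-zero."""
    diff = g1 - g2
    denom = diff @ diff
    if denom == 0.0:
        return g1
    t = np.clip(-(g2 @ diff) / denom, 0.0, 1.0)
    return t * g1 + (1.0 - t) * g2

def stochastic_multi_gradient(x, steps=2000, seed=0):
    """Descend along the common descent vector computed for a single
    independent draw of the random variable at each step (illustrative
    objectives: E[(x-1)^2] and E[(x+1)^2] with shared additive noise)."""
    rng = np.random.default_rng(seed)
    for k in range(1, steps + 1):
        xi = rng.normal(scale=0.1, size=x.shape)  # one draw per step
        g1 = 2.0 * (x - 1.0) + xi                 # noisy gradient, objective 1
        g2 = 2.0 * (x + 1.0) + xi                 # noisy gradient, objective 2
        x = x - (1.0 / k) * common_descent_direction(g1, g2)
    return x

x = stochastic_multi_gradient(np.array([5.0]))
# the Pareto set of the two noiseless objectives is the segment [-1, 1]
```

Once the iterate enters the Pareto set, the two sampled gradients straddle zero and the common descent vector vanishes, so the sequence stabilizes there without any expectation ever being estimated.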
Culioli, Jean-Christophe. "Algorithmes de decomposition/coordination en optimisation stochastique." Paris, ENMP, 1987. http://www.theses.fr/1987ENMP0059.
Perrier, Alexis. "Développement et analyse de performance d'algorithmes de type gradient stochastique /." Paris : Ecole nationale supérieure des télécommunications, 1995. http://catalogue.bnf.fr/ark:/12148/cb35802687g.
Strugarek, Cyrille. "Approches variationnelles et autres contributions en optimisation stochastique." Phd thesis, Ecole des Ponts ParisTech, 2006. http://pastel.archives-ouvertes.fr/pastel-00001848.
Es-Sadek, Mohamed Zeriab. "Contribution à l'optimisation globale : approche déterministe et stochastique et application." Thesis, Rouen, INSA, 2009. http://www.theses.fr/2009ISAM0010/document.
This thesis concerns the global optimization of a non-convex function under nonlinear constraints. This problem cannot be solved with classical deterministic methods such as the projected gradient algorithm or the SQP method, since these handle only convex problems, and stochastic algorithms such as the genetic algorithm and simulated annealing are also inefficient on this type of problem. To solve such problems, we stochastically perturb the classical deterministic methods and combine this perturbation with the genetic algorithm and with simulated annealing. We thus combine the perturbed projected gradient with the genetic algorithm, the perturbed SQP method with the genetic algorithm, the perturbed projected gradient with simulated annealing, and the Piyavskii algorithm with the genetic algorithm. The coupled algorithms are applied to classical test examples to make the thesis concrete. As a real-life illustration, the coupled perturbed projected gradient and genetic algorithm is applied to a logistics problem, namely transport, demonstrating the practical efficiency of the approach.
Dedieu, Hervé. "Etude des effets de quantification dans les algorithmes adaptatifs de type gradient stochastique." Grenoble 2 : ANRT, 1988. http://catalogue.bnf.fr/ark:/12148/cb37613018c.
Nguyen, Thanh Huy. "Heavy-tailed nature of stochastic gradient descent in deep learning : theoretical and empirical analysis." Electronic Thesis or Diss., Institut polytechnique de Paris, 2021. http://www.theses.fr/2021IPPAT003.
In this thesis, we are concerned with the stochastic gradient descent (SGD) algorithm. Specifically, we perform a theoretical and empirical analysis of the behavior of the stochastic gradient noise (GN), defined as the difference between the true gradient and the stochastic gradient, in deep neural networks. Based on these results, we bring an alternative perspective to the existing approaches for investigating SGD. The GN in SGD is often assumed to be Gaussian for mathematical convenience; this assumption enables SGD to be studied as a stochastic differential equation (SDE) driven by a Brownian motion. We argue that the Gaussianity assumption might fail to hold in deep learning settings and hence render the Brownian motion-based analyses inappropriate. Inspired by non-Gaussian natural phenomena, we consider the GN in a more general context, which suggests that the GN is better approximated by a "heavy-tailed" alpha-stable random vector. Accordingly, we propose to analyze SGD as a discretization of an SDE driven by a Lévy motion. Firstly, to justify the alpha-stable assumption, we conduct experiments on common deep learning scenarios and show that in all settings the GN is highly non-Gaussian and exhibits heavy tails. Secondly, under the heavy-tailed GN assumption, we provide a non-asymptotic analysis of the convergence of the discrete-time dynamics of SGD to the global minimum, in terms of suboptimality. Finally, we investigate the metastable nature of the SDE driven by Lévy motion, which can then be exploited to clarify the behavior of SGD, especially its preference for wide minima. More precisely, we provide a formal theoretical analysis where we derive explicit conditions on the step size such that the metastability behavior of SGD, viewed as a discrete-time SDE, is similar to that of its continuous-time limit. We show that the behaviors of the two systems are indeed similar for small step sizes, and we describe how the error depends on the algorithm and problem parameters. We illustrate our metastability results with simulations on a synthetic model and on neural networks. Our results open up a different perspective and shed more light on the view that SGD prefers wide minima.
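The symmetric alpha-stable noise model invoked above can be simulated with the classical Chambers-Mallows-Stuck generator; the snippet below is a generic illustration of heavy-tailed versus Gaussian noise, not code from the thesis.

```python
import numpy as np

def symmetric_alpha_stable(alpha, size, rng):
    """Chambers-Mallows-Stuck sampler for symmetric alpha-stable noise
    with unit scale. For alpha=2 the formula reduces to 2*sin(U)*sqrt(W),
    i.e. a Gaussian N(0, 2); for alpha<2 the tails are polynomial."""
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)   # uniform angle
    w = rng.exponential(1.0, size)                 # exponential mixing variable
    return (np.sin(alpha * u) / np.cos(u) ** (1.0 / alpha)
            * (np.cos(u - alpha * u) / w) ** ((1.0 - alpha) / alpha))

rng = np.random.default_rng(0)
gaussian_like = symmetric_alpha_stable(2.0, 100_000, rng)   # N(0, 2) sample
heavy_tailed = symmetric_alpha_stable(1.5, 100_000, rng)    # heavy-tailed sample
# the alpha=1.5 sample exhibits far larger extreme values than the Gaussian one
```

Comparing the two samples' extreme values gives a quick visual sense of why a Brownian (Gaussian) model and a Lévy (alpha-stable) model of the gradient noise lead to very different dynamics.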
Letournel, Marc. "Approches duales dans la résolution de problèmes stochastiques." Phd thesis, Université Paris Sud - Paris XI, 2013. http://tel.archives-ouvertes.fr/tel-00938751.
Doan, Thanh-Nghi. "Large scale support vector machines algorithms for visual classification." Thesis, Rennes 1, 2013. http://www.theses.fr/2013REN1S083/document.
We have proposed a novel method for combining multiple different features for image classification. For large-scale learning of classifiers, we have developed parallel versions of both state-of-the-art linear and nonlinear SVMs. We have also proposed a novel algorithm extending stochastic gradient descent SVMs to large-scale learning. A class of large-scale incremental SVM classifiers has been developed in order to perform classification tasks on large datasets with a very large number of classes and training data that cannot fit into memory.
Younes, Laurent. "Problèmes d'estimation paramétrique pour des champs de Gibbs Markoviens : applications au traitement d'images." Paris 11, 1988. http://www.theses.fr/1988PA112269.
We study parameter estimation by maximum likelihood for Gibbs Markov random fields. We begin with a heuristic discussion of the statistical analysis of pictures, leading to a modelization by random fields, and with a summary of the various existing parameter estimation techniques. Then, we recall some results related to Gibbs fields and to their statistical analysis: we introduce the notion of potential and recall existence and uniqueness conditions for an associated Gibbs field. In the next chapter, we present a stochastic gradient algorithm to maximize the likelihood. It uses the Gibbs sampler, an iterative method for Markov field simulation, and we give properties related to the ergodicity of this sampler. We then recall some results of Métivier and Priouret about stochastic gradient algorithms, such as the one we use, that allow the convergence of this kind of procedure to be assessed. In Chapter 4, we make a precise study of the convergence of the algorithm. First, with a fixed lattice, we deal with the case of exponential models and show almost sure convergence of the algorithm. We then study more general models, and especially problems related to imperfect (or noisy) observations. In Chapter 5, we study the asymptotic behaviour of the maximum likelihood estimator and prove consistency and asymptotic normality. Finally, we give some practical remarks on the estimation algorithm, followed by some experiments.
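Stochastic maximum likelihood of this kind can be sketched on a toy 1D Ising chain: the gradient of the log-likelihood is the observed sufficient statistic minus its model expectation, and the latter is approximated by a Gibbs sampler run at the current parameter. All constants below (chain length, step exponent, iteration count) are illustrative choices, not those of the thesis.

```python
import numpy as np

def fit_ising_beta(s_obs, n_sites=100, iters=4000, seed=1):
    """Stochastic gradient ascent on the log-likelihood of a 1D Ising
    chain P(x) ~ exp(beta * S(x)), S(x) = sum_i x_i x_{i+1}. A Gibbs
    sweep at the current beta approximates the model expectation of S."""
    rng = np.random.default_rng(seed)
    x = rng.choice([-1, 1], size=n_sites)
    beta, beta_bar = 0.0, 0.0
    for n in range(1, iters + 1):
        for i in range(n_sites):             # one Gibbs sweep at current beta
            s = (x[i - 1] if i > 0 else 0) + (x[i + 1] if i < n_sites - 1 else 0)
            p = 1.0 / (1.0 + np.exp(-2.0 * beta * s))
            x[i] = 1 if rng.random() < p else -1
        s_model = np.sum(x[:-1] * x[1:])     # sampled sufficient statistic
        beta += (s_obs - s_model) / ((n_sites - 1) * n ** 0.7)
        beta_bar += (beta - beta_bar) / n    # running average of the iterates
    return beta_bar

beta_true = 0.5
# free-boundary 1D Ising chain: E[x_i x_{i+1}] = tanh(beta) exactly,
# so we feed the algorithm the exact mean statistic at beta_true
est = fit_ising_beta(s_obs=(100 - 1) * np.tanh(beta_true))
```

The fixed point of the recursion is the beta at which the model expectation of S matches the observed value, which is exactly the maximum-likelihood condition for this exponential family.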
Samé, Allou Badara. "Modèles de mélange et classification de données acoustiques en temps réel." Compiègne, 2004. http://www.theses.fr/2004COMP1540.
The motivation for this PhD thesis was a real-time flaw diagnosis application for pressurized containers using acoustic emissions. It has been carried out in collaboration with the Centre Technique des Industries Mécaniques (CETIM). The aim was to improve LOTERE, a real-time computer-aided-decision software, which had been found to be too slow when the number of acoustic emissions becomes large. Two mixture model-based clustering approaches, taking time constraints into account, have been proposed. The first one consists in clustering 'bins' resulting from the conversion of the original observations into a histogram. The second one is an online approach updating the classification recursively. An experimental study using both simulated and real data has shown that the proposed methods are very efficient.
Godichon-Baggioni, Antoine. "Algorithmes stochastiques pour la statistique robuste en grande dimension." Thesis, Dijon, 2016. http://www.theses.fr/2016DIJOS053/document.
This thesis focuses on stochastic algorithms in high dimension and on their application in robust statistics. In what follows, the expression "high dimension" may be used when the size of the studied sample is large or when the variables under consideration take values in high-dimensional spaces (not necessarily finite-dimensional). In order to analyze this kind of data, it can be interesting to consider algorithms which are fast, do not need to store all the data, and allow the estimates to be updated easily. In large samples of high-dimensional data, outlier detection is often complicated. Nevertheless, these outliers, even if few in number, can strongly disturb simple indicators like the mean and the covariance. We therefore focus on robust estimates, which are not too sensitive to outliers. In a first part, we are interested in the recursive estimation of the geometric median, a robust indicator of location which can thus be preferred to the mean when part of the studied data is contaminated. For this purpose, we introduce a Robbins-Monro algorithm as well as its averaged version, before building non-asymptotic confidence balls for these estimates and exhibiting their $L^{p}$ and almost sure rates of convergence. In a second part, we focus on the estimation of the Median Covariation Matrix (MCM), a robust dispersion indicator linked to the geometric median. Furthermore, if the studied variable has a symmetric law, this indicator has the same eigenvectors as the covariance matrix. This last property makes the MCM of real interest, especially for robust principal component analysis. We thus introduce a recursive algorithm which enables us to estimate simultaneously the geometric median, the MCM, and its $q$ main eigenvectors. We first establish the strong consistency of the estimators of the MCM, before exhibiting their rates of convergence in quadratic mean. In a third part, in the light of the work on the estimates of the median and of the Median Covariation Matrix, we exhibit the almost sure and $L^{p}$ rates of convergence of averaged stochastic gradient algorithms in Hilbert spaces, under less restrictive assumptions than in the literature. Then, two applications in robust statistics are given: estimation of geometric quantiles and robust logistic regression. In the last part, we aim to fit a sphere to a noisy point cloud spread around a complete or truncated sphere. More precisely, we consider a random variable with a truncated spherical distribution, and we want to estimate its center as well as its radius. To this aim, we introduce a projected stochastic gradient algorithm and its averaged version. We establish the strong consistency of these estimators as well as their rates of convergence in quadratic mean. Finally, the asymptotic normality of the averaged algorithm is given.
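The averaged Robbins-Monro recursion for the geometric median described above can be sketched as follows; the step-size constants and the test distribution are illustrative assumptions, not the thesis' settings.

```python
import numpy as np

def averaged_recursive_median(samples, c=1.0, gamma=0.66):
    """Robbins-Monro recursion for the geometric median,
    X_{n+1} = X_n + gamma_n (Z_{n+1} - X_n)/||Z_{n+1} - X_n||,
    with Polyak-Ruppert averaging of the iterates."""
    x = samples[0].astype(float)
    x_bar = x.copy()
    for n, z in enumerate(samples[1:], start=1):
        diff = z - x
        norm = np.linalg.norm(diff)
        if norm > 1e-12:                       # skip exact coincidences
            x = x + (c / n ** gamma) * diff / norm
        x_bar = x_bar + (x - x_bar) / (n + 1)  # running average
    return x_bar

rng = np.random.default_rng(42)
# spherical Gaussian: its geometric median equals its mean (1, 2)
data = rng.normal(loc=[1.0, 2.0], scale=1.0, size=(50_000, 2))
med = averaged_recursive_median(data)
```

Each update touches one observation and uses O(d) memory, which is exactly the "fast, no storage of all the data, easy to update" property motivating these recursive estimators.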
Mignacco, Francesca. "Statistical physics insights on the dynamics and generalisation of artificial neural networks." Thesis, université Paris-Saclay, 2022. http://www.theses.fr/2022UPASP074.
Machine learning technologies have become ubiquitous in our daily lives. However, the field remains largely empirical, and its scientific stakes lack a deep theoretical understanding. This thesis explores the mechanisms underlying learning in artificial neural networks through the prism of statistical physics. In the first part, we focus on the static properties of learning problems, which we introduce in Chapter 1.1. In Chapter 1.2, we consider the prototypical classification of a binary mixture of Gaussian clusters and derive rigorous closed-form expressions for the errors in the infinite-dimensional regime, which we apply to shed light on the role of different problem parameters. In Chapter 1.3, we show how to extend the teacher-student perceptron model to encompass multi-class classification, deriving asymptotic expressions for the optimal performance and for the performance of regularised empirical risk minimisation. In the second part, we turn our focus to the dynamics of learning, which we introduce in Chapter 2.1. In Chapter 2.2, we show how to track analytically the training dynamics of multi-pass stochastic gradient descent (SGD) via dynamical mean-field theory for generic non-convex loss functions and Gaussian mixture data. Chapter 2.3 presents a late-time analysis of the effective noise introduced by SGD in the underparametrised and overparametrised regimes. In Chapter 2.4, we take the sign retrieval problem as a benchmark highly non-convex optimisation problem and show that stochasticity is crucial to achieve perfect generalisation. The third part of the thesis contains the conclusions and some future perspectives.
Flammarion, Nicolas. "Stochastic approximation and least-squares regression, with applications to machine learning." Thesis, Paris Sciences et Lettres (ComUE), 2017. http://www.theses.fr/2017PSLEE056/document.
Many problems in machine learning are naturally cast as the minimization of a smooth function defined on a Euclidean space. For supervised learning, this includes least-squares regression and logistic regression. While small problems are efficiently solved by classical optimization algorithms, large-scale problems are typically solved with first-order techniques based on gradient descent. In this manuscript, we consider the particular case of the quadratic loss. In the first part, we are interested in its minimization when its gradients are only accessible through a stochastic oracle. In the second part, we consider two applications of the quadratic loss in machine learning: clustering and estimation with shape constraints. In the first main contribution, we provide a unified framework for optimizing non-strongly convex quadratic functions, which encompasses accelerated gradient descent and averaged gradient descent. This new framework suggests an alternative algorithm that exhibits the positive behavior of both averaging and acceleration. The second main contribution aims at obtaining the optimal prediction error rates for least-squares regression, both in terms of dependence on the noise of the problem and of forgetting the initial conditions. Our new algorithm rests upon averaged accelerated gradient descent. The third main contribution deals with the minimization of composite objective functions composed of the expectation of quadratic functions and a convex function. We extend earlier results on least-squares regression to any regularizer and any geometry represented by a Bregman divergence. As a fourth contribution, we consider the discriminative clustering framework. We propose its first theoretical analysis, a novel sparse extension, a natural extension to the multi-label scenario, and an efficient iterative algorithm with better running-time complexity than existing methods. The fifth main contribution deals with the seriation problem. We propose a statistical approach to this problem where the matrix is observed with noise and study the corresponding minimax rate of estimation. We also suggest a computationally efficient estimator whose performance is studied both theoretically and experimentally.
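The constant-step averaged (Polyak-Ruppert) SGD scheme that underlies many such least-squares results can be sketched as follows; the data, step size, and iteration count below are illustrative assumptions, not taken from the manuscript.

```python
import numpy as np

def averaged_sgd_least_squares(A, b, steps, step=0.05, seed=0):
    """Constant-step SGD on 0.5*(a_i.x - b_i)^2 with one random row per
    iteration (the stochastic oracle), plus online averaging of the
    iterates -- a sketch of the Polyak-Ruppert scheme."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    x_bar = np.zeros(d)
    for t in range(1, steps + 1):
        i = rng.integers(n)                  # one random row = one oracle call
        g = (A[i] @ x - b[i]) * A[i]         # stochastic gradient
        x -= step * g
        x_bar += (x - x_bar) / t             # running average of iterates
    return x_bar

rng = np.random.default_rng(1)
A = rng.normal(size=(500, 5))
x_true = np.arange(1.0, 6.0)
b = A @ x_true                               # noiseless targets for the demo
x_hat = averaged_sgd_least_squares(A, b, steps=20_000)
```

Averaging lets the last iterate's fluctuations cancel out, which is what allows a constant step size to reach the statistically optimal rates discussed above.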
Dedieu, Hervé. "Étude des effets de quantification dans les algorithmes adaptatifs de type gradiant stochastique." Toulouse, INPT, 1988. http://www.theses.fr/1988INPT081H.
Full textAkata, Zeynep. "Contributions à l'apprentissage grande échelle pour la classification d'images." Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENM003/document.
Building algorithms that classify images on a large scale is an essential task due to the difficulty of searching the massive amounts of unlabeled visual data available on the Internet. We aim at classifying images based on their content to simplify the manageability of such large-scale collections. Large-scale image classification is a difficult problem, as datasets are large with respect to both the number of images and the number of classes. Some of these classes are fine-grained and may not contain any labeled representatives. In this thesis, we use state-of-the-art image representations and focus on efficient learning methods. Our contributions are (1) a benchmark of learning algorithms for large-scale image classification, and (2) a novel learning algorithm based on label embedding for learning with scarce training data. Firstly, we propose a benchmark of learning algorithms for large-scale image classification in the fully supervised setting. It compares several objective functions for learning linear classifiers, such as one-vs-rest, multiclass, ranking, and weighted average ranking, using stochastic gradient descent optimization. The output of this benchmark is a set of recommendations for large-scale learning. We experimentally show that online learning is well suited for large-scale image classification. With simple data rebalancing, one-vs-rest performs better than all other methods. Moreover, in online learning, using a small enough step size with respect to the learning rate is sufficient for state-of-the-art performance. Finally, regularization through early stopping results in fast training and good generalization performance. Secondly, when dealing with thousands of classes, it is difficult to collect sufficient labeled training data for each class; for some classes we might not even have a single training example. We propose a novel algorithm for this zero-shot learning scenario. Our algorithm uses side information, such as attributes, to embed classes in a Euclidean space. We also introduce a function to measure the compatibility between an image and a label; the parameters of this function are learned using a ranking objective. Our algorithm outperforms the state of the art for zero-shot learning. It is flexible and can accommodate other sources of side information, such as hierarchies, and it allows for a smooth transition from zero-shot to few-shot learning.
Cénac, Peggy. "Récursivité au carrefour de la modélisation de séquences, des arbres aléatoires, des algorithmes stochastiques et des martingales." Habilitation à diriger des recherches, Université de Bourgogne, 2013. http://tel.archives-ouvertes.fr/tel-00954528.
Full textCheng, Jianqiang. "Stochastic Combinatorial Optimization." Thesis, Paris 11, 2013. http://www.theses.fr/2013PA112261.
In this thesis, we study three types of stochastic problems: chance constrained problems, distributionally robust problems, and simple recourse problems. Stochastic programming problems raise two main difficulties: the feasible sets of stochastic problems are generally not convex, and one needs to compute conditional expectations or probabilities, both of which involve multi-dimensional integration. Because of these two difficulties, we solve all three problems with approximation approaches. We first study two types of chance constrained problems: the linear program with joint chance constraints (LPPC) and the maximum probability problem (MPP). For both problems, we assume that the random matrix is normally distributed and that its row vectors are independent. We first deal with LPPC, which is generally not convex, and approximate it with two second-order cone programming (SOCP) problems. Furthermore, under mild conditions, the optimal values of the two SOCP problems are respectively a lower and an upper bound of the original problem. For the second problem, we study a variant of the stochastic resource constrained shortest path problem (SRCSP for short), which maximizes the probability of satisfying the resource constraints. To solve the problem, we propose a branch-and-bound framework to find the optimal solution; as the corresponding linear relaxation is generally not convex, we give a convex approximation. Finally, numerical tests on random instances are conducted for both problems. With respect to LPPC, the numerical results show that the approach we propose outperforms the Bonferroni and Jagannathan approximations, while for the MPP, the numerical results on generated instances substantiate that the convex approximation outperforms the individual approximation method. Then we study distributionally robust stochastic quadratic knapsack problems, where only part of the information about the random variables is known, such as their first and second moments. We prove that the single knapsack problem (SKP) becomes a semidefinite program (SDP) after applying the SDP relaxation scheme to the binary constraints. Although this is not the case for the multidimensional knapsack problem (MKP), two good approximations of the relaxed version of the problem are provided, yielding upper and lower bounds that appear numerically close to each other for a range of problem instances. Our numerical experiments also indicate that our proposed lower-bounding approximation outperforms the approximations based on Bonferroni's inequality and on the work by Zymler et al. Besides, an extensive set of experiments was conducted to illustrate how the conservativeness of the robust solutions pays off in terms of ensuring that the chance constraint is satisfied (or nearly satisfied) under a wide range of distribution fluctuations. Moreover, our approach can be applied to a large number of stochastic optimization problems with binary variables. Finally, a stochastic version of the shortest path problem is studied. We prove that in some cases the stochastic shortest path problem can be greatly simplified by reformulating it as a classic shortest path problem, which can be solved in polynomial time. To solve the general problem, we propose a branch-and-bound framework to search the set of feasible paths. Lower bounds are obtained by solving the corresponding linear relaxation, which in turn is done using a stochastic projected gradient algorithm involving an active set method. Numerical examples illustrate the effectiveness of the obtained algorithm; concerning the resolution of the continuous relaxation, our stochastic projected gradient algorithm clearly outperforms the Matlab optimization toolbox on large graphs.
Urban, Bernard. "Applications du maximum d'entropie et du gradient stochastique à la météorologie." Toulouse 3, 1996. http://www.theses.fr/1996TOU30083.
Full textHernandez, Freddy. "Fluctuations à l'équilibre d'un modèle stochastique non gradient qui conserve l'énergie." Paris 9, 2010. https://bu.dauphine.psl.eu/fileviewer/index.php?doc=2010PA090029.
In this thesis we study the equilibrium energy fluctuation field of a one-dimensional reversible non-gradient model. We prove that the limit fluctuation process is governed by a generalized Ornstein-Uhlenbeck process. By adapting the non-gradient method introduced by S. R. S. Varadhan, we identify the correct diffusion term, which allows us to derive the Boltzmann-Gibbs principle. This is the key point in showing that the energy fluctuation field converges, in the sense of finite-dimensional distributions, to a generalized Ornstein-Uhlenbeck process. Moreover, using the Boltzmann-Gibbs principle again, we also prove tightness of the energy fluctuation field in a specified Sobolev space, which, together with the finite-dimensional convergence, implies convergence in distribution to the generalized Ornstein-Uhlenbeck process mentioned above. The fact that the conserved quantity is not a linear functional of the coordinates of the system introduces new difficulties of a geometric nature in applying Varadhan's non-gradient method.
Massé, Pierre-Yves. "Autour De L'Usage des gradients en apprentissage statistique." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS568/document.
We prove a local convergence theorem, in a nonlinear setting, for RTRL, the classical optimization algorithm for dynamical systems. RTRL works online but maintains a huge amount of information, which makes it unfit to train even moderately large learning models. The NBT algorithm gets around this by replacing that information with an unbiased, low-dimensional, random approximation. We also prove the convergence, with probability arbitrarily close to one, of this algorithm to the local optimum reached by the RTRL algorithm. We further formalize the LLR algorithm and conduct experiments on it using synthetic data. This algorithm adaptively updates the step size of a gradient descent by conducting a gradient descent on the step size itself. It therefore partially solves the issue of the numerical choice of a step size in gradient descent; this choice strongly influences the descent and must otherwise be hand-picked by the user after a potentially long search.
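The idea of descending on the step size itself can be caricatured with a multiplicative update driven by the sign of the correlation between successive gradients: positive correlation suggests the last step was too small, negative that it was too large. This is a simplified hypothetical variant for illustration, not the exact LLR update of the thesis.

```python
import numpy as np

def gd_adaptive_step(grad, x0, eta0=1e-3, beta=0.02, steps=500):
    """Gradient descent whose step size eta is itself adapted by a
    (signed) gradient step on log(eta), using the correlation of
    successive gradients as the adaptation signal."""
    x = np.asarray(x0, float)
    eta = eta0
    g_prev = np.zeros_like(x)
    for _ in range(steps):
        g = grad(x)
        # hypothetical multiplicative update of the step size:
        eta *= np.exp(beta * np.sign(g @ g_prev))
        x = x - eta * g
        g_prev = g
    return x, eta

# quadratic toy problem f(x) = x^2, deliberately tiny initial step size
x, eta = gd_adaptive_step(lambda x: 2.0 * x, np.array([10.0]))
```

Even starting from a step size that is far too small, the adaptation grows eta geometrically until the descent becomes effective, then stabilizes it near the edge of stability, which is the behavior LLR is designed to automate.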
Zerbinati, Adrien. "Algorithme à gradients multiples pour l'optimisation multiobjectif en simulation de haute fidélité : application à l'aérodynamique compressible." Phd thesis, Université Nice Sophia Antipolis, 2013. http://tel.archives-ouvertes.fr/tel-00868031.
Full textMakhlouf, Azmi. "Régularité fractionnaire et analyse stochastique de discrétisations ; Algorithme adaptatif de simulation en risque de crédit." Phd thesis, Grenoble INPG, 2009. http://tel.archives-ouvertes.fr/tel-00460269.
Full textZeriab, Mohamed Zeriab. "Contribution à l'optimisation globale : approche déterministe et stochastique et application." Phd thesis, INSA de Rouen, 2009. http://tel.archives-ouvertes.fr/tel-00560887.
Full textTauvel, Claire. "Optimisation stochastique à grande échelle." Phd thesis, Université Joseph Fourier (Grenoble), 2008. http://tel.archives-ouvertes.fr/tel-00364777.
Full textSaadane, Sofiane. "Algorithmes stochastiques pour l'apprentissage, l'optimisation et l'approximation du régime stationnaire." Thesis, Toulouse 3, 2016. http://www.theses.fr/2016TOU30203/document.
In this thesis, we study several stochastic algorithms with different purposes, and this is why we start this manuscript by recalling historical results that define the framework of our work. We then study a bandit algorithm due to the work of Narendra and Shapiro, whose objective is to determine, among several sources, which one is the most profitable without spending too much time on the wrong ones. Our goal is to understand the weakness of this algorithm in order to propose a procedure that is optimal for the quantity measuring the performance of a bandit algorithm, the regret. In our results, we propose an algorithm called NS over-penalized which attains a minimax regret bound. A second piece of work is to understand the convergence in law of this process. The particularity of the algorithm is that it converges in law toward a non-diffusive process, which makes the study more intricate than in the standard case. We use coupling techniques to study this process and propose rates of convergence. The second work of this thesis falls within the scope of optimization of a function by means of a stochastic algorithm. We study a stochastic version of the so-called heavy ball method with friction. The particularity of this algorithm is that its dynamics is based on the whole past of the trajectory: the procedure relies on a memory term which dictates the behavior of the procedure through the form it takes. In our framework, two types of memory are investigated: polynomial and exponential. We start with general convergence results in the non-convex case; in the case of strongly convex functions, we provide upper bounds for the rate of convergence; finally, a convergence in law result is given in the case of exponential memory. The third part is about the McKean-Vlasov equations, which were first introduced by Anatoly Vlasov and first studied by Henry McKean in order to model the distribution function of plasma. Our objective is to propose a stochastic algorithm to approximate the invariant distribution of the McKean-Vlasov equation. Methods are known in the case of diffusion processes (and some more general processes), but the particularity of the McKean-Vlasov process is that it is strongly non-linear. Thus, we have to develop an alternative approach, and we introduce the notion of asymptotic pseudotrajectory in order to get an efficient procedure.
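A stochastic heavy-ball iteration with friction, where the velocity acts as an exponentially weighted memory of past noisy gradients, can be sketched as follows; the friction coefficient, step-size schedule, and noise level are made-up illustrative values, not those analyzed in the thesis.

```python
import numpy as np

def stochastic_heavy_ball(grad, x0, steps=20_000, friction=2.0, seed=0):
    """Sketch of a stochastic heavy ball with friction: the velocity v
    is an exponentially weighted (memory) average of past noisy
    gradients, damped by the friction term."""
    rng = np.random.default_rng(seed)
    x, v = float(x0), 0.0
    for k in range(1, steps + 1):
        step = 1.0 / k ** 0.75                  # decreasing step sizes
        g = grad(x) + rng.normal(scale=0.5)     # noisy gradient oracle
        v += step * (-friction * v - g)         # friction damps the momentum
        x += step * v
    return x

# strongly convex toy objective f(x) = (x - 3)^2, minimizer at x = 3
x_star = stochastic_heavy_ball(lambda x: 2.0 * (x - 3.0), x0=-5.0)
```

The exponential-memory term lets the iterate carry inertia through noisy gradients while the friction prevents persistent oscillation, which is the regime in which the convergence-in-law result mentioned above applies.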
Ben, Elhaj Salah Sami. "Modélisation non-locale et stochastique de matériaux à fort gradient de propriétés par développement asymptotique." Thesis, Chasseneuil-du-Poitou, Ecole nationale supérieure de mécanique et d'aérotechnique, 2019. http://www.theses.fr/2019ESMA0018.
Full textThe aim is to propose a macroscopic, deterministic and non-local model, constructed by scale transition for heterogeneous materials with high property gradients and containing a random distribution of inclusions. More precisely, the inclusions are distributed in an elastic matrix according to a stochastic ergodic process. Several non-local models exist in the literature, but they do not allow (or very little) to obtain non-local quantities and/or fields at the macroscopic scale from a scale-transition. Besides, it is often difficult to link the non-local parameters to the microstructure. To this aim, we developed a two-step approach.In the first stage, we combined the method of asymptotic developments with an energetic approach to reveal a second displacement gradient in the strain energy. The advanced model involves three homogenized elasticity tensors functions of the stochastic parameter and of the phase properties. As opposed to the literature, the model involves two characteristic lengths strongly linked to the microstructure. These lengths define two morphological representative elementary volumes on which full field simulations are performed in order to determine the macroscopic strain tensors at orders 0 and 1 involved in the formulation of the model. In order to test this first version of the model, numerical simulations were performed. The estimate of the classical part of the energy, coming from the local part of the fields, has been successfully compared to classical bounds for a composite bar consisting of a random distribution of two homogeneous and isotropic elastic materials. Then, numerical solving of the whole model including the non-local terms has been performed in the three-dimensional case. Two types of microstructures with increasing morphological complexity were used. 
The first are virtual microstructures generated from a simple pattern, composed of a large inclusion surrounded by six identical small ones, randomly distributed throughout the structure. The second are real microstructures of Ethylène-Propylène-Diène Monomère (EPDM) obtained by tomography, containing clusters of inclusions with complex structures. In order to obtain a macroscopic model usable for structural analysis without any intermediate full-field calculations, a second scale transition was performed using stochastic variational homogenization tools in the ergodic case. More precisely, the Γ-convergence method was used to obtain a convergence of energies rather than of mechanical fields, so as to retain a strong microstructural content. In the end, the model is macroscopic, non-local, deterministic and strongly connected to the microstructure. Non-local effects are now accounted for by the presence of the second displacement gradient, but also by the virtual (memory) displacement field of the inclusions. The link with the microstructure remains manifest through the stochastic parameter and the phase properties, but also through the asymptotic fractions of the inclusion phase in the material and in each of the morphological volumes defined by the model's characteristic lengths. In order to prepare the use of the model for structural computations, a non-local finite element enriched with Hermite-type interpolations was implemented in FoXtroT, the finite element solver of the Pprime Institute. This element takes into account the virtual (memory) displacement field related to the inclusions as well as the gradients of the macroscopic and virtual displacement fields. The first numerical results on this aspect, to our knowledge never discussed in the literature, are promising.
Chatenet-Laurent, Nathalie. "Algorithme d'apprentissage de la politique optimale d'un processus stochastique : application à un réseau d'alimentation en eau potable." Bordeaux 1, 1997. http://www.theses.fr/1997BOR10540.
Full text
Chotin-Avot, Roselyne. "Architectures matérielles pour l'arithmétique stochastique discrète." Paris 6, 2003. http://hal.upmc.fr/tel-01267458.
Full text
Panloup, Fabien. "Approximation récursive du régime stationnaire d'une équation différentielle stochastique avec sauts." Paris 6, 2006. http://www.theses.fr/2006PA066397.
Full text
Panloup, Fabien. "Approximation récursive du régime stationnaire d'une Equation Differentielle Stochastique avec sauts." Phd thesis, Université Pierre et Marie Curie - Paris VI, 2006. http://tel.archives-ouvertes.fr/tel-00120508.
Full text
…Euler [schemes] with decreasing step, whether "exact" or "approximate", make it possible to simulate efficiently not only the invariant probability but also the global law of such a process in the stationary regime.
This work has various theoretical and practical applications, some of which are developed here (a.s. CLT for stable laws, a limit theorem for extreme values, option pricing for stationary stochastic volatility models, etc.).
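The abstract above describes decreasing-step Euler schemes for approximating the stationary regime of an SDE. As an informal illustration only, the sketch below applies the weighted-occupation-measure idea to a jump-free Ornstein-Uhlenbeck diffusion; the step sequence, weights and all parameter values are our own choices for the example, not the thesis's setting:

```python
import math
import random

def stationary_moments_ou(theta=1.0, sigma=1.0, n_steps=200_000, seed=0):
    """Decreasing-step Euler scheme for dX = -theta*X dt + sigma dW.
    Weighted empirical averages along a single trajectory approximate
    expectations under the invariant law N(0, sigma^2 / (2*theta))."""
    rng = random.Random(seed)
    x = 0.0
    w_sum = m1 = m2 = 0.0
    for n in range(1, n_steps + 1):
        gamma = 1.0 / n**0.5           # decreasing step: sum(gamma) diverges
        x += -theta * x * gamma + sigma * math.sqrt(gamma) * rng.gauss(0.0, 1.0)
        w_sum += gamma                  # the steps also serve as weights
        m1 += gamma * x
        m2 += gamma * x * x
    return m1 / w_sum, m2 / w_sum       # weighted empirical mean and variance

mean, var = stationary_moments_ou()
# the invariant law here is N(0, 0.5), so mean ≈ 0 and var ≈ 0.5
```

Note that no expectation is ever estimated at a fixed point: the trajectory itself, suitably weighted, approximates the invariant distribution.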
Diabaté, Modibo. "Modélisation stochastique et estimation de la croissance tumorale." Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAM040.
Full text
This thesis is about mathematical modeling of cancer dynamics; it is divided into two research projects. In the first project, we estimate the parameters of the deterministic limit of a stochastic process modeling the dynamics of melanoma (skin cancer) treated by immunotherapy. The estimation is carried out with a nonlinear mixed-effects statistical model and the SAEM algorithm, using real tumor size data. With this mathematical model, which fits the data well, we evaluate the relapse probability of melanoma (using the Importance Splitting algorithm), and we optimize the treatment protocol (doses and injection times). In the second project, we propose a likelihood approximation method based on an approximation of the Belief Propagation algorithm by the Expectation-Propagation algorithm, for a diffusion approximation of the melanoma stochastic model, noisily observed in a single individual. Since this diffusion approximation (defined by a stochastic differential equation) has no analytical solution, we approximate its solution using an Euler method (after testing the Euler method on the Ornstein-Uhlenbeck diffusion process). Moreover, a moment approximation method is used to manage the multidimensionality and the non-linearity of the melanoma mathematical model. With the likelihood approximation method, we tackle the problem of parameter estimation in hidden Markov models.
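The abstract mentions validating the Euler method on the Ornstein-Uhlenbeck diffusion, for which the transition law is known in closed form. A minimal Euler-Maruyama sanity check of that kind (parameter values are illustrative, not taken from the thesis) could look like:

```python
import math
import random

def euler_ou_mean(x0=2.0, theta=1.0, sigma=0.3, t_end=1.0,
                  n_steps=100, n_paths=2000, seed=1):
    """Euler-Maruyama for dX = -theta*X dt + sigma dW over many paths;
    returns the Monte Carlo estimate of E[X_{t_end}]."""
    rng = random.Random(seed)
    dt = t_end / n_steps
    total = 0.0
    for _ in range(n_paths):
        x = x0
        for _ in range(n_steps):
            x += -theta * x * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        total += x
    return total / n_paths

est = euler_ou_mean()
exact = 2.0 * math.exp(-1.0)   # exact OU mean: E[X_t] = x0 * exp(-theta*t)
```

The estimated mean should match the closed-form value up to discretization bias and Monte Carlo error.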
Luu, Keurfon. "Optimisation numérique stochastique évolutionniste : application aux problèmes inverses de tomographie sismique." Thesis, Paris Sciences et Lettres (ComUE), 2018. http://www.theses.fr/2018PSLEM077/document.
Full text
Seismic traveltime tomography is an ill-posed optimization problem due to the non-linear relationship between traveltime and the velocity model. Besides, the solution is not unique, as many models are able to explain the observed data. The non-linearity and non-uniqueness issues are typically addressed by methods relying on Markov chain Monte Carlo that thoroughly sample the model parameter space. However, these approaches cannot fully exploit the computer resources provided by modern supercomputers. In this thesis, I propose to solve seismic traveltime tomography problems using evolutionary algorithms, which are population-based stochastic optimization methods inspired by the natural evolution of species. They operate on concurrent individuals within a population that represent independent models, and evolve through stochastic processes characterizing the different mechanisms involved in natural evolution. Therefore, the models within a population can be intrinsically evaluated in parallel, which makes evolutionary algorithms particularly adapted to the parallel architecture of supercomputers. More specifically, the work presented in this manuscript focuses on the three most popular evolutionary algorithms, namely Differential Evolution, Particle Swarm Optimization, and Covariance Matrix Adaptation - Evolution Strategy. The feasibility of evolutionary algorithms for solving seismic tomography problems is assessed on two different data sets: a real data set acquired in the context of hydraulic fracturing, and a synthetic refraction data set generated using the Marmousi velocity model, which presents a complex geological structure.
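Of the three algorithms named in the abstract, Particle Swarm Optimization is the simplest to sketch: each particle is evaluated independently (hence the parallelism the abstract highlights) and is attracted toward its personal best and the swarm's global best. A toy sketch on a sphere function (coefficient values are common textbook defaults, not the thesis's tuning):

```python
import random

def pso_minimize(f, dim=2, n_particles=30, iters=200, bounds=(-5.0, 5.0), seed=0):
    """Minimal Particle Swarm Optimization with inertia and two attractions."""
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]              # personal best positions
    pval = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]      # global best
    w, c1, c2 = 0.7, 1.5, 1.5               # inertia, cognitive, social weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            val = f(xs[i])                  # fitness: independent per particle
            if val < pval[i]:
                pbest[i], pval[i] = xs[i][:], val
                if val < gval:
                    gbest, gval = xs[i][:], val
    return gbest, gval

best, value = pso_minimize(lambda x: sum(t * t for t in x))
```

In a tomography setting, `f` would be the traveltime misfit of a candidate velocity model, and the inner fitness evaluations would be distributed across compute nodes.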
Vu, Thi Thanh Xuan. "Optimisation déterministe et stochastique pour des problèmes de traitement d'images en grande dimension." Thesis, Aix-Marseille, 2017. http://www.theses.fr/2017AIXM0540.
Full text
In this PhD thesis, we consider the problem of the Canonical Polyadic Decomposition (CPD) of potentially large N-th order tensors under different constraints (non-negativity, sparsity due to a possible overestimation of the tensor rank, etc.). To tackle such a problem, we propose three new iterative methods: a standard gradient-based deterministic approach, a stochastic (memetic) approach, and finally a proximal approach (Block-Coordinate Variable Metric Forward-Backward). The first approach extends J-P. Royer's work to the case of non-negative N-th order tensors. In the stochastic approach, genetic (memetic) methods are considered for the first time to solve the CPD problem. Their general principle is based on the evolution of a family of candidates. In the third approach, a proximal algorithm, namely the Block-Coordinate Variable Metric Forward-Backward, is presented. The algorithm relies on two main steps: a gradient step and a proximal step. The blocks of coordinates naturally correspond to the latent matrices. We propose a majorant function as well as a preconditioner for each block. All methods are compared with other popular algorithms of the literature on synthetic (fluorescence-spectroscopy-like or random) data and on real experimental data from a water monitoring campaign aiming at detecting the appearance of pollutants.
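The forward-backward scheme mentioned above alternates a gradient step on the smooth term with a proximal step on the constraint; for a non-negativity constraint, the proximal step is simply a projection onto the positive orthant. A single-block toy sketch on a least-squares problem (the matrix, right-hand side and step size are invented for illustration, without the variable metric or majorant machinery of the thesis):

```python
def forward_backward_nonneg(A, b, step, iters=500):
    """Forward-backward splitting for min 0.5*||Ax - b||^2 s.t. x >= 0:
    gradient step on the quadratic, then projection onto x >= 0 (the prox
    of the indicator of the non-negative orthant)."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]  # Ax - b
        grad = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]      # A^T (Ax - b)
        x = [max(0.0, x[j] - step * grad[j]) for j in range(n)]               # prox = projection
    return x

A = [[2.0, 0.0], [0.0, 1.0]]
b = [2.0, -1.0]
x = forward_backward_nonneg(A, b, step=0.2)
# unconstrained minimizer is [1, -1]; the constrained one is [1, 0]
```

The step size must stay below 2/L, where L is the largest eigenvalue of A^T A (here L = 4, so 0.2 is safe).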
Gerzaguet, Robin. "Méthodes de traitement numérique du signal pour l'annulation d'auto-interférences dans un terminal mobile." Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GRENT014/document.
Full text
Radio frequency transceivers are now massively multi-standard, which means that several communication standards can cohabit in the same environment. As a consequence, analog components have to face critical design constraints to match the different standards' requirements, and self-interferences directly introduced by the architecture itself are more and more present and detrimental. This work exploits the dirty RF paradigm: we accept that the signal is polluted by self-interferences, and we develop digital signal processing algorithms to mitigate those pollutions and improve signal quality. We study different self-interferences and propose baseband models and digital adaptive algorithms, for which we derive closed-form formulae of both transient and asymptotic performance. We also propose an original adaptive step-size overlay to improve the transient performance of our method. We finally validate our approach on a system on chip dedicated to cellular communications and on a software-defined radio.
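The digital cancellation idea can be illustrated by the simplest adaptive scheme of this family: a one-tap LMS canceller that learns the gain linking a known reference (for example a clock spur whose shape the receiver knows) to its contribution in the received signal, then subtracts it. The signal model, gain and step size below are invented for the example and are not the thesis's algorithms:

```python
import math
import random

def lms_cancel(received, reference, mu=0.05):
    """One-tap LMS: stochastic-gradient estimate of the pollution gain;
    returns the cleaned signal and the final gain estimate."""
    w = 0.0
    cleaned = []
    for d, x in zip(received, reference):
        e = d - w * x        # error = received minus estimated pollution
        w += mu * e * x      # LMS update of the single tap
        cleaned.append(e)
    return cleaned, w

random.seed(0)
n = 2000
ref = [math.cos(0.3 * k) for k in range(n)]            # known spur waveform
useful = [random.gauss(0.0, 0.1) for _ in range(n)]    # signal of interest
rx = [u + 0.8 * r for u, r in zip(useful, ref)]        # polluted received signal
cleaned, gain = lms_cancel(rx, ref)                    # gain converges near 0.8
```

After convergence, the residual is essentially the useful signal plus a small excess error governed by the step size, which is what an adaptive step-size overlay aims to reduce during the transient.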
Billaud, Yann. "Modélisation hybride stochastique-déterministe des incendies de forêts." Thesis, Aix-Marseille 1, 2011. http://www.theses.fr/2011AIX10100/document.
Full text
Most of the area burned by forest fires is attributable to the few fires that escape initial attack and become large. As a consequence, large-scale fires produce a large amount of greenhouse gases and particles, which contribute to global warming. Heterogeneous conditions of weather, fuel and topography are generally encountered during the propagation of large fires. This shapes irregular contours and fractal post-fire patterns, as revealed by satellite maps. Among existing wildfire spread models, stochastic models seem to be good candidates for studying the erratic behavior of large fires due to these heterogeneous conditions. The model we developed is a variant of the so-called small-world network model. Flame radiation and piloted fuel ignition are taken into account in a deterministic way at the macroscopic scale. The radiative interaction domain of a burning cell is determined from Monte Carlo simulation using the solid flame model. Several cases are studied, ranging from relatively simple to more complex geometries such as irregular flame fronts or an ethanol pool fire. Then, a numerical model is developed to investigate the piloted ignition of litters composed of maritime pine needles. A genetic algorithm is used to locate a set of model parameters that provides optimal agreement between the model predictions and the experimental data in terms of ignition time and mass loss. The model results showed the importance of char surface oxidation for heat fluxes close to the critical flux for ignition. Finally, the small-world network model was used to simulate fire patterns in heterogeneous landscapes. Model validation was achieved to an acceptable degree, in terms of contours, burned area and fractal properties, through comparison with data from a small controlled bushfire experiment and a historical Mediterranean fire.
It has therefore proven to be a powerful tool for sizing protections such as fuel-break areas at the wildland-urban interface, and for understanding atypical behavior in particular configurations (talweg, slope break, etc.). It has also been used to optimize an in-situ sensor network whose purpose is to detect small fires early and locate them precisely, preventing them from spreading and burning out of control. Our objective was to determine the minimum number and placement of sensors deployed in the forest.
Tendeau, Frédéric. "Analyse syntaxique et sémantique avec évaluation d'attributs dans un demi-anneau : application à la linguistique calculatoire." Orléans, 1997. http://www.theses.fr/1997ORLE2064.
Full text
Huang, Junbo. "Théorème de Berry-Esseen pour martingales normalisées et algorithmes stochastiques : application en contrôle stochastique." Paris 6, 2009. http://www.theses.fr/2009PA066174.
Full text
Slaoui, Yousri. "Application des méthodes d'approximations stochastiques à l'estimation de la densité et de la régression." Phd thesis, Université de Versailles-Saint Quentin en Yvelines, 2006. http://tel.archives-ouvertes.fr/tel-00131964.
Full text
Papa, Guillaume. "Méthode d'échantillonnage appliqué à la minimisation du risque empirique." Electronic Thesis or Diss., Paris, ENST, 2018. http://www.theses.fr/2018ENST0005.
Full text
In this manuscript, we present and study sampling strategies applied to problems related to statistical learning. The goal is to deal with the problems that usually arise in a large-data context, when the number of observations and their dimensionality constrain the learning process. We therefore propose to address this problem using two sampling strategies: accelerating the learning process by sampling the most helpful observations, and simplifying the problem by discarding some observations to reduce its complexity and size. We first consider the context of binary classification, when the observations used to build a classifier come from a sampling/survey scheme and present a complex dependency structure, for which we establish generalization bounds. Then we study the implementation of stochastic gradient descent when observations are drawn non-uniformly. We conclude this thesis by studying the problem of graph reconstruction, for which we establish new theoretical results.
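The non-uniform-sampling idea in stochastic gradient descent is usually paired with importance reweighting, so that drawing informative examples more often does not bias the gradient estimate. A toy sketch on one-dimensional least squares (the data, sampling probabilities and learning rate are chosen for illustration, not taken from the thesis):

```python
import random

def sgd_nonuniform(xs, ys, lr=0.05, n_iter=200, seed=0):
    """SGD for min (1/n) * sum_i (w*x_i - y_i)^2 where example i is drawn
    with probability proportional to x_i^2; the sampled gradient is
    reweighted by 1/(n * p_i) so the update remains unbiased."""
    rng = random.Random(seed)
    n = len(xs)
    z = sum(x * x for x in xs)
    probs = [x * x / z for x in xs]            # non-uniform sampling scheme
    w = 0.0
    for _ in range(n_iter):
        i = rng.choices(range(n), weights=probs)[0]
        grad = 2.0 * xs[i] * (w * xs[i] - ys[i]) / (n * probs[i])  # unbiased
        w -= lr * grad
    return w

xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0 * x for x in xs]     # noiseless line y = 3x
w = sgd_nonuniform(xs, ys)     # converges to slope 3
```

With probabilities proportional to the squared inputs, the reweighted per-example curvature becomes uniform here, which is exactly the variance-reduction effect non-uniform sampling is after.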
Feoktistov, Vitaliy. "Contribution à l'évolution différentielle." Paris, ENMP, 2004. http://www.theses.fr/2004ENMP1248.
Full text
Over the past few years, the evolutionary computation domain has developed more and more rapidly. Differential Evolution (DE) is one of its representatives. Originally proposed for continuous unconstrained optimization, it has been extended to both mixed optimization and the handling of nonlinear constraints. Since then, DE has become a leading method for a huge number of real problems. The main objective of this thesis was to propose an exhaustive analysis of this method: scientifically, to situate it with respect to the more classical competing methods and, technically, to improve its convergence rate as well as to increase its capacity to find the global optimum. For this purpose, we distinguish three hierarchical levels of improvement in the algorithm. At the first level, the individual behaviour is investigated. A universal formula for the individual's variation is elaborated: it permits the creation of a great number of strategies for exploring and exploiting the search space. A generalization of the algorithm is realized by introducing transversal DE. At the second level, the entire population is considered. In order to accelerate convergence, the energetic selection principle is developed and tested. This principle permits retaining the current best individuals. An interaction of the main algorithm with an external regression method is realized in order to complete the selection with the most interesting individuals and ultimately increase the convergence rate. Concretely, the third level implements the interaction between DE and Support Vector Machines.
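For readers unfamiliar with DE, the classical DE/rand/1/bin strategy (one of the many variation strategies the abstract's "universal formula" generalizes) can be sketched compactly; parameter values below are common defaults, not those studied in the thesis:

```python
import random

def differential_evolution(f, dim=2, pop_size=20, cr=0.9, F=0.8,
                           gens=200, bounds=(-5.0, 5.0), seed=0):
    """Minimal DE/rand/1/bin: each individual is perturbed by the weighted
    difference of two others, recombined by binomial crossover, and kept
    only if the trial is at least as good (greedy selection)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    vals = [f(ind) for ind in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)      # guarantees one mutated coordinate
            trial = [pop[a][d] + F * (pop[b][d] - pop[c][d])
                     if (rng.random() < cr or d == jrand) else pop[i][d]
                     for d in range(dim)]
            v = f(trial)
            if v <= vals[i]:                # greedy one-to-one selection
                pop[i], vals[i] = trial, v
    i_best = min(range(pop_size), key=lambda i: vals[i])
    return pop[i_best], vals[i_best]

best, value = differential_evolution(lambda x: sum(t * t for t in x))
```

The thesis's "energetic selection" and regression-assisted selection would replace the greedy one-to-one step above.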
DRIANCOURT, XAVIER. "Optimisation par descente de gradient stochastique de systemes modulaires combinant reseaux de neurones et programmation dynamique. Application a la reconnaissance de la parole." Paris 11, 1994. http://www.theses.fr/1994PA112203.
Full textLê, Cao Kim-Anh. "Outils statistiques pour la sélection de variables et l'intégration de données "omiques"." Toulouse, INSA, 2008. http://eprint.insa-toulouse.fr/archive/00000225/.
Full text
Recent advances in biotechnology allow the monitoring of large quantities of biological data of various types, such as genomics, proteomics, metabolomics, phenotypes..., that are often characterized by a small number of samples or observations. The aim of this thesis was to develop, or adapt, appropriate statistical methodologies to analyse high-dimensional data and to provide biologists with efficient tools for selecting the most biologically relevant variables. In the first part, we focus on microarray data in a classification framework and on the selection of discriminative genes. In the second part, in the context of data integration, we focus on the selection of different types of variables with two-block omics data. Firstly, we propose a wrapper method that aggregates two classifiers (CART or SVM) to select discriminative genes for binary or multiclass biological conditions. Secondly, we develop a PLS variant called sparse PLS that adapts l1 penalization and allows for the selection of a subset of variables measured on the same biological samples. Both regression and canonical analysis frameworks are proposed to answer biological questions appropriately. We assess each of the proposed approaches by comparing them to similar methods known in the literature on numerous real data sets. The statistical criteria that we use are often limited by the small number of samples. We therefore always try to combine statistical assessments with a thorough biological interpretation of the results. The approaches that we propose are easy to apply and give relevant results that answer the biologists' needs.
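The l1 penalization underlying sparse PLS acts through the soft-thresholding operator, which shrinks loading coefficients toward zero and sets the small ones exactly to zero, producing variable selection. A minimal sketch of that operator (the example loadings are invented):

```python
def soft_threshold(v, lam):
    """Soft-thresholding, the proximal map of the l1 penalty: each
    coefficient is shrunk by lam in absolute value, and coefficients
    smaller than lam are set exactly to zero."""
    return [max(abs(x) - lam, 0.0) * (1 if x > 0 else -1 if x < 0 else 0)
            for x in v]

loadings = [0.9, -0.2, 0.05, -1.4]
sparse = soft_threshold(loadings, 0.3)   # ≈ [0.6, 0.0, 0.0, -1.1]
```

In sparse PLS, this thresholding is applied to the loading vectors at each iteration, so only the variables with surviving loadings enter the component.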
Phi, Tien Cuong. "Décomposition de Kalikow pour des processus de comptage à intensité stochastique." Thesis, Université Côte d'Azur, 2022. http://www.theses.fr/2022COAZ4029.
Full text
The goal of this thesis is to construct algorithms that are able to simulate the activity of a neural network. The activity of the neural network can be modeled by the spike train of each neuron, represented by a multivariate point process. Most of the known approaches to simulating point processes encounter difficulties when the underlying network is large. In this thesis, we propose new algorithms using a new type of Kalikow decomposition. In particular, we present an algorithm to simulate the behavior of one neuron embedded in an infinite neural network without simulating the whole network. We focus on mathematically proving that our algorithm returns the right point processes and on studying its stopping condition. Then, a constructive proof shows that this new decomposition holds for various point processes. Finally, we propose algorithms that can be parallelized and that enable us to simulate a hundred thousand neurons in a complete interaction graph on a laptop computer. Most notably, the complexity of this algorithm seems linear with respect to the number of neurons in simulations.
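As background for the simulation problem above, the classical baseline for point processes with a bounded stochastic intensity is thinning: propose candidate times from a homogeneous Poisson process and accept each with probability proportional to the true intensity. This is not the thesis's Kalikow-based algorithm, only the standard building block such approaches refine; the intensity function below is invented for the example:

```python
import math
import random

def thinning(intensity, lam_max, t_end, seed=0):
    """Simulate a point process with intensity(t) <= lam_max on [0, t_end)
    by thinning: candidates arrive at rate lam_max, and candidate t is
    accepted with probability intensity(t) / lam_max."""
    rng = random.Random(seed)
    t, spikes = 0.0, []
    while True:
        t += rng.expovariate(lam_max)        # next candidate time
        if t >= t_end:
            return spikes
        if rng.random() <= intensity(t) / lam_max:
            spikes.append(t)                 # accepted spike

# example: sinusoidally modulated rate, peak 5 Hz, over 100 time units
spikes = thinning(lambda t: 2.5 * (1.0 + math.sin(t)), lam_max=5.0, t_end=100.0)
```

The difficulty the thesis addresses is that in a large network the intensity of one neuron depends on the whole past of (infinitely) many others, which is where the Kalikow decomposition comes in.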
VIALA, JEAN-RENAUD. "Apprentissage de reseaux de neurones en couches par la regle de retropropagation du gradient : developpement d'un algorithme competitif pour la compression et la segmentation d'images." Paris 6, 1990. http://www.theses.fr/1990PA066802.
Full textBaji, Bruno. "Quelques contributions à des systèmes dynamiques et algorithmes issus de la mécanique non-régulière et de l'optimisation." Montpellier 2, 2009. http://www.theses.fr/2009MON20007.
Full textMoulard, Laurence. "Optimisation de maillages non structurés : applications à la génération, à la correction et à l'adaptation." Université Joseph Fourier (Grenoble), 1994. http://www.theses.fr/1994GRE10173.
Full text
A theoretical study introduces new objects, realizable tetraphores, by considering only the topological conditions of a mesh. These objects are easily constructed from the boundary of the domain to be meshed; it then suffices to add geometric constraints, very simple to test and expressible as a criterion to be optimized, in order to obtain a mesh. Operations transforming these tetraphores are defined. The optimization algorithms are thus much more efficient, because they can be applied to a set larger than that of meshes.
The algorithms described in this thesis are used industrially. Results are given for optimization according to geometric and topological criteria, adaptation according to a density criterion, correction after boundary deformation, and mesh generation.
Loulidi, Sanae. "Modélisation stochastique en finance, application à la construction d’un modèle à changement de régime avec des sauts." Thesis, Bordeaux 1, 2008. http://www.theses.fr/2008BOR13675/document.
Full text
Allaya, Mouhamad M. "Méthodes de Monte-Carlo EM et approximations particulaires : application à la calibration d'un modèle de volatilité stochastique." Thesis, Paris 1, 2013. http://www.theses.fr/2013PA010072/document.
Full text
This thesis pursues a double perspective in the joint use of sequential Monte Carlo (SMC) methods and the Expectation-Maximization (EM) algorithm for hidden Markov models whose unobserved signal has a Markov dependence structure of order greater than one. First, we briefly describe the theoretical basis of both statistical concepts in Chapters 1 and 2, which are devoted to them. Second, we focus on the simultaneous implementation of both concepts in Chapter 3, in the usual setting where the dependence structure is of order 1. The contribution of SMC methods in this work lies in their ability to effectively approximate any bounded conditional functional, in particular the filtering and smoothing quantities, in non-linear and non-Gaussian settings. The EM algorithm is itself motivated by the presence of both observable and unobservable (or partially observed) variables in hidden Markov models, and particularly in the stochastic volatility models under study. Having presented the EM algorithm and the SMC methods with some of their properties in Chapters 1 and 2 respectively, we illustrate these two statistical tools through the calibration of a stochastic volatility model. This application is done for exchange rates and for some stock indexes in Chapter 3. We conclude this chapter with a slight departure from the canonical stochastic volatility model, as well as Monte Carlo simulations on the resulting model. Finally, in Chapters 4 and 5, we strive to provide the theoretical and practical foundations of extensions of sequential Monte Carlo methods, including particle filtering and smoothing, when the Markov structure is more pronounced. As an illustration, we give the example of a degenerate stochastic volatility model whose approximation has such a dependence property.
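The order-1 setting described above corresponds to the classical bootstrap particle filter: propagate particles through the state equation, weight them by the observation likelihood, then resample. A sketch on a linear-Gaussian toy model rather than the thesis's stochastic volatility model (all parameter values are ours for the example):

```python
import math
import random

def bootstrap_filter(ys, n_part=500, a=0.9, sig_x=0.5, sig_y=0.5, seed=0):
    """Bootstrap particle filter for x_t = a*x_{t-1} + N(0, sig_x^2),
    y_t = x_t + N(0, sig_y^2). Returns the filtered mean at each step."""
    rng = random.Random(seed)
    parts = [rng.gauss(0.0, 1.0) for _ in range(n_part)]
    means = []
    for y in ys:
        parts = [a * p + rng.gauss(0.0, sig_x) for p in parts]           # propagate
        ws = [math.exp(-0.5 * ((y - p) / sig_y) ** 2) for p in parts]    # weight
        tot = sum(ws)
        means.append(sum(w * p for w, p in zip(ws, parts)) / tot)
        parts = rng.choices(parts, weights=ws, k=n_part)                 # resample
    return means

# synthetic data drawn from the same model
rng = random.Random(1)
xs, x = [], 0.0
for _ in range(200):
    x = 0.9 * x + rng.gauss(0.0, 0.5)
    xs.append(x)
ys = [xt + rng.gauss(0.0, 0.5) for xt in xs]
means = bootstrap_filter(ys)   # filtered means track the hidden states
```

For higher-order Markov dependence, as in Chapters 4 and 5, the propagation step must carry several lags of the state, which is precisely the extension the thesis develops.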
Riff-Rojas, Maria-Cristina. "Résolution de problèmes de satisfaction de contraintes avec des algorithmes évolutionnistes." Phd thesis, Ecole Nationale des Ponts et Chaussées, 1997. http://tel.archives-ouvertes.fr/tel-00523169.
Full text