Dissertations / Theses on the topic 'Méthode des moindres carrés'
Consult the top 50 dissertations / theses for your research on the topic 'Méthode des moindres carrés.'
Mir, Youness. "Approximation par la méthode des moindres carrés." Mémoire, Université de Sherbrooke, 2007. http://savoirs.usherbrooke.ca/handle/11143/4753.
Riabi, Mohamed Lahdi. "Contribution à l'étude des structures planaires microondes par la méthode des moindres carrés." Toulouse, INPT, 1992. http://www.theses.fr/1992INPT103H.
Benzina, Hafed. "Application de la méthode des moindres carrés à l'analyse de structures planaires non rayonnantes." Toulouse, INPT, 1990. http://www.theses.fr/1990INPT055H.
Smolders, André. "Etude d'algorithmes de détection de ruptures de modèles et de prédiction adaptative : application à un problème de poursuite de mobiles manoeuvrants." Nice, 1986. http://www.theses.fr/1986NICE4010.
Varin, Élisabeth. "Résolution de l'équation de transport neutronique par une méthode de moindres carrés en trois dimensions." Châtenay-Malabry, Ecole centrale de Paris, 2001. http://www.theses.fr/2001ECAP0717.
The transport equation describes the neutron density in space and time, along velocity directions. In three dimensions, the classical deterministic solution methods each have their limitations in certain physical media, such as diffusive regions or void materials, and the different approaches must be combined to obtain the solution in every medium. The work presented in this thesis shows the implementation in three dimensions of a least squares method applied to the scaled transport equation. This method, originally developed by K. J. Ressel, can be applied directly to all material types and remains usable in diffusive regimes. A spherical harmonics expansion PN and a finite element discretisation in space lead to a system of linear equations with a symmetric matrix. The discretisation error is obtained and bounded. This property of the system matrix allows the use of robust numerical methods such as the conjugate gradient method. Boundary conditions are taken care of by a low-order approach, developed for PN expansions in 3D and for complex arithmetic. Three-dimensional discrete equations are obtained for hexahedral and tetrahedral elements. The complete implementation of the method is explained, leading to the code ARTEMIS. In this code, the expansion order PN can be changed from region to region, decreasing the total number of unknowns. Simple tests show the correct implementation of the approach, and multigroup anisotropic extensions are also validated in three dimensions. Results for one of the OECD Takeda benchmarks, used as a realistic reference case, are presented.
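The least-squares formulation above is what makes the conjugate gradient method applicable: the resulting matrix is symmetric positive definite. A minimal sketch of that idea on a toy 1D advection-absorption operator (not the ARTEMIS code; the operator and sizes are invented for illustration):

```python
# Sketch: a least-squares formulation of a linear operator equation L u = q leads
# to the normal equations (L^T L) u = L^T q, whose matrix is symmetric positive
# definite, so conjugate gradient applies. Toy 1D operator, invented.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

n = 200
h = 1.0 / (n + 1)
# Upwinded advection + absorption: a simple non-symmetric transport-like operator.
L = diags([1.0 + 1.0 / h, -1.0 / h], [0, -1], shape=(n, n), format="csc")
q = np.ones(n)

A = (L.T @ L).tocsc()          # symmetric positive definite by construction
rhs = L.T @ q
u, info = cg(A, rhs)
print("CG converged:", info == 0, " residual:", np.linalg.norm(L @ u - q))
```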
Trigeassou, Jean-Claude. "Contribution à l'extension de la méthode des moments en automatique : application à l'identification des systèmes linéaires." Poitiers, 1987. http://www.theses.fr/1987POIT2005.
Meziane-El, May Hédia. "Extension de la régression classique à des problèmes typologiques et présentation de la "méthode des tranches de densité" : une approche basée sur la percolation." Aix-Marseille 3, 1991. http://www.theses.fr/1991AIX32000.
We tackle here the problem of generalising the regression concept in order to take into account the structural aspects of the data. This issue is mostly overlooked, although taking it into consideration would allow a better understanding of a great number of phenomena and would lead to more adequate models. In addition to outliers, other fundamental factors, structural in nature, may undermine the quality of the models. Our objective is therefore to show that, from the same set of data, it is possible to search automatically for one or even several resulting models. We propose in this thesis a method of resolution that is both simple and efficient: the "method of density slices", whose aim is to find underlying "multimodels" when handling heterogeneous data. This method attempts to synthesise regression and classification techniques. The non-hierarchical classification of the data is guided by the "percolation principle" and is carried out simultaneously with the computation of the regression hyperplanes. The percolation principle, which aims to find the points of strong density, is applied to slices of points rather than to individual points. The bases of this method are discussed, and an algorithm and some results are presented.
Basmaji, Riad. "Etude des réseaux géodésiques réguliers : matrice de covariance des réseaux réguliers finis de type nivellement : application de la méthode de la transformée de Fourier à l'analyse des réseaux réguliers infinis à une et deux variables." Observatoire de Paris, 1989. https://tel.archives-ouvertes.fr/tel-01958574.
Thouzé, Arsène. "Méthode numérique d'estimation du mouvement des masses molles." Thèse, Poitiers, 2013. http://hdl.handle.net/1866/10763.
Biomechanical analysis of human movement using optoelectronic systems and skin markers treats body segments as rigid bodies. However, the motion of soft tissue relative to the bone, including muscles and fat mass, results in relative displacement of the markers. This displacement has two components: an own component, corresponding to a random motion of each marker, and an in-unison component, corresponding to the common movement of the skin markers resulting from the movement of the underlying wobbling mass. While most studies aim to minimise these displacements, computer simulation models have shown that the motion of the soft tissue relative to the bones reduces the joint kinetics. This observation is only available from computer simulations because no method can distinguish the kinematics of the wobbling mass from the kinematics of the bones. The main objective of this thesis is to develop a numerical method able to distinguish these different kinematics. The first aim of this thesis was to assess a local optimisation method for estimating the soft tissue motion, using intra-cortical pins screwed into the humerus in three subjects. The results show that local optimisation underestimates the marker displacements by 50% and leads to a different ranking of the markers in terms of displacement. The limit of local optimisation is that it does not consider all the components of the soft tissue motion, especially the in-unison component. The second aim was to develop a numerical method that accounts for all the components of the soft tissue motion. More specifically, this method should provide joint kinematics similar to conventional approaches while estimating large marker displacements and distinguishing the two components. The lower limb is modelled using a 10-degree-of-freedom chain model reconstructed by global optimisation from markers placed only on the pelvis and the medial face of the shank. Estimating the joint kinematics without the markers placed on the thigh and the calf avoids the influence of their displacement on the reconstruction of the kinematic model. This method was tested on 13 subjects performing hopping trials and, while ensuring similar joint kinematics, estimated marker displacements up to 2.1 times larger, depending on the method considered. A vector approach showed that the marker displacement is induced more by the in-unison component. A matrix approach combining local optimisation and the kinematic model showed that the wobbling mass moves around the longitudinal axis and along the antero-posterior axis of the bone. The originality of this thesis is to distinguish numerically the bone kinematics from the wobbling mass kinematics and the two components of the soft tissue motion. The methods developed in this thesis increase the knowledge of soft tissue motion and allow future studies to consider it in joint kinetics calculations.
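The "local optimisation" evaluated here is, in essence, a least-squares rigid-body fit of a marker cluster. A minimal sketch of such a fit (a Kabsch/Söderkvist-style SVD solution on invented marker data; not the thesis pipeline):

```python
# Sketch of a least-squares rigid-body fit: find rotation R and translation t
# mapping reference markers onto measured ones; the residuals are one way to
# quantify soft tissue artefact. Invented data.
import numpy as np

def rigid_fit(ref, obs):
    """Least-squares rigid transform so that obs ~ R @ ref + t; shapes (3, n)."""
    ref_c = ref - ref.mean(axis=1, keepdims=True)
    obs_c = obs - obs.mean(axis=1, keepdims=True)
    U, _, Vt = np.linalg.svd(obs_c @ ref_c.T)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])   # guard against reflections
    R = U @ D @ Vt
    t = obs.mean(axis=1) - R @ ref.mean(axis=1)
    return R, t

rng = np.random.default_rng(0)
ref = rng.normal(size=(3, 6))                        # marker cluster at rest
R_true = np.linalg.qr(rng.normal(size=(3, 3)))[0]
if np.linalg.det(R_true) < 0:                        # force a proper rotation
    R_true = -R_true
obs = R_true @ ref + 0.002 * rng.normal(size=ref.shape)  # bone motion + artefact
R, t = rigid_fit(ref, obs)
res = obs - (R @ ref + t[:, None])
print("per-marker residual displacement:", np.linalg.norm(res, axis=0))
```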
Pujol, Serge. "Contribution à l'étude de circuits planaires micro-ondes par une méthode intégrale en utilisant différents types de fonctions d'essai." Toulouse, INPT, 1992. http://www.theses.fr/1992INPT070H.
Boussouis, Mohamed. "Contribution à la résolution des problèmes aux limites dans les dispositifs micro-ondes par la méthode des moindres carrés." Toulouse, INPT, 1987. http://www.theses.fr/1987INPT073H.
Dehay, Guy. "Application des moindres carrés récursifs à l'estimation en temps réel des paramètres de la machine asynchrone." Compiègne, 1996. http://www.theses.fr/1996COMP9425.
Hiernard, Erwan. "Méthodes d'éléments finis et moindres carrés pour la résolution des équations de Navier-Stokes." Paris 11, 2003. http://www.theses.fr/2003PA112071.
In this work, we solve the three-dimensional Navier-Stokes equations in the velocity-vorticity-pressure formulation by finite element methods and a least-squares formulation. This kind of method is characterised by the fact that the LBB condition is either not needed or trivially verified. We present two methods. The first derives directly from the least-squares finite element method (LSFEM), but the weak formulation is obtained using a Petrov-Galerkin method. We obtain non-square linear systems, which are solved using least-squares techniques. As with other LSFEM methods, the LBB condition does not have to be verified. The implementation of this method is very easy, but it has the same restrictions as the LSFEM. We prove that the method is convergent and obtain error estimates. In the second method, we use the Whitney finite element spaces, in which each operator and equation can be properly expressed. Moreover, in this method the linear systems are square and can be solved directly. We also prove convergence of this method and error estimates, and the LBB condition is directly verified. Numerical comparisons between the two methods show that the second gives better results in terms of accuracy, but at a higher computational cost.
Bergou, El Houcine. "Méthodes numériques pour les problèmes des moindres carrés, avec application à l'assimilation de données." Thesis, Toulouse, INPT, 2014. http://www.theses.fr/2014INPT0114/document.
The Levenberg-Marquardt algorithm (LM) is one of the most popular algorithms for the solution of nonlinear least squares problems. Motivated by the problem structure in data assimilation, we consider in this thesis the extension of the LM algorithm to scenarios where the linearized least squares subproblems, of the form min ||Ax - b||^2, are solved inexactly and/or the gradient model is noisy and accurate only within a certain probability. Under appropriate assumptions, we show that the modified algorithm converges globally and almost surely to a first-order stationary point. Our approach is applied to an instance in variational data assimilation where stochastic models of the gradient are computed by the so-called ensemble Kalman smoother (EnKS). A proof of convergence in L^p of the EnKS to the Kalman smoother in the limit of large ensembles is given. We also show the convergence of the LM-EnKS approach, a variant of the LM algorithm with the EnKS as linear solver, to the classical LM algorithm in which the linearized subproblem is solved exactly. The sensitivity of the truncated singular value decomposition method for solving the linearized subproblems is studied, and we formulate an explicit expression for the condition number of the truncated least squares solution, given in terms of the singular values of A and the Fourier coefficients of b.
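The truncated SVD solution mentioned at the end of the abstract has a simple closed form: keep only the k largest singular values and the corresponding Fourier coefficients of b. A minimal sketch on an invented ill-conditioned matrix (not the thesis implementation):

```python
# Sketch: truncated-SVD solution of min ||Ax - b||^2. x_k keeps the k largest
# singular values; s[0]/s[k-1] is the effective condition number. Invented matrix.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(50, 10)) @ np.diag(10.0 ** -np.arange(10))  # ill-conditioned
b = rng.normal(size=50)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
fourier = U.T @ b              # Fourier coefficients of b in the left singular basis
for k in (3, 6, 10):
    x_k = Vt[:k].T @ (fourier[:k] / s[:k])   # truncated least-squares solution
    print(k, "cond:", s[0] / s[k - 1], "||x_k||:", np.linalg.norm(x_k))
```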
Georges, George. "Algorithmes de calcul de positions GNSS basés sur les méthodes des moindres carrés avancées." Thesis, Belfort-Montbéliard, 2016. http://www.theses.fr/2016BELF0298/document.
In this thesis, the neural approach TLS EXIN is proposed in a new way to estimate the position of a GPS receiver. The general idea is to develop a more robust method for computing the position. The "pseudorange alone" method is one of the simplest and most widely used techniques for estimating the GPS position, and it results in solving an overdetermined system of linear equations Ax ≈ b. In general, ordinary least squares (OLS) and weighted least squares (WLS) are the methods commonly used to estimate the position of a receiver, for their speed and robustness, but the particular structure of the data matrix A and the noise affecting its entries are not considered. This thesis instead aims to address these problems and study the behaviour of least squares (LS) methods in the presence of a noisy data matrix A. The total least squares (TLS) approach takes into account the noise in both A and b. It is a less robust technique than OLS and more sensitive to data changes, and it is generally solved as a direct method. The neural network TLS EXIN, an iterative (gradient flow) algorithm for solving TLS, works better both because it may exploit initial-condition information derived from previous epochs and because, in the case of null initial conditions, it yields an accurate estimate even for close-to-degenerate data matrices. To compare the different least squares methods, two data sets were collected: the first comes from the TERIA network and includes data from reference stations located throughout France, while the second is the result of a measurement campaign using a GPS receiver (Ublox NL-6002U). Using the real data, a low condition number was estimated; in this case all LS methods yield equivalent estimates, so OLS and WLS are to be preferred for their low computational cost. However, the worst-case scenario, which may occur for a distant satellite and results in ill-conditioning of the GPS problem, was also investigated. This extreme situation justifies the use of the TLS EXIN neural network, and the results obtained confirm the interest of this approach even for very large condition numbers.
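For a single right-hand side, the TLS problem that TLS EXIN solves iteratively also has a closed form via the SVD of the augmented matrix [A | b]. A minimal sketch contrasting it with OLS on invented data (the neural iteration itself is not reproduced here):

```python
# Sketch: OLS vs closed-form TLS for Ax ~ b with a noisy data matrix A. The TLS
# solution comes from the smallest right singular vector of [A | b]. Invented data.
import numpy as np

rng = np.random.default_rng(2)
x_true = np.array([1.0, -2.0, 0.5])
A_clean = rng.normal(size=(40, 3))
b = A_clean @ x_true
A = A_clean + 0.05 * rng.normal(size=A_clean.shape)   # noise in the entries of A

x_ols, *_ = np.linalg.lstsq(A, b, rcond=None)

_, _, Vt = np.linalg.svd(np.column_stack([A, b]))
v = Vt[-1]                     # right singular vector of the smallest singular value
x_tls = -v[:-1] / v[-1]

print("OLS error:", np.linalg.norm(x_ols - x_true))
print("TLS error:", np.linalg.norm(x_tls - x_true))
```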
Settineri, Roch. "Etude et developpement d'algorithmes de filtrage adaptatif en treillis. Application a l'identification de modeles arma." Nice, 1991. http://www.theses.fr/1991NICE4493.
Ba, Amadou-Abdoulaye. "Contribution à la surveillance d'un processus de forage pétrolier." Paris, ENSAM, 2010. http://www.theses.fr/2010ENAM0007.
During this PhD thesis, our work was directed towards identification approaches serving as tools for on-line monitoring procedures, dedicated particularly to oilfield drilling processes. Several approaches are proposed. The role of the first, denoted GVFF-RLS (Gradient Variable Forgetting Factor Recursive Least Squares), is to reduce the duration of the transient stage by providing faster convergence than FF-RLS; consequently, this approach allows faster fault detection. An extension of GVFF-RLS named SALR-GVFF-RLS is then developed, where SALR stands for Stochastic Adaptive Learning Rate; its specificity is to accelerate the convergence of GVFF-RLS by making the learning rate adaptive, and it provides better performance than GVFF-RLS. In order to ensure the stability of SALR-GVFF-RLS, we develop an approach giving the maximum value of the learning rate, leading to a new algorithm called LST-GVFF-RLS, where LST stands for Lyapunov Stability Theory. Besides providing fast convergence, LST-GVFF-RLS guarantees the stability of the algorithm. To take into account the particularities of a drilling process, we also explore approaches based on sequential Monte Carlo methods, use one of their variants (RBPF), and emphasise its potential for fault detection strategies. These approaches were tested on databases obtained from field tests and showed interesting performance in terms of fast and reliable detection of bit balling.
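The base algorithm that GVFF-RLS extends is recursive least squares with a forgetting factor. A minimal sketch with a fixed factor lam (the gradient adaptation of lam is thesis-specific and not reproduced):

```python
# Sketch: recursive least squares with a fixed forgetting factor lam; GVFF-RLS
# adapts lam on-line with a gradient rule (not reproduced here). Invented data.
import numpy as np

def rls(phis, ys, n_params, lam=0.98, delta=100.0):
    theta = np.zeros(n_params)
    P = delta * np.eye(n_params)             # large initial covariance
    for phi, y in zip(phis, ys):
        err = y - phi @ theta                # a priori prediction error
        k = P @ phi / (lam + phi @ P @ phi)  # gain vector
        theta = theta + k * err
        P = (P - np.outer(k, phi @ P)) / lam
    return theta

rng = np.random.default_rng(3)
theta_true = np.array([0.7, -0.2])
phis = rng.normal(size=(500, 2))
ys = phis @ theta_true + 0.01 * rng.normal(size=500)
print("estimate:", rls(phis, ys, 2))         # close to theta_true
```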
Dupret, Yoan. "Modélisation des dispersions des performances des cellules analogiques intégrées en fonction des dispersions du processus de fabrication." Paris 11, 2005. http://www.theses.fr/2005PA112106.
Zézé, Djédjé Sylvain. "Calcul de fonctions de forme de haut degré par une technique de perturbation." Thesis, Metz, 2009. http://www.theses.fr/2009METZ056S.
Most problems of physics and mechanics lead to partial differential equations, and most existing methods are of relatively low degree. In this thesis, we propose a method of very high degree. Our idea is to increase the order of the interpolation functions via a perturbation technique, in order to avoid or reduce the difficulties caused by costly operations such as integration. In dimension 1, the proposed technique is close to the p-version of the finite element method. At the element level, the solution is approximated by a power series of order p. In the case of a linear equation of order 2, the local resolution builds a high-degree element with two degrees of freedom per element. For nonlinear problems, a linearisation of the problem by Newton's method is needed. Tests involving linear and nonlinear equations were used to validate the method and show that the technique converges similarly to the p-version of the finite element method. In dimension 2, the problem is discretised by reorganising the polynomials into homogeneous polynomials of degree k. After defining so-called principal and secondary variables, combined with a vertical scanning of the field, the problem becomes a series of 1D problems. A collocation technique takes the boundary and coupling conditions into account and determines the solution of the problem. The collocation technique coupled with least squares improved the initial results and made the perturbation technique more robust.
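The local series idea in dimension 1 can be illustrated on u'' = f: a truncated power series of order p whose coefficients follow from a simple recurrence, leaving exactly two degrees of freedom (a0, a1) per element. A toy sketch under these assumptions, not the thesis implementation:

```python
# Sketch: local power-series solution of u'' = f on one element. The recurrence
# fixes coefficients a_2..a_p from f, leaving two degrees of freedom (a0, a1).
import math
import numpy as np

def local_series(f_coeffs, a0, a1, p):
    """u'' = f with f given by power-series coefficients; returns u to order p."""
    a = np.zeros(p + 1)
    a[0], a[1] = a0, a1
    for k, fk in enumerate(f_coeffs[: p - 1]):
        a[k + 2] = fk / ((k + 1) * (k + 2))  # match x^k terms of u'' and f
    return np.polynomial.Polynomial(a)

# Example: u'' = exp(x) with u(0) = u'(0) = 1, whose exact solution is exp(x).
p = 12
f_coeffs = np.array([1.0 / math.factorial(k) for k in range(p)])
u = local_series(f_coeffs, 1.0, 1.0, p)
print("truncation error at x=0.5:", abs(u(0.5) - np.exp(0.5)))
```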
Da Cunha, Joao Paulo. "Diagnostic thermique de la machine à courant continu par identification paramétrique." Poitiers, 1999. http://www.theses.fr/1999POIT2353.
Sudarno, Wiharjo. "Amélioration des méthodes d'identification de type moindres carrés appliquées à la commande adaptative et à la reconnaissance de formes." Toulouse 3, 1996. http://www.theses.fr/1996TOU30048.
Demeestere, Pierre. "Méthode des sentinelles : étude comparative et application à l'identification d'une frontière : contrôle du temps d'explosion d'une équation de combustion." Compiègne, 1997. http://www.theses.fr/1997COMP1030.
Bruneau, Charles-Henri. "Résolution des équations d'Euler compressibles par une méthode variationnelle en éléments finis : schémas aux différences décentrées pour la résolution des équations de Navier-Stokes incompressibles." Paris 11, 1989. http://www.theses.fr/1989PA112381.
This work is divided into two parts, both concerning the solution of problems in fluid dynamics: compressible Euler flows in two and three dimensions, and incompressible Navier-Stokes flows in two dimensions. The methods of solution are very different from one application to the other. In the first case, the solution of various problems is obtained by a fixed point algorithm, Newton linearisation, a least-squares minimisation and a finite element approximation of the conservative variables; moreover, an artificial viscosity term or an entropy corrector with mesh adaptation is required to obtain transonic or hypersonic solutions. In the second case, a finite difference scheme with an antidiffusion term is used to approximate the Navier-Stokes operator, and the solution is achieved by means of the multigrid method coupled with a cell-by-cell relaxation smoother. In addition to efficient methods, structured programming including optimisation and vectorisation yields good performance in terms of CPU time. The work of this thesis gives an outlook on a broad range of knowledge and techniques developed over the last few years in numerical analysis.
Nion, Dimitri. "Méthodes PARAFAC généralisées pour l'extraction aveugle de sources : Application aux systèmes DS-CDMA." Cergy-Pontoise, 2007. http://www.theses.fr/2007CERG0322.
The goal of this PhD thesis is to develop generalised PARAFAC decompositions for blind source extraction in a multi-user DS-CDMA system. The spatial, temporal and code diversities make it possible to store the samples of the received signal in a third-order tensor. In order to estimate the symbols of each user blindly, we decompose this tensor into a sum of user contributions. Until now, this multilinear approach has been used in wireless communications for instantaneous channels, in which case the problem is solved by the PARAFAC decomposition. However, for multipath channels with inter-symbol interference, this decomposition has to be generalised. The main contribution of this thesis is to build tensor decompositions more general than PARAFAC to take this propagation scenario into account.
Rachedi, Fatiha. "Estimateurs cribles des processus autorégressifs Banachiques." Phd thesis, Université Pierre et Marie Curie - Paris VI, 2005. http://tel.archives-ouvertes.fr/tel-00012194.
Full textde représenter des processus à temps continu. Nous considérons
l'estimation de l'opérateur d'autocorrelation d'un ARB(1). Les
méthodes classiques d'estimation (maximum de vraisemblance et
moindres carrées) s'avèrent inadéquates quand l'espace
paramétrique est de dimension infinie, Grenander (1983} a proposé
d'estimer le paramètre sur un sous espace de dimension en général
finie m, puis d'étudier la consistance de cet estimateur lorsque
la dimension m tend vers l'infini avec le nombres d'observations
à vitesse convenable. Cette méthode est dite méthode des cribles.
Notons que plus généralement il serait possible d'utiliser la
méthode des f-divergences. Nous définissons la méthode des
moindres carrées comme problème d'optimisation dans un espace de
Banach dans le cas ou l'opérateur est p-sommable,
p>1. Nous montrons la convergence de l'estimateur
crible et sa normalité asymptotique dans le cas d'un opérateur est
strictement -intégral. Nous utilisons la représentation duale
de la f-divergence pour définir l'estimateur du minimum des
f-divergences. Nous nous limitons ici à l'étude de
l'estimateur dit du minimum de KL-divergence (divergence de
Kullback-Leibler). Cet estimateur est celui
du maximum de vraisemblance. Nous montrons par la suite qu'il
converge presque surement vers la vraie valeur du paramètre
pour la norme des opérateurs p-sommables. La démonstration est
basée sur les techniques de Geman et Hwang (1982), utilisées pour
des observations indépendantes et identiquement distribuées, qu'on
a adapté au cas autorégressif.
Terzakis, Demètre. "Estimateurs à rétrécisseurs (cas de distributions normales à variance connue à un facteur près) : contrôle de l'emploi de la différence entre l'observation et l'estimation des moindres carrés." Rouen, 1987. http://www.theses.fr/1987ROUES015.
Beau, Noëlle. "Application du maximum de vraisemblance à la comparaison de deux méthodes de dosage biologique : cas des variances inconnues." Paris 11, 1990. http://www.theses.fr/1990PA11P171.
Khereddine, R. "Méthode adaptative de contrôle logique et de test de circuits AMS/RF." Phd thesis, Université de Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00656920.
Barotto, Béatrice. "Introduction de paramètres stochastiques pour améliorer l'estimation des trajectoires d'un système dynamique par une méthode de moindres carrés : application à la détermination de l'orbite d'un satellite avec une précision centimétrique." Toulouse 3, 1995. http://www.theses.fr/1995TOU30196.
Lebreux, Marc. "Contrôle de l'épaisseur de gelée dans les réacteurs métallurgiques à haute température à l'aide d'un capteur virtuel." Thèse, Université de Sherbrooke, 2011. http://savoirs.usherbrooke.ca/handle/11143/1966.
Martin, Marie-Laure. "Données de survie." Paris 11, 2001. http://www.theses.fr/2001PA112335.
We consider two statistical problems arising in the estimation of the hazard function of cancer death in Hiroshima. The first is the estimation of the hazard function when the covariate is mismeasured. In Chapter 2, only grouped data are available, and the mismeasurement of the covariate is modelled as a misclassification. An easily implemented estimation procedure, based on a generalisation of the least squares method, is devised for estimating simultaneously the parameters of the hazard function and the misclassification probabilities. The procedure is applied to take into account the mismeasurement of the radiation dose in the estimation of the hazard function of solid cancer death in Hiroshima. In Chapter 3, the available data are individual data. We consider an excess relative risk model and assume that the covariate is measured with a Gaussian additive error. We propose an estimation criterion based on the partial log-likelihood, and we show that the estimator obtained by maximising this criterion is consistent and asymptotically Gaussian. Our result extends to other polynomial regression functions, to the Cox model and to the log-normal error model. The second problem is the non-parametric estimation of the hazard function. We consider the excess relative and absolute risk models and propose a non-parametric estimation of the effect of the covariate using a model selection procedure, for stratified data. We approximate the function of the covariate by a collection of spline functions and select the best one according to the Akaike Information Criterion. In the same way, we choose whether the excess relative risk model or the excess absolute risk model best fits the data. We apply our method to the estimation of the solid cancer and leukaemia death hazard functions in Hiroshima.
Huang, Wenjia. "Direct Mass Measurements and Global Evaluation of Atomic Masses." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS151/document.
The Atomic Mass Evaluation (AME), started in the 1960s, is the most reliable source of comprehensive information related to atomic masses. It provides the best values for the atomic masses and their associated uncertainties by evaluating experimental data from decay, reactions and mass spectrometry. In this thesis, the philosophy and the most important features of the AME are discussed in detail. The most recent developments of the latest mass table (AME2016), such as the molecular binding energy, the energy correction for implantation measurements and the relativistic formula for the alpha-decay process, are presented. Another part of this thesis concerns the data analysis from the Penning-trap spectrometer ISOLTRAP at ISOLDE/CERN. The new results are included in the global adjustment and their influence on the existing masses is discussed. The last part of this thesis is devoted to studies of the systematic error of the ISOLTRAP multi-reflection time-of-flight mass spectrometer, using an off-line ion source and the on-line proton beam. From the analysis of the selected measurements, I found that the systematic error is much smaller than the statistical uncertainties obtained up to now.
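The global adjustment at the heart of the AME is a weighted least-squares fit in which each measurement constrains a linear combination of masses. A toy sketch of that principle, with three invented masses and four invented mass-difference data (the real evaluation handles thousands of correlated data):

```python
# Sketch: a toy "mass evaluation" as weighted least squares. Each row of A links
# masses through a measured combination (here simple differences); the fit gives
# adjusted masses and their covariance. All numbers invented for illustration.
import numpy as np

# Unknowns m1, m2, m3; data: m1, m2 - m1, m3 - m2, m3 - m1 with uncertainties.
A = np.array([[1., 0., 0.],
              [-1., 1., 0.],
              [0., -1., 1.],
              [-1., 0., 1.]])
y = np.array([10.00, 5.02, 3.01, 8.00])
sigma = np.array([0.10, 0.02, 0.02, 0.03])

W = np.diag(1.0 / sigma**2)
cov = np.linalg.inv(A.T @ W @ A)     # covariance of the adjusted masses
m = cov @ (A.T @ W @ y)              # weighted least-squares solution
print("adjusted masses:", m)
print("uncertainties:", np.sqrt(np.diag(cov)))
```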
Couillaud, Julien. "Formation d'image : estimation du champ lumineux et matrice de filtres couleurs." Mémoire, Université de Sherbrooke, 2012. http://hdl.handle.net/11143/5987.
Full textNudo, Frederico. "Approximations polynomiales et méthode des éléments finis enrichis, avec applications." Electronic Thesis or Diss., Pau, 2024. http://www.theses.fr/2024PAUU3067.
A very common problem in computational science is the determination of an approximation, on a fixed interval, of a function whose evaluations are known only on a finite set of points. A common approach to this problem relies on polynomial interpolation, which consists of determining a polynomial that coincides with the function at the given points. A case of great practical interest is when these points are equispaced within the considered interval. Under these hypotheses, a problem related to polynomial interpolation is the Runge phenomenon, in which the magnitude of the interpolation error increases close to the ends of the interval. In 2009, J. Boyd and F. Xu demonstrated that the Runge phenomenon can be eliminated by interpolating the function only on a proper subset formed by the nodes closest to the Chebyshev-Lobatto nodes, the so-called mock-Chebyshev nodes. However, this strategy discards almost all of the available data. In order to improve the accuracy of the method proposed by Boyd and Xu while making full use of the available data, S. De Marchi, F. Dell'Accio and M. Mazza introduced a new technique known as the constrained mock-Chebyshev least squares approximation. In this method, the nodal polynomial, essential for ensuring interpolation at the mock-Chebyshev nodes, plays a crucial role; its extension to the bivariate case, however, requires alternative approaches. The procedure recently developed by F. Dell'Accio, F. Di Tommaso and F. Nudo, employing the Lagrange multipliers method, also enables the definition of the constrained mock-Chebyshev least squares approximation on a uniform grid of points. This technique, analytically equivalent to the previously introduced univariate method, also proves to be more accurate numerically. The first part of the thesis is dedicated to the study of this new technique and its application to numerical quadrature and differentiation problems. In the second part of this thesis, we focus on the development of a unified and general framework for the enrichment of the standard triangular linear finite element in two dimensions and of the standard simplicial linear finite element in higher dimensions. The finite element method is a widely adopted approach for numerically solving partial differential equations arising in engineering and mathematical modelling [55]. Its popularity is partly attributed to its versatility in handling various geometric shapes. However, the approximations produced by this method often prove ineffective for problems with singularities. To overcome this issue, various approaches have been proposed, one of the best known relying on the enrichment of the finite element approximation space by adding suitable enrichment functions. One of the simplest finite elements is the standard linear triangular element, widely used in applications. In this thesis, we introduce a polynomial enrichment of the standard triangular linear finite element and use this new finite element to introduce an improvement of the triangular Shepard operator. Subsequently, we introduce a new class of finite elements by enriching the standard triangular linear finite element with enrichment functions that are not necessarily polynomials and that satisfy the vanishing condition at the vertices of the triangle. Later on, we generalise the results obtained in the two-dimensional case to the standard simplicial linear finite element, also using enrichment functions that do not satisfy the vanishing condition at the vertices of the simplex. Finally, we apply these new enrichment strategies to extend the enrichment of the simplicial vector linear finite element developed by Bernardi and Raugel.
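The constrained mock-Chebyshev idea can be sketched as a constrained least-squares problem solved with Lagrange multipliers (a KKT system): interpolate exactly on the equispaced nodes nearest the Chebyshev-Lobatto points and fit the rest in the least-squares sense. Node counts, degrees and the basis below are illustrative choices, not the thesis implementation:

```python
# Sketch: constrained least squares via Lagrange multipliers (KKT system) in the
# spirit of the constrained mock-Chebyshev approximation. Invented parameters.
import numpy as np

n = 40
x = np.linspace(-1.0, 1.0, n + 1)            # equispaced nodes
f = 1.0 / (1.0 + 25.0 * x**2)                # Runge's function

m = int(np.pi * np.sqrt(n / 2.0))            # interpolation degree (heuristic)
cheb = -np.cos(np.pi * np.arange(m + 1) / m)             # Chebyshev-Lobatto points
mock = np.unique([np.argmin(abs(x - c)) for c in cheb])  # nearest equispaced nodes

deg = m + 5                                   # a few extra least-squares degrees
V = np.polynomial.chebyshev.chebvander(x, deg)           # well-conditioned basis
C, d = V[mock], f[mock]                       # interpolation constraints C a = d

# KKT system for: min ||V a - f||^2  subject to  C a = d.
K = np.block([[2.0 * V.T @ V, C.T],
              [C, np.zeros((len(mock), len(mock)))]])
rhs = np.concatenate([2.0 * V.T @ f, d])
a = np.linalg.solve(K, rhs)[: deg + 1]
print("max error on the grid:", np.max(abs(V @ a - f)))
```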
Lu, Zhiping. "Analyse des processus longue mémoire stationnaires et non-stationnaires : estimations, applications et prévisions." Phd thesis, Cachan, Ecole normale supérieure, 2009. https://theses.hal.science/tel-00422376/fr/.
In this thesis, we consider two classes of long memory processes: stationary long memory processes and non-stationary long memory processes. We study their probabilistic properties, estimation methods, forecasting methods and statistical tests. Stationary long memory processes have been extensively studied over the past decades, and it has been shown that some of them have self-similarity properties that are important for parameter estimation. We review the self-similarity properties of continuous-time and discrete-time long memory processes, and establish that a stationary long memory process is asymptotically second-order self-similar, while a stationary short memory process is not. We then extend these results to specific long memory processes such as k-factor GARMA processes and k-factor GIGARCH processes, and we also investigate the self-similarity properties of some heteroscedastic models and of processes with switches and jumps. We review the parameter estimation methods for stationary long memory processes, including parametric methods (for example, maximum likelihood estimation and approximate maximum likelihood estimation) and semiparametric methods (for example, the GPH method, the Whittle method and the Robinson method); the consistency and asymptotic normality of the estimators are also investigated. Testing the fractionally integrated order of seasonal and non-seasonal unit roots of a stochastic stationary long memory process is quite important for economic and financial time series modelling. The widely used Robinson test (1994) is applied to various well-known long memory models, and via Monte Carlo experiments we study and compare the performance of this test for several sample sizes, which provides a good reference for practitioners who want to apply Robinson's test. In practice, seasonality and time-varying long-range dependence can often be observed, so that some kind of non-stationarity exists inside economic and financial data sets. To take this kind of phenomenon into account, we review the existing non-stationary processes and propose a new class of non-stationary stochastic processes: the locally stationary k-factor Gegenbauer process. We describe a procedure for consistently estimating the time-varying parameters with the help of the discrete wavelet packet transform (DWPT); the consistency and asymptotic normality of the estimates are proved, and the robustness of the algorithm is investigated through a simulation study. We also propose a forecasting method for this new non-stationary long memory process. Applications and forecasts based on the error correction term in the error correction model of the Nikkei Stock Average 225 (NSA 225) index and the West Texas Intermediate (WTI) crude oil price follow.
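Among the semiparametric estimators reviewed here, the GPH method is a simple log-periodogram regression. A minimal sketch on a crudely simulated ARFIMA(0, d, 0) series (the bandwidth and simulation choices are illustrative, not the thesis code):

```python
# Sketch: GPH log-periodogram regression for the long-memory parameter d, on a
# crudely simulated ARFIMA(0, d, 0) series (truncated MA representation).
import numpy as np

rng = np.random.default_rng(4)
n, d_true = 4096, 0.3
k = np.arange(1, n)
psi = np.concatenate([[1.0], np.cumprod((k - 1 + d_true) / k)])  # (1-B)^(-d) weights
eps = rng.normal(size=3 * n)
x = np.convolve(eps, psi, mode="valid")[:n]   # fractionally integrated noise

m = int(np.sqrt(n))                           # low-frequency bandwidth (heuristic)
j = np.arange(1, m + 1)
w = 2.0 * np.pi * j / n                       # Fourier frequencies
I = np.abs(np.fft.fft(x)[1 : m + 1]) ** 2 / (2.0 * np.pi * n)   # periodogram
X = np.log(4.0 * np.sin(w / 2.0) ** 2)
slope = np.polyfit(X, np.log(I), 1)[0]
print("d estimate:", -slope, "(true:", d_true, ")")
```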
Le, Minh Hoang. "Various Positioning Algorithms based on Received Signal Strength and/or Time/Direction (Difference) of Arrival for 2D and 3D Scenarios." Electronic Thesis or Diss., Sorbonne université, 2022. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2022SORUS072.pdf.
Localization has fascinated researchers for centuries and has motivated a large number of studies and developments in communications. The aim of positioning problems is to determine the position of a mobile device. Positioning algorithms can be divided into three methods: trilateration, multilateration and triangulation. Trilateration uses the distances between the mobile device and all the surrounding base stations to estimate the mobile position; these distances can be estimated via the Time of Arrival (ToA) or the Received Signal Strength (RSS). In multilateration, the position location is based on measured quantities whose values are a function of the Time Difference of Arrival (TDoA) of two ToAs. In triangulation, the directions of the incident signals play the most crucial role in the localization, which is why it is also referred to as direction-based localization. The Direction of Arrival (DoA) of each incident wave is taken into account to solve the positioning problem; each DoA is expressed by a single angle in 2D scenarios and by a pair of angles in 3D scenarios. There are noticeable differences between network positioning and self-positioning. In network positioning, the mobile device is localized directly from the DoAs of the incident signals; in self-positioning, its position is estimated from the Direction Difference of Arrival (DDoA) between each pair of incident signals, because the DoA of each signal arriving at the mobile device is ambiguous. In this dissertation, we study all the localization approaches described above, with particular focus on triangulation, which has many sub-scenarios to analyse. The results are obtained by MATLAB simulations.
Gonzalez, Camacho Juan-Manuel. "Modélisation stochastique d'une irrigation à la raie." Montpellier 2, 1991. http://www.theses.fr/1991MON20302.
Arafeh, Mohamed Hamzeh. "Identification de la loi de comportement élastique de matériaux orthotropes." Compiègne, 1995. http://www.theses.fr/1995COMPD843.
Petit, Jean-Louis. "Inversion élastique 3D de profils sismiques verticaux en milieux stratifiés horizontalement." Montpellier 2, 1997. http://www.theses.fr/1997MON20107.
Gusto, Gaëlle. "Estimation de l'intensité d'un processus de Hawkes généralisé double : application à la recherche de motifs corépartis le long d'une séquence d'ADN." Paris 11, 2004. http://www.theses.fr/2004PA112188.
Estimation of the intensity of a generalised Hawkes process. Application to the detection of correlated words along a DNA sequence. The objective of the thesis is the detection of possible constraints on the distances between two genomic motifs along the genome. For this, we model the positions of the occurrences of the motifs with a Hawkes process. In this model the intensity is linear and depends explicitly on two functions describing the dependence between the motifs. We estimate these functions non-parametrically with splines, using either constrained maximum likelihood or the minimisation of a least squares contrast. In each case, we use a model selection method to determine the optimal number of knots, which can be regularly spaced or not. The validation of this estimation procedure is analysed by simulations, and two biological applications are studied.
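A crude stand-in for the least-squares contrast used here: with a piecewise-constant interaction function on integer lag bins (instead of splines), estimating a linear Hawkes-type intensity reduces to a linear regression of one motif's indicator track on lagged indicators of the other. A toy sketch on simulated tracks, with all rates invented:

```python
# Sketch: least-squares estimation of a lag-dependent interaction between two
# motif tracks, a piecewise-constant stand-in for the spline-based intensity.
import numpy as np

rng = np.random.default_rng(5)
n, mu = 200_000, 0.002
h_true = {1: 0.30, 2: 0.15, 3: 0.05}      # extra B-rate at lags 1..3 after an A
occ_a = rng.random(n) < 0.01              # occurrences of motif A

boost = np.zeros(n)
for lag, h in h_true.items():
    boost[lag:] += h * occ_a[:-lag]
occ_b = rng.random(n) < np.clip(mu + boost, 0.0, 1.0)   # occurrences of motif B

# Least-squares contrast: regress the B indicator on lagged A indicators.
lags = sorted(h_true)
X = np.column_stack([np.ones(n)] + [np.roll(occ_a, lag).astype(float) for lag in lags])
X, y = X[max(lags):], occ_b.astype(float)[max(lags):]   # drop roll wrap-around
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("mu, h(1..3) estimates:", coef)      # close to 0.002, 0.30, 0.15, 0.05
```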
Fréret, Lucie Viviane Françoise. "Méthodes particulaires en vue de la simulation numérique pour la plasturgie." Châtenay-Malabry, Ecole centrale de Paris, 2007. http://www.theses.fr/2007ECAP1058.
The framework of this thesis is the simulation of injection processes for thermoplastic materials. The aim is to simulate numerically fluid flows with free boundaries in which phase transition can occur. More precisely, in this work we consider two-dimensional incompressible viscous flows with Lagrangian meshless methods. The lack of consistency of the discretized partial derivative operators of the MPS (Moving Particle Semi-implicit) method is shown. Using consistent meshless approximation techniques close to the MLS (Moving Least Squares) approximation, we then propose an original Lagrangian meshless method that discretizes the incompressible Navier-Stokes equations in a purely Lagrangian formulation. For the semi-discretization in time, we use the classical projection method. The resulting fractional step method consists of three stages: a prediction step for the positions and the velocity field, a correction step for the particle positions, and a correction step for the velocity field. Such a discretization keeps the particle distribution regular and does not require creating or destroying particles. An original numerical treatment for tracking or capturing free surfaces, together with the computation of the surface tension force, is proposed. We compare numerical results to experiments, showing the capability of our method to compute single-fluid free surface flows. In a second part, we present a two-fluid extension using a mixture model. The Rayleigh-Taylor results are compared to those obtained by other methods. Because of the limitations of such a model, we then focus on a two-fluid model in which each fluid is computed. This model first requires discretizing the non-constant-coefficient operator div(a grad); we use an integral representation and a quadrature formula with Gauss points. The resulting numerical model is an adaptation of the previous three-step method. Precise numerical results show the relevance of the approach.
Abdelmoula, Amine. "Résolution de problèmes inverses en géodésie physique." Phd thesis, Université Rennes 1, 2013. http://tel.archives-ouvertes.fr/tel-00990849.
Benmansour, Khadidja. "Modélisation mathématique en imagerie cardiaque." Thesis, Lyon, INSA, 2014. http://www.theses.fr/2014ISAL0091/document.
Cardiovascular diseases are the leading cause of death worldwide. It is therefore vital to study them in order to understand their mechanisms and to prevent and treat them more effectively, which requires an understanding of the anatomy, structure and motion of the heart. In this thesis, we are interested first in the Deformable Elastic Template model, which can extract the cardiac anatomy and motion. The Deformable Elastic Template represents the myocardium by an a priori shape model that deforms elastically to fit the specific shape of the patient's heart. In the first chapter of this thesis, we use a singular perturbation method to segment the image accurately: we demonstrate that as the elasticity coefficients tend to 0, the mathematical model converges to a solution of the segmentation problem. Within a least squares formulation, it is necessary to have an efficient numerical method for solving the transport equation in the least squares sense; the finite element method for transport phenomena cannot satisfy a weak maximum principle unless the time operator is separated from the space operator. In Chapter 2 of the thesis, we consider a space-time least squares formulation and propose to solve the problem under constraints in order to recover a discrete maximum principle. The final objective of this thesis is the dynamic monitoring of cardiac images, or the anatomical reconstruction of the heart from 2D slices orthogonal to its long axis. The mathematical tool we use for this is optimal transport. In Chapter 3 we analyse the performance of the algorithm proposed by Peyré to compute the optimal transport of our images. The numerical resolution of optimal transport is a difficult problem, costly in computation time. That is why we propose an adaptive method for determining the proximity operator of the function to be minimised, which divides by four the number of iterations required for the algorithm to converge.
Baloui, Hassan. "Une étude numérique du comportement asymptotique des solutions des équations de Navier-Stokes avec des conditions aux limites sur la pression et divergence non nulle." Cachan, Ecole normale supérieure, 1996. http://www.theses.fr/1996DENS0021.
Full textGdhami, Asma. "Méthodes isogéométriques pour les équations aux dérivées partielles hyperboliques." Thesis, Université Côte d'Azur (ComUE), 2018. http://www.theses.fr/2018AZUR4210/document.
Isogeometric Analysis (IGA) is a modern strategy for the numerical solution of partial differential equations, originally proposed by Thomas Hughes, Austin Cottrell and Yuri Bazilevs in 2005. This discretization technique is a generalization of classical finite element analysis (FEA), designed to integrate Computer Aided Design (CAD) and FEA and to close the gap between the geometrical description and the analysis of engineering problems. This is achieved by using B-splines or non-uniform rational B-splines (NURBS) for the description of geometries as well as for the representation of the unknown solution fields. The purpose of this thesis is to study isogeometric methods in the context of hyperbolic problems using B-splines as basis functions. We also propose a method that combines IGA with the discontinuous Galerkin (DG) method for solving hyperbolic problems: more precisely, the DG methodology is adopted across the patch interfaces, while traditional IGA is employed within each patch. The proposed method takes advantage of both IGA and the DG method. Numerical results are presented up to polynomial order p = 4, for both a continuous and a discontinuous Galerkin method, and are compared for a range of problems of increasing complexity in 1D and 2D.
Loth, Manuel. "Algorithmes d'Ensemble Actif pour le LASSO." Phd thesis, Université des Sciences et Technologie de Lille - Lille I, 2011. http://tel.archives-ouvertes.fr/tel-00845441.
Zhang, Ye. "Méthodes itératives hybrides asynchrones sur plateformes de calcul hétérogènes pour la résolution accélérée de grands systèmes linéaires." Thesis, Lille 1, 2009. http://www.theses.fr/2009LIL10129/document.
In this thesis, we have studied an effective parallel hybrid method for solving linear systems, GMRES / LS-Arnoldi, which accelerates convergence through knowledge of some eigenvalues computed in parallel by the Arnoldi method, for real matrices. The asynchronous nature of this method has the advantage of working with a heterogeneous architecture. A study of complex cases is also done by transforming the complex matrix into a real matrix of double dimension. We have implemented our hybrid GMRES method and the general GMRES method on three different types of hardware platform: the IBM SP series supercomputer, a typically centralized hardware platform; Grid5000, a fully distributed hardware platform; and the Tsubame (Tokyo-tech Supercomputer and Ubiquitously Accessible Mass-storage Environment) supercomputer, where some nodes are equipped with an accelerator card. We have tested the performance of general GMRES and hybrid GMRES on these three platforms, observing the influence of various parameters on performance. A number of meaningful results have been obtained: we can not only improve the performance of parallel computing but also specify the direction of our future efforts.
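The eigenvalue information that the hybrid method feeds back to GMRES comes from an Arnoldi process, whose small Hessenberg matrix yields Ritz values approximating extreme eigenvalues. A minimal sketch of that building block only (the coupling with GMRES restarts is thesis-specific; the matrix is invented):

```python
# Sketch: m steps of the Arnoldi process; eigenvalues of the Hessenberg block
# (Ritz values) approximate extreme eigenvalues of A, the kind of spectral
# information a hybrid method can exploit to accelerate GMRES.
import numpy as np

def arnoldi_ritz(A, v0, m):
    n = len(v0)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):               # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:              # invariant subspace reached
            return np.linalg.eigvals(H[: j + 1, : j + 1])
        V[:, j + 1] = w / H[j + 1, j]
    return np.linalg.eigvals(H[:m, :m])

rng = np.random.default_rng(6)
A = np.diag(np.arange(1.0, 101.0)) + 0.01 * rng.normal(size=(100, 100))
ritz = arnoldi_ritz(A, rng.normal(size=100), 20)
print("largest Ritz values:", np.sort(ritz.real)[-3:])   # near 98, 99, 100
```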
Shi, Yu-E. "Résolution numérique des équations de Saint-Venant par la technique de projection en utilisant une méthode des volumes finis dans un maillage non structuré." Phd thesis, Université de Caen, 2006. http://tel.archives-ouvertes.fr/tel-00130539.
Full textBaysse, Arnaud. "Contributions à l'identification paramétrique de modèles à temps continu : extensions de la méthode à erreur de sortie, développement d'une approche spécifique aux systèmes à boucles imbriquées." Thesis, Toulouse, INPT, 2010. http://www.theses.fr/2010INPT0047/document.
The research work presented in this thesis concerns contributions to continuous-time model parametric identification. The first contribution is the development of an output error method applied to linear models, in open and closed loop. The algorithms are presented for continuous-time models, using on-line or off-line approaches, and the method is extended to linear systems containing a pure time delay. The developed method is applied to several systems and compared to the best existing methods. The second contribution is the development of a new identification approach for cascaded-loop systems, designed for identifying electromechanical systems. It is based on the use of a generic parametric model of electromechanical drives in closed loop, on the knowledge of the applied motion laws, called excitations, and on the analysis of the internal time signals and their correlations with the parameters to be identified. The approach is developed for identifying direct current and synchronous drives, and is applied in simulations and experimental tests. The results obtained are compared to the best known identification methods.
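The output error principle is: simulate the model output for candidate parameters and minimize the sum of squared differences with the measured output. A minimal sketch on a first-order continuous-time system, with scipy's Levenberg-Marquardt solver standing in for the thesis algorithms (system, excitation and solver choice are invented for illustration):

```python
# Sketch: output-error identification of y' = -a*y + b*u from sampled data, by
# minimizing the sum of squared output errors. Invented system and excitation.
import numpy as np
from scipy.optimize import least_squares

dt, n = 0.01, 1000
rng = np.random.default_rng(7)
u = np.sign(np.sin(2.0 * np.pi * 0.5 * np.arange(n) * dt))   # square-wave input

def simulate(params):
    a, b = params
    y = np.zeros(n)
    for t in range(n - 1):                   # explicit Euler, fine for small dt
        y[t + 1] = y[t] + dt * (-a * y[t] + b * u[t])
    return y

y_meas = simulate([2.0, 1.5]) + 0.01 * rng.normal(size=n)    # "measured" output
res = least_squares(lambda p: simulate(p) - y_meas, x0=[1.0, 1.0], method="lm")
print("estimated (a, b):", res.x)            # close to (2.0, 1.5)
```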