Theses on the topic « Échelle numérique » (numerical scale)
Duclous, Roland. « Modélisation et Simulation Numérique multi-échelle du transport cinétique électronique ». PhD thesis, Université Sciences et Technologies - Bordeaux I, 2009. http://tel.archives-ouvertes.fr/tel-00472327.
Legoll, Frédéric. « Contributions à l'étude mathématique et numérique de quelques modèles en simulation multi-échelle des matériaux ». Habilitation à diriger des recherches, Université Pierre et Marie Curie - Paris VI, 2011. http://tel.archives-ouvertes.fr/tel-00783334.
The questions studied concern the sampling of the Boltzmann-Gibbs measure (with results on the non-ergodicity of some dynamics proposed in the literature) and the construction of effective dynamics: assuming that the system follows a dynamics X_t governed by the damped Langevin equation, and given a macroscopic scalar variable xi(X) that is slow in a certain sense, we propose a closed one-dimensional dynamics that approximates xi(X_t), whose accuracy is estimated using relative entropy methods.
Another part of the work develops new numerical schemes for highly oscillatory Hamiltonian problems (often encountered in molecular simulation), following a time-homogenization approach. We also proposed an adaptation of the parareal algorithm to the Hamiltonian setting, making it possible to obtain the solution of an evolution problem by parallel computing.
The second part of the dissertation presents work on the derivation of continuum-scale models from discrete (atomistic-scale) models for solids, and on the coupling of these two models, discrete and continuous. A first approach consists in posing the problem in variational form (zero-temperature modelling). We also considered systems at finite temperature, modelled in the framework of statistical mechanics. In some cases, we obtained reduced macroscopic models in which temperature is a parameter, following thermodynamic-limit-type approaches.
The third part of the dissertation addresses stochastic homogenization questions for linear elliptic partial differential equations; the materials are thus modelled at the continuum scale. The observation motivating this work is that, even in the theoretically simplest cases, the numerical methods currently available in stochastic homogenization lead to very heavy computations. We worked in two directions. The first is to reduce the variance of the random quantities actually computed, which are the only ones accessible in practice for approximating the homogenized matrix. The second is to study weakly stochastic problems, starting from the observation that heterogeneous materials, while rarely periodic, are not systematically strongly random either. The case of a random material whose randomness is only a small perturbation of a periodic model is therefore interesting, and can be treated at a much more affordable computational cost.
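As a loose illustration of the variance-reduction idea mentioned in this last part (a toy sketch, not code from the thesis), the snippet below compares a plain Monte Carlo estimator of a scalar "effective coefficient" with an antithetic-variates estimator; the integrand 1/(1+u) and the sample sizes are arbitrary choices for the example.

```python
import random

def sample_coeff(u):
    # toy scalar "coefficient" as a function of a uniform draw u in [0, 1)
    return 1.0 / (1.0 + u)

def plain_mc(n, rng):
    # plain Monte Carlo: n independent draws
    vals = [sample_coeff(rng.random()) for _ in range(n)]
    mean = sum(vals) / n
    var = sum((v - mean) ** 2 for v in vals) / (n - 1)
    return mean, var

def antithetic_mc(n, rng):
    # antithetic variates: pair each draw u with 1 - u and average the pair;
    # for a monotone integrand the two pair members are negatively correlated,
    # so the per-sample variance drops
    vals = []
    for _ in range(n // 2):
        u = rng.random()
        vals.append(0.5 * (sample_coeff(u) + sample_coeff(1.0 - u)))
    m = len(vals)
    mean = sum(vals) / m
    var = sum((v - mean) ** 2 for v in vals) / (m - 1)
    return mean, var
```

Both estimators target the same mean (ln 2 for this integrand), but the antithetic per-sample variance is markedly smaller.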
Touzeau, Josselyn. « Approches numérique multi-échelle/multi-modèle de la dégradation des matériaux composites ». PhD thesis, Ecole Centrale Paris, 2012. http://tel.archives-ouvertes.fr/tel-00837874.
Dabonneville, Felix. « Développement d'une méthode numérique multi-échelle et multi-approche appliquée à l'atomisation ». Thesis, Normandie, 2018. http://www.theses.fr/2018NORMR018/document.
The purpose of this work has been to develop a multi-approach and multi-scale numerical method applied to the simulation of two-phase flows involving immiscible, incompressible and isothermal fluids, and more specifically primary atomization. This method is based on a coupled approach between a refined local mesh and a coarser global mesh. The coupling is explicit with refinement in time, i.e. each domain evolves following its own time step. In order to account for the different scales in space and time of the atomization process, this numerical method couples two different two-phase numerical methods: an interface-capturing method in the refined local domain near the injector and a sub-grid method in the coarser global domain in the dispersed spray region. The code has been developed and parallelized in the OpenFOAM software. It is able to reduce significantly the computational cost of a large eddy simulation of a coaxial atomization, while predicting the experimental data with accuracy.
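The explicit coupling with refinement in time described in this abstract (each domain advancing with its own time step) can be sketched on a toy problem; the decay equation, forward-Euler stepping and coefficients below are illustrative assumptions, not the thesis's solver.

```python
import math

def decay_step(y, dt, lam=1.0):
    # one forward-Euler step for y' = -lam * y (stand-in for a flow solver step)
    return y * (1.0 - lam * dt)

def coupled_step(y_global, y_local, dt, substeps):
    # coarse global domain: one step of size dt;
    # refined local domain: `substeps` sub-steps of size dt/substeps,
    # so each domain evolves following its own time step
    y_global = decay_step(y_global, dt)
    sub_dt = dt / substeps
    for _ in range(substeps):
        y_local = decay_step(y_local, sub_dt)
    return y_global, y_local
```

On this toy equation the subcycled "local" solution lands closer to the exact value exp(-dt) than the single coarse step, which is the point of refining in time where the physics demands it.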
Fall, Mandiaye. « Modélisation multi-échelle de systèmes nanophotoniques et plasmoniques ». Thesis, Aix-Marseille, 2013. http://www.theses.fr/2013AIXM4777/document.
Nanophotonic structures are generally simulated by volume methods, such as the finite-difference time-domain (FDTD) method or the finite element method (FEM). However, for large structures, or metallic plasmonic structures, the memory and computation time required can increase dramatically and make proper simulation infeasible. Surface methods, like the boundary element method (BEM), have been developed to reduce the number of mesh elements. These methods consist in expressing the electromagnetic field in the whole space as a function of electric and magnetic currents at the surface of the scatterers. Combined with the fast multipole method (FMM), which enables a huge acceleration of the calculation of interactions between distant mesh elements, very large systems can thus be handled. What we performed is the development of an FMM on a new BEM formalism, based on scalar and vector potentials instead of electric and magnetic currents, for the first time to our knowledge. This method was shown to enable accurate simulation of metallic plasmonic systems, while providing a significant reduction of computational requirements compared to BEM alone. Several thousand unknowns could be handled on a standard computer. More complex nanophotonic systems have been simulated, such as a plasmonic lens consisting of a collection of gold nanorods.
Alhammoud, Bahjat. « Circulation générale océanique et variabilité à méso-échelle en Méditerranée Orientale : approche numérique ». PhD thesis, Aix-Marseille 2, 2005. http://pastel.archives-ouvertes.fr/pastel-00001798.
Hautefeuille, Martin. « Modélisation numérique des matériaux hétérogènes : une approche EF multi-échelle et orientée composant ». Compiègne, 2009. http://www.theses.fr/2009COMP1802.
Concrete-like materials display a matrix/inclusions heterogeneous mesostructure visible to the naked eye. In this PhD thesis, an integrated multiscale finite-element-based strategy is proposed, which carries out structural-level computations simultaneously with mesoscale ones that enrich the macroscale behavior. This method aims at describing global structural collapse, accounting for complex failure mechanisms at their proper scales of occurrence. The proposed computational approach derives from non-overlapping domain decomposition techniques. Each element of a structure discretization receives a finer description of the underlying mesostructure. Localized Lagrange multipliers ensure a dual compatibility between the macroscale and mesoscale displacements. A dedicated parallel software architecture has been implemented using the middleware CTL (Component Template Library) developed at TU Braunschweig. A lattice meso-model has been employed to describe the fine scale. Each truss element is provided with two kinematic enrichments. Such a model is able to account for the heterogeneous phase arrangement and the quasi-brittle behavior of such materials.
Nezamabadi, Saeid. « Méthode asymptotique numérique pour l'étude multi échelle des instabilités dans les matériaux hétérogènes ». Thesis, Metz, 2009. http://www.theses.fr/2009METZ046S/document.
The multiscale modelling of heterogeneous materials is a challenge in computational mechanics. In the nonlinear case, the effective properties of heterogeneous materials cannot be obtained by the techniques used for linear media because the superposition principle is no longer valid. Hence, in the context of the finite element method, an alternative to meshing the whole structure, including all heterogeneities, is the multiscale finite element method (FE2). These techniques have many advantages, such as taking into account large deformations at the micro and macro scales, the nonlinear constitutive behaviors of the material, and microstructure evolution. The nonlinear problems at the micro and macro scales are often solved by classical Newton-Raphson procedures, which are generally suitable for solving nonlinear problems but have difficulties in the presence of instabilities. In this thesis, the combination of the multiscale finite element method (FE2) and the asymptotic numerical method (ANM), called Multiscale-ANM, yields an effective numerical technique for dealing with instability problems in heterogeneous materials. These instabilities can occur at both the micro and macro levels. Different classes of material constitutive relations have been implemented within our procedure. To improve the conditioning of the multiscale problem, a second-order homogenization technique was also adapted in the framework of the Multiscale-ANM technique. Furthermore, to reduce the computational time, some techniques have been proposed in this work.
Aoubiza, Boujemâa. « Homogénéisation d'un composite multi-échelle : application à une modélisation numérique de l'os haversien compact ». Besançon, 1991. http://www.theses.fr/1991BESA2051.
Moulin, Antoine. « Etude de la plasticité du silicium à une échelle mésoscopique par simulation numérique 3D ». Châtenay-Malabry, Ecole centrale de Paris, 1997. http://www.theses.fr/1997ECAP0542.
Texte intégralCasella, Elisa. « Simulations numériques des processus de méso-échelle en Mer Ligure ». Phd thesis, Université du Sud Toulon Var, 2009. http://tel.archives-ouvertes.fr/tel-00533713.
Biaou, Angelbert. « De la méso-échelle à la micro-échelle : désagrégation spatio-temporelle multifractale des précipitations ». PhD thesis, École Nationale Supérieure des Mines de Paris, 2004. http://pastel.archives-ouvertes.fr/pastel-00001573.
Texte intégralLabit, Benoit. « Transport de chaleur électronique dans un tokamak par simulation numérique directe d'une turbulence de petite échelle ». Phd thesis, Université de Provence - Aix-Marseille I, 2002. http://tel.archives-ouvertes.fr/tel-00261562.
The thesis presented here seeks to determine the relevance of a nonlinear, electromagnetic, three-dimensional fluid model, based on a particular instability, for describing heat losses through the electron channel, and to determine how the associated turbulent transport depends on dimensionless parameters, including β and ρ*. The chosen instability is an interchange instability driven by the electron temperature gradient (Electron Temperature Gradient (ETG) driven turbulence). This nonlinear model is built from the Braginskii equations. The simulation code developed is global in the sense that an incoming heat flux is imposed, leaving the gradients free to evolve.
From the nonlinear simulations, we identified three main features of the fluid ETG model: the turbulent heat transport is essentially electrostatic; the potential and pressure fluctuations form radially elongated structures; the observed transport level is much lower than that measured experimentally.
The study of the dependence of heat transport on the ratio of kinetic to magnetic pressure showed a weak impact of this parameter, thereby contradicting Ohkawa's empirical law. In contrast, the important role of the normalized electron Larmor radius in heat transport was shown unambiguously: the confinement time is inversely proportional to this parameter. Finally, a weak dependence of the turbulent heat transport on the magnetic shear and the inverse aspect ratio was demonstrated.
Although the transport level observed in the simulations is lower than that measured experimentally, we attempted a direct comparison with a Tore Supra discharge. This tokamak is particularly well suited to studying electron heat losses. Keeping most parameters of a well-documented Tore Supra discharge, the nonlinear simulation yields a temperature-gradient threshold close to the experimental value. The observed transport level is lower than the measured transport by a factor of about fifty. One important parameter that could not be matched is the normalized Larmor radius.
The limitation in ρ* will have to be overcome in order to confirm these results. Finally, a rigorous comparison with gyrokinetic simulations will make it possible to rule the ETG instability in or out as an explanation for the observed heat losses.
Keywords: thermonuclear fusion, tokamak, plasma, ETG turbulence, numerical simulations
Minh-Hoang, Le. « Modélisation multi-échelle et simulation numérique de l'érosion des sols de la parcelle au bassin versant ». PhD thesis, Université d'Orléans, 2012. http://tel.archives-ouvertes.fr/tel-00780648.
Le, Minh Hoang. « Modélisation multi-échelle et simulation numérique de l'érosion des sols de la parcelle au bassin versant ». PhD thesis, Université d'Orléans, 2012. http://tel.archives-ouvertes.fr/tel-00838947.
Texte intégralLe, Minh Hoang. « Modélisation multi-échelle et simulation numérique de l’érosion des sols de la parcelle au bassin versant ». Thesis, Orléans, 2012. http://www.theses.fr/2012ORLE2059/document.
The overall objective of this thesis is to study multiscale modelling and to develop a suitable method for the numerical simulation of soil erosion at the catchment scale. After reviewing the various existing models, we derive an analytical solution for the non-trivial coupled system modelling bedload transport. Next, we study the hyperbolicity of the system with different sedimentation laws found in the literature. Regarding the numerical method, we present the validity domain of the time-splitting method, which consists in solving separately the shallow-water system (modelling the flow routing) during a first time step with a fixed bed, and then updating the topography in a second step using the Exner equation. On the modelling of transport in suspension at the plot scale, we present a system coupling the mechanisms of infiltration, runoff and transport of several classes of sediment. Numerical implementation and validation tests of a high-order well-balanced finite volume scheme are also presented. Then we discuss the model application and calibration using experimental data on ten 1 m2 plots of crusted soil in Niger. In order to achieve the simulation at the catchment scale, we develop a multiscale modelling in which we integrate the inundation ratio into the evolution equations to take into account the small-scale effect of the microtopography. Concerning the numerical method, we study two well-balanced schemes: the first is a Roe scheme based on a path-conservative formulation, and the second uses a generalized hydrostatic reconstruction. Finally, we present a first model application with experimental data from the Ganspoel catchment, where the use of parallel computing is also motivated.
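The time-splitting structure described above (a flow step on a frozen bed, then an Exner bed update) can be sketched in a few lines; the crude upwind discretization, the Grass-type flux q_s = A*u^3, and all coefficients below are illustrative assumptions for the sketch, not the schemes analyzed in the thesis.

```python
def hydro_step(h, u, dt, dx):
    # crude first-order upwind update of the water depth with the bed frozen
    # (structure only: a real shallow-water solver would also update momentum
    # and treat the source terms in a well-balanced way)
    h_new = h[:]
    for i in range(1, len(h) - 1):
        h_new[i] = h[i] - dt / dx * (h[i] * u[i] - h[i - 1] * u[i - 1])
    return h_new

def exner_step(zb, u, dt, dx, A=0.005, porosity=0.4):
    # Exner equation (1 - p) * dzb/dt + dq_s/dx = 0 with a Grass-type
    # sediment flux q_s = A * u**3 (A and p are illustrative values)
    qs = [A * ui ** 3 for ui in u]
    zb_new = zb[:]
    for i in range(1, len(zb) - 1):
        zb_new[i] = zb[i] - dt / ((1.0 - porosity) * dx) * (qs[i] - qs[i - 1])
    return zb_new

def split_step(h, u, zb, dt, dx):
    # first sub-step: flow routing on a fixed bed;
    # second sub-step: topography update via the Exner equation
    h = hydro_step(h, u, dt, dx)
    zb = exner_step(zb, u, dt, dx)
    return h, zb
```

A quick sanity check of the splitting: a uniform state (constant depth, constant velocity, flat bed) must be left unchanged by both sub-steps.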
Labit, Benoît. « Transport de chaleur électronique dans un tokamak par simulation numérique directe d'une turbulence de petite échelle ». Aix-Marseille 1, 2002. http://www.theses.fr/2002AIX11052.
Texte intégralMontroty, Rémi. « Impact d'une assimilation de données à méso-échelle sur la prévision cyclonique ». Toulouse 3, 2008. http://thesesups.ups-tlse.fr/782/.
As part of the responsibilities of the RSMC of La Reunion, and in line with the research topics of the LaCy and the CNRM-GAME, this PhD thesis investigates approaches that would help better describe and predict tropical cyclones in a mesoscale model over the Indian Ocean. Two main topics were investigated: the use of pseudo-observations of total column water vapour (TCWV), derived from the ECMWF analyses in cloudy/rainy areas, jointly with a 3D wind bogus to constrain the position, size and intensity of tropical cyclones; and the use of error variances "of the day" in the data assimilation algorithm. We are equally interested in the position and intensity analyses and forecasts: scores and diagnostics therefore target those two quantities. Since tropical cyclones exhibit large circular cloudy/rainy areas devoid of observations that can be assimilated, we look at the impact of assimilating TCWV pseudo-observations in those areas. This data is expected to bring new information to the data assimilation system and thus help constrain the analysis. The TCWV pseudo-observations in cloudy/rainy areas are derived from an algorithm built by correlating the ECMWF 1D-VAR TCWV analyses with SSM/I brightness temperatures over the southwest Indian Ocean basin. The TCWV data are then assimilated in a 5-week study during 2007, covering three intense cyclones over the basin. The TCWV assimilation is done in 3D-VAR mode in the ALADIN Reunion model and is completed by the use of a 3D wind bogus developed internally at the CRC. The impacts are very positive in terms of direct position error reduction: at analysis time, the error was lowered by 75%, and through this better positioning a positive impact was also seen, with statistical significance, in the forecasts up to 24 h.
The TCWV data impact is most notable in terms of structural improvement: when compared to TMI instantaneous rain rates, the experiment that assimilated both the 3D wind bogus and the TCWV data stands out as reproducing the most realistic cyclonic features. The radius of maximum winds, the pattern of spiral rainbands and the general asymmetries of the tropical cyclones are better described thanks to the cycling of this data, and are in better agreement with the TMI observations. In order to explore the impact of downscaling from ALADIN Reunion, a version of the high-resolution model AROME has been implemented over a part of the southwest Indian Ocean, covering Reunion island. The sharper, more realistic orography of the AROME Reunion model at 4 km horizontal resolution makes it possible to better capture cyclonic precipitation.
Bernard, Manuel. « Approche multi-échelle pour les écoulements fluide-particules ». PhD thesis, Toulouse, INPT, 2014. http://oatao.univ-toulouse.fr/12239/1/Bernard.pdf.
Lapointe-Thériault, David. « Vers une résolution numérique du vent dans la couche limite atmosphérique à micro-échelle avec la méthode de simulation des grandes échelles (LES) sous OpenFOAM ». Mémoire, École de technologie supérieure, 2012. http://espace.etsmtl.ca/1123/1/LAPOINTE%2DTH%C3%89RIAULT_David.pdf.
Texte intégralChiapetto, Monica. « Modélisation numérique de l’évolution nanostructurale d’aciers ferritiques sous irradiation ». Thesis, Lille 1, 2017. http://www.theses.fr/2017LIL10070.
We developed object kinetic Monte Carlo (OKMC) models that proved able to predict the nanostructure evolution under neutron irradiation in both RPV and F/M steels. These were modelled, respectively, in terms of Fe-C-MnNi and Fe-C-Cr alloys, but the model was also validated against data obtained on a real RPV steel coming from the surveillance programme of the Ringhals Swedish nuclear power plant. The effects of the substitutional solutes of interest were introduced in our OKMC model under the simplifying assumption of a "grey alloy" scheme, i.e. they were not explicitly introduced in the model, which therefore cannot describe their redistribution under irradiation; rather, their effect was translated into modified parameters for the mobility of defect clusters. The possible origin of low-temperature radiation hardening (and subsequent embrittlement) was also investigated, and the models strongly supported the hypothesis that solute clusters segregate on immobile interstitial loops, which therefore act as heterogeneous nucleation sites for the formation of the NiSiPCr- and MnNi-enriched cluster populations observed experimentally with atom probe tomography in F/M and RPV steels, respectively. In other words, the so-called matrix damage would be intimately associated with solute atom clusters and precipitates, which increase its stability and reduce its mobility; their ultimate effect is reflected in an alteration of the macroscopic mechanical properties of the investigated alloys. Throughout our work, the results obtained have been systematically validated against existing experimental data, in a process of continuous improvement of the physical hypotheses adopted.
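At the core of an object kinetic Monte Carlo model of this kind is an event-selection loop; a minimal sketch of the classical residence-time algorithm is shown below, with arbitrary illustrative rates (the actual event catalogue and parameterization of the thesis are not reproduced here).

```python
import math
import random

def kmc_step(rates, rng):
    # residence-time kinetic Monte Carlo step: choose an event with
    # probability proportional to its rate, then advance the clock by an
    # exponentially distributed waiting time with total rate R
    R = sum(rates)
    r = rng.random() * R
    acc = 0.0
    event = len(rates) - 1
    for i, rate in enumerate(rates):
        acc += rate
        if r < acc:
            event = i
            break
    # 1 - u lies in (0, 1], so the logarithm is always defined
    dt = -math.log(1.0 - rng.random()) / R
    return event, dt
```

In a full OKMC code each event would move, emit or absorb a defect object; a "grey alloy" treatment would enter through the rates themselves, e.g. as reduced mobilities for the affected clusters.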
Hochet, Bertrand. « Conception de VLSI : applications au calcul numérique ». Grenoble INPG, 1987. http://www.theses.fr/1987INPG0005.
Hamdi-Larbi, Olfa. « Étude de la Distribution, sur Système à Grande Échelle, de Calcul Numérique Traitant des Matrices Creuses Compressées ». PhD thesis, Université de Versailles-Saint Quentin en Yvelines, 2010. http://tel.archives-ouvertes.fr/tel-00693322.
Texte intégralSuarez, Atias Léandro. « La couche limite et l'hydrodynamique 2D à grande échelle de la zone de surf : une étude numérique ». Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENI033/document.
This work is about the hydrodynamic processes in the nearshore zone. They are of great importance to estimate the overall dynamics of the coastal zone. This thesis is divided into two main parts; the first one investigates the coastal bottom boundary layer induced by the interaction of the waves and the bottom when approaching the coast; the second one is about the evolution of the mean circulation and vorticity induced by an inhomogeneity in the bathymetry or the wave forcing. A turbulent boundary layer numerical model has been developed and used to simulate the evolution of the oscillating boundary layers under non-linear waves, of a flume experiment at the Laboratoire des Ecoulements Géophysiques et Industriels (LEGI) in Grenoble, France. The experimental instantaneous velocity profiles and still bed positions allow defining the non-linear velocity distributions induced by the waves within the boundary layer. The numerical model, coupled with an ad-hoc modeling of the mobile bed motion, is able to reproduce the vertical distribution of the non-linearities, and also indicates that the vertical diffusion observed experimentally is mainly caused by the mobile bed motion induced by the passing waves. A 2D depth-averaged nonlinear shallow water numerical model is used to study the circulation and vorticity in the nearshore zone. This model is validated on a mobile bed experiment in the wave basin of the Laboratoire Hydraulique de France (ARTELIA). The formation of rip currents is forced by a damped wave forcing in the middle of the wave basin. The numerical model is validated with free surface and velocity measurements, and by the circulation and vorticity. Using the potential vorticity balance as a diagnosis tool and with a monochromatic wave forcing, an equilibrium between the vorticity generation and advection is observed in the nearshore zone.
Hamdi-Larbi, Olfa. « Etude de la distribution, sur système à grande échelle, de calcul numérique traitant des matrices creuses compressées ». Versailles-St Quentin en Yvelines, 2010. http://www.theses.fr/2010VERS0018.
Several scientific applications use kernels performing computations on large sparse matrices. For reasons of efficiency in time and space, specific compression formats are used for storing such matrices. Most sparse scientific computations address sparse linear algebra problems; two fundamental problems are often considered, i.e. linear system resolution (LSR) and matrix eigenvalue/eigenvector computation (EVC). In this thesis, we address the problem of distributing, on a Large Scale Distributed System (LSDS), computations performed in iterative methods for both LSR and EVC. The sparse matrix-vector product (SMVP) constitutes a basic kernel in such iterative methods. Thus, our problem reduces to the study of SMVP distribution on an LSDS. In principle, three phases are required for achieving this kind of application, namely pre-processing, processing and post-processing. In the first phase, we proceed to the optimization of four versions of the SMVP algorithm corresponding to four specific matrix compression formats, then study their performance on sequential target machines. In addition, we focus on the study of load balancing in the procedure of distributing the data (i.e. the sparse matrix rows) on an LSDS. The processing phase consists in validating the previous study by a series of experiments on a volunteer distributed system we deployed using the XtremWeb-CH middleware. The post-processing phase consists in interpreting the experimental results previously obtained in order to draw adequate conclusions.
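The SMVP kernel discussed above can be illustrated for one standard compression format, CSR (compressed sparse row); the abstract does not name the four formats it considers, so CSR and the toy matrix below are illustrative choices only.

```python
def csr_matvec(values, col_idx, row_ptr, x):
    # y = A @ x with A in CSR format: `values` holds the nonzeros row by row,
    # `col_idx` their column indices, and `row_ptr` the start of each row
    # within `values` (len(row_ptr) == number of rows + 1)
    n_rows = len(row_ptr) - 1
    y = [0.0] * n_rows
    for i in range(n_rows):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

# A = [[4, 0, 1],
#      [0, 0, 2],
#      [3, 0, 0]]
values = [4.0, 1.0, 2.0, 3.0]
col_idx = [0, 2, 2, 0]
row_ptr = [0, 2, 3, 4]
y = csr_matvec(values, col_idx, row_ptr, [1.0, 1.0, 1.0])  # -> [5.0, 2.0, 3.0]
```

Distributing this kernel by rows, as the abstract suggests, amounts to giving each node a contiguous slice of `row_ptr` (with the matching slices of `values` and `col_idx`) plus a copy of `x`.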
Aymard, Benjamin. « Simulation numérique d'un modèle multi-échelle de cinétique cellulaire formulé à partir d'équations de transport non conservatives ». Thesis, Paris 6, 2014. http://www.theses.fr/2014PA066254/document.
The thesis focuses on the numerical simulation of a biomathematical, multiscale model explaining the phenomenon of selection within the population of ovarian follicles, grounded on a cellular basis. The PDE model consists of a large-dimension hyperbolic quasilinear system governing the evolution of cell density functions for a cohort of follicles (around twenty in practice). The equations are coupled in a nonlocal way by control terms involving moments of the solution, defined on either the mesoscopic or macroscopic scale. Three chapters of the thesis, presented in the form of articles, develop the method used to simulate the model numerically. The numerical code is implemented on a parallel architecture. The PDEs are discretized with a finite volume scheme on an adaptive mesh driven by a multiresolution analysis. Flux discontinuities at the interfaces between different cellular states require a specific treatment to be compatible with the high-order numerical scheme and the mesh refinement. A chapter of the thesis is devoted to the calibration method, which translates biological knowledge into constraints on the parameters and model outputs. The multiscale character is crucial, since parameters are used at the microscopic level in the equations governing the evolution of the cell density within each follicle, whereas quantitative biological data are rather available at the mesoscopic and macroscopic levels. The last chapter of the thesis focuses on the analysis of the computational performance of the parallel code, based on statistical methods inspired by the field of uncertainty quantification.
Cavallaro, Gabriel. « Modélisation multi-échelle et analyse des roulements à billes à bagues déformables ». Lyon, INSA, 2004. http://www.theses.fr/2004ISAL0005.
Rolling bearing models are usually expressed under the classical hypothesis of rigid rings: only the internal clearance and the contact deformation drive the internal equilibrium. Current space and weight constraints lead to the development of new bearings and shafts with thin sections, while the load intensity increases. Consequently, the structure, including the rings, deforms, and this structural deformation modifies the internal equilibrium. A multi-scale approach is adopted to include this deformation in an analytical computer code. This code gives a precise description of the internal equilibrium and the contact deformation, and is able to interact with a finite element code to include the structural deformation. The analysis of the contribution of this deformation shows an evolution of the internal clearance. Consequently, the internal equilibrium (contact angles and loads) and the mechanical bearing characteristics are substantially modified.
Bourel, Christophe. « Étude mathématique et numérique de cristaux photoniques fortement contrastés ». PhD thesis, Université du Sud Toulon Var, 2010. http://tel.archives-ouvertes.fr/tel-00562138.
Caian, Mihaéla. « Maille variable ou domaine limité : quelle solution choisir pour la prévision à échelle fine ? » Toulouse 3, 1996. http://www.theses.fr/1996TOU30226.
Texte intégralRolland, Joran. « Etude numérique à petite et grande échelle de la bande laminaire-turbulente de l'écoulement de Couette plan transitionnel ». Phd thesis, Ecole Polytechnique X, 2012. http://pastel.archives-ouvertes.fr/pastel-00755414.
Pioch, Claude. « Evaluation prospective de l'intensité de la douleur aux urgences : corrélation entre l'échelle numérique et l 'échelle visuelle analogique ». Montpellier 1, 2000. http://www.theses.fr/2000MON11052.
Texte intégralAlimi, Amel. « Analyse experimentale et numérique multi-échelle du comportement mécanique de l'acier X40CrMoV5-1 : application au matriçage à chaud ». Thesis, Nantes, 2016. http://www.theses.fr/2016NANT4043/document.
Texte intégral
During hot forming processes, tools are subjected to severe, complex and variable loadings. Acting in synergy, these loadings induce the degradation of tooling through various damage processes that depend on several factors, including the level of loading, the microstructure of the materials in contact and the residual stresses in the dies. In order to address this set of problems, it is particularly important to study the mechanical behaviour of hot forming tooling material. This study is based on different multi-scale experimental and numerical approaches. To identify damage modes, a damaged hot working tool was investigated with SEM observations, analysis of residual stresses by XRD and hardness measurements. This expertise highlights the complexity and multi-scale nature of the damage. In view of these results, a first phenomenological approach was developed to predict the map of thermal and mechanical stresses in the tool. A multi-scale model of the cyclic mechanical behaviour of the X40CrMoV5-1 steel tool is developed, first by adopting the Chaboche-Lemaitre model and then by employing a self-consistent model. A comparison of the results obtained from the different approaches investigated in the thesis is established
Tine, Samir. « Evaluation d'un stimulus de pression par des échelles verbales simples, numériques et visuelles analogiques au sein d'une population âgée hospitalisé : implications pour l'évaluation de la douleur en gériatrie ». Paris 13, 2004. http://www.theses.fr/2004PA130027.
Texte intégral
Henon, Joseph. « Elaboration de matériaux poreux géopolymères à porosité multi-échelle et contrôlée ». Limoges, 2012. https://aurore.unilim.fr/theses/nxfile/default/2e0cd75e-4baa-4db6-980a-67278d007105/blobholder:0/2012LIMO4019.pdf.
Texte intégral
This work focuses on the preparation, characterization and control of the porosity of geopolymer foams, synthesized by mixing metakaolin, an alkali silicate solution, alkali hydroxide, and silica fume as the pore forming agent. This mixture results in a foam in which hydrogen gas is produced continuously in an evolving viscous gel. The control of porosity, given the very high pH, requires establishing an equilibrium between the kinetics of the polycondensation reactions (hardening) and the kinetics of gas generation. The influence of different parameters is studied through the characterization of the resulting porous network. The thermal conductivity of the homogeneous samples is measured with a fluxmeter and also with a hot-wire method. The values obtained are then discussed in relation to the microstructure and to relevant analytical models from the literature. An inverse numerical approach is used to determine the thermal conductivity of the skeleton of the foam, λs, since it is difficult to prepare a material with a low pore volume fraction from the same composition. A finite element calculation, coupled with a homogenization method, is applied to Representative Volume Elements constructed from the experimental data. The value of λs is thus estimated to lie between 0.98 and 1.12 W.m-1.K-1. The foams have pore volume fractions between 65 and 85%, corresponding to thermal conductivity values between 0.12 and 0.35 W.m-1.K-1, yielding a good material for thermal insulation
Konaté, Aboubacar. « Méthode multi-échelle pour la simulation d'écoulements miscibles en milieux poreux ». Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066006/document.
Texte intégral
This work deals with the study and the implementation of a multiscale finite element method for the simulation of miscible flows in porous media. The definition of the multiscale basis functions is based on the idea introduced by F. Ouaki. The novelty of this work lies in the combination of this multiscale approach with Discontinuous Galerkin methods (DG) so that these new finite elements can be used on nonconforming meshes composed of cells with various shapes. We first recall the basics of DG methods and their application to the discretisation of a convection-diffusion equation that arises in the flow problem considered in this work. After establishing the existence and uniqueness of a solution to the continuous problem, we prove again the convergence of DG methods towards this solution by establishing an a priori error estimate. We then introduce the nonconforming multiscale finite element method and explain how it can be implemented for this convection-diffusion problem. Assuming that the boundary conditions and the parameters of the problem are periodic, we prove a new a priori error estimate for this method. In a second part, we consider the whole flow problem where the equation, studied in the first part of that work, is coupled and simultaneously solved with Darcy equation. We introduce various synthetic test cases which are close to flow problems encountered in geosciences and compare the solutions obtained with both DG methods, namely the classical method based on the use of a single mesh and the one studied here. For the resolution of the cell problems, we propose new boundary conditions which, compared to classical linear conditions, allow us to better reproduce the variations of the solutions on the interfaces of the coarse mesh.
The results of these tests show that the multiscale method enables us to compute solutions close to those obtained with DG methods on a single mesh, while significantly reducing the size of the linear system that has to be solved at each time step
Mallet, Jessy. « Contribution à la modélisation et à la simulation numérique multi-échelle du transport cinétique électronique dans un plasma chaud ». Thesis, Bordeaux 1, 2012. http://www.theses.fr/2012BOR14584/document.
Texte intégral
In plasma physics, the transport of electrons can be described from a kinetic point of view or from a hydrodynamical point of view. Classically in kinetic theory, a Fokker-Planck equation coupled with Maxwell equations is used to describe the evolution of electrons in a collisional plasma. More precisely, the solution of the kinetic equations is a non-negative distribution function f specifying the density of particles as a function of the particle velocity, time and position in space. In order to approximate the solution of such problems, many computational methods have been developed. Here, a deterministic method is proposed in a planar geometry. This method is based on different high-order numerical schemes. Each deterministic scheme used possesses fundamental properties such as conservation of the particle flux, preservation of the positivity of the distribution function and conservation of energy. However, the kinetic computation of this accurate method is too expensive for practical computations, especially in multi-dimensional space. To reduce the computational time, the plasma can be described by a hydrodynamic model. However, for the new high-energy target drivers, the kinetic effects are too important to be neglected, so kinetic calculations cannot simply be replaced by the usual macroscopic Euler models. This is why an alternative approach is proposed, considering an intermediate description between the fluid and kinetic levels. To describe the transport of electrons, the new reduced kinetic model M1 proposed here is based on a moment approach for the Maxwell-Fokker-Planck equations. This moment model integrates the electron distribution function over the propagation direction and retains only the energy of the particles as kinetic variable. The velocity variable is written in spherical coordinates and the model is obtained by considering the system of moments with respect to the angular variable.
The closure of the moment system is obtained under the assumption that the distribution function is a minimum-entropy function. This model is proved to satisfy fundamental properties such as the non-negativity of the distribution function, conservation laws for the collision operators and entropy dissipation. An entropic discretization in the velocity variable is then proposed on the semi-discrete model. Moreover, the M1 model can be generalized to the MN model by considering N given moments. The resulting N-moment model also preserves fundamental properties such as conservation laws and entropy dissipation. The associated semi-discrete scheme is shown to preserve the conservation properties and the entropy decay
Du, Shuimiao. « Investigations numériques multi-échelle et multi-niveau des problèmes de contact adhésif à l'échelle microscopique ». Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLC080.
Texte intégral
The ultimate goal of this work is to provide computationally efficient and robust methodologies for the modelling and solution of a class of Lennard-Jones (LJ) potential-based adhesive contact problems. To alleviate the theoretical and numerical pitfalls of the LJ model related to its undefined and unbounded characteristics, a model-adaptivity method is proposed to solve the pure-LJ problem as the limit of a sequence of adaptively constructed multilevel problems. Each member of the sequence consists of a model partition between the microscopic LJ model and the macroscopic Signorini model. The convergence of the model-adaptivity method is proved mathematically under some physical and realistic assumptions. In addition, the asymptotic numerical method (ANM) is adapted to accurately track instabilities in soft contact problems. Both methods are incorporated in the Arlequin multiscale framework to achieve an accurate resolution at a reasonable computational cost. In the model-adaptivity method, to accurately capture the localization of the zones of interest (ZOI), a two-step strategy is suggested: a macroscopic resolution is used as a first guess of the ZOI localization, then the Arlequin method is applied there to achieve a fine-scale resolution. In the ANM strategy, the Arlequin method is also used to suppress numerical oscillations and improve accuracy
Vial, Grégory. « Analyse multi-échelle et conditions aux limites approchées pour un problème avec couche mince dans un domaine à coin ». Rennes 1, 2003. https://tel.archives-ouvertes.fr/tel-00005153.
Texte intégral
Couespel, Damien. « La désoxygénation de l'océan au cours du 21ème siècle : influence des processus de petite et moyenne échelle ». Electronic Thesis or Diss., Sorbonne université, 2018. http://www.theses.fr/2018SORUS097.
Texte intégral
The amount of oxygen in the ocean has decreased since the middle of the 20th century. According to climate projections, this will continue into the 21st century with effects on biogeochemical cycles, aquatic organisms and ecosystems. In the subsurface, deoxygenation is controlled by: 1) solubility, determining the amount of oxygen that can be dissolved, 2) respiration, using oxygen to remineralize organic matter and 3) surface/subsurface exchanges. These mechanisms are affected by climate change: 1) the solubility decreases as the temperature increases, 2) the production of organic matter at the surface decreases, thus decreasing the subsurface respiration and 3) the surface/subsurface exchanges are slowed down due to the increase in stratification. The relative contribution of each of these mechanisms to deoxygenation is still poorly understood. To estimate it, we calculated the transport of oxygen through the base of the mixed layer as well as the respiration under the mixed layer in a climate projection. Our results show that each mechanism contributes in equal proportion to deoxygenation. This result was obtained with a low resolution model. However, studies indicate that small-scale processes can influence the mechanisms controlling deoxygenation, but there is still no estimate of their effects. We have therefore developed an idealized configuration allowing us to perform climate change experiments that explicitly resolve these processes. In this framework, our results show that small-scale processes attenuate 1) deoxygenation and 2) the responses of the mechanisms involved
Chouikhi, Najib. « Production de biométhane à partir de biogaz par technologie de séparation par adsorption PSA : optimisation numérique par approche multi-échelle ». Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPAST043.
Texte intégral
As global interest in renewable energy intensifies, biogas production continues to grow as a clean, renewable source. Pressure Swing Adsorption (PSA) is considered one of the most interesting technologies for the valorization of biogas into biomethane. The great flexibility of the PSA process is linked in some way to its complexity, with several design and operating parameters controlling the performance of the separation unit. The identification of these parameters by an experimental approach is practically impossible, and a numerical study is essential for sizing the unit, designing the pressure cycle and identifying the optimal operating conditions before any experimental test. The general objective of the thesis was the development of simulation tools for a biomethane purification process using PSA technology. In a first stage, a simulation based on a one-dimensional non-isothermal dynamic model, in which the intragranular mass transfer kinetics was modelled using a double driving force (bi-LDF) approximation, was implemented. A carbon molecular sieve (CMS-3K) was selected; this adsorbent ensures a high kinetic selectivity of carbon dioxide with respect to methane (CH4). The optimized cycle, composed of five columns and fifteen steps including three equalization steps and purge gas recycling, allowed a CH4 recovery of 92% with a moderate specific energy consumption of 0.35 kWh/Nm3, while respecting the grid injection specifications (97% CH4 purity). The performance obtained is thus compatible with industrial operation. The development of a multidimensional (3D) and multi-scale (column/grain/crystal) numerical model would serve to evaluate the limits of the assumptions and correlations used in usual simulators. The first step consists in simulating the gas flow in an adsorbent bed with a realistic stacking.
Thus, an inert packed bed was numerically generated by DEM (discrete element modeling) calculation for a laboratory-size column. The use of OpenFOAM (a CFD software) made it possible to calculate the three-dimensional tracer gas flow in the column. In parallel, an experimental study of the breakthrough curves was carried out using a bed with the same dimensions and characteristics. The calculated and measured breakthrough times and dispersion-diffusion coefficients were similar. However, the simulation showed some local divergences in the tracer concentration in the column, due to meshing difficulties. The next step will consist in taking grain-fluid interactions into account by considering porous adsorbent grains
Gerandi, Guillaume. « Approches expérimentale et numérique multi-échelle pour modéliser le terme source de masse durant la dégradation du bois pour l’incendie ». Thesis, Corte, 2020. http://hal-univ-corse.archives-ouvertes.fr/view_by_stamp.php?&action_todo=view&id.
Texte intégral
This PhD work was carried out in order to better understand the thermal degradation mechanisms of fuels in fires. The aim was to study the thermal degradation of wood plates using a multi-scale approach. The thermal degradation of two kinds of wood, white oak (Quercus alba) and common eucalyptus (Eucalyptus globulus), was first investigated at the matter scale, where samples of small mass were heated in a thermogravimetric analyzer. Results showed that the thermal degradation of these two kinds of wood could be represented by four steps. From these experimental observations, four kinetic mechanisms were developed: the constituent mechanism, the lumped mechanism, the active mechanism and the simplified mechanism. The kinetic parameters were determined by optimization using the gradient descent algorithm. The simulations showed that all mechanisms were capable of representing the mass loss of oak and eucalyptus at the different heating rates investigated. The best performance was obtained by the lumped mechanism and the worst by the simplified mechanism. The thermal degradation of these two kinds of wood was also investigated at the material scale using a cone calorimeter with thermally thin and thick wood plates. Heat flux densities varying between 18 and 28.5 kW/m² were imposed at the top of the fuel sample in order to avoid the auto-ignition of the wood. Two boundary conditions were imposed at the back face of the wood plates. The wood temperature was recorded by thermocouples and an infrared camera. These measurements showed that the higher the heat flux, the faster the mass loss and the temperature rise. Moreover, the char oxidation revealed a two-dimensional front that spread across the surface of the wood plates over time. The thermal degradation of the wood was finally studied numerically.
Using the thermally thin wood plates and the experimental temperature field, in order to avoid the evaluation of the thermal properties, the mechanisms developed at the matter scale were validated. At this scale, the performances of the different mechanisms are very close. A numerical study was performed with GPYRO in order to predict the temperature and the mass loss for the thin plates. The results were satisfactory, thanks to an optimization of the thermal properties of the wood and a convolution used to represent the two-dimensional phenomenon. For the thermally thick wood plates, the four-step kinetic mechanisms made it possible to represent the mass loss during the gasification stage but did not enable the prediction of the whole char oxidation stage
Atiezo, Megbeme Komla. « Modélisation multi-échelle de l'endommagement dynamique des matériaux fragiles sous chargements complexes ». Electronic Thesis or Diss., Université de Lorraine, 2019. http://www.theses.fr/2019LORR0212.
Texte intégral
In this thesis, the modeling of dynamic damage and failure of quasi-brittle materials is addressed using a two-scale approach based on the asymptotic homogenization method. Dynamic damage laws are obtained and numerical simulations of the associated behavior are performed for loadings corresponding to the three classical modes of Fracture Mechanics. The first dynamic damage model is proposed for the anti-plane shear loading case (Mode III). The damage evolution law is deduced from the Griffith energy criterion governing the dynamic propagation of microcracks, using the homogenization method based on asymptotic expansions. A study of the local macroscopic response predicted by the new model is conducted to highlight the influence of parameters, such as the size of the microstructure and the loading rate, on the evolution of damage. Results of macroscopic simulations of dynamic failure and the associated branching instabilities are presented and compared with experimental observations. The model is implemented in a Finite-Elements/Finite-Differences code in the Matlab software environment. Numerical simulations of rapid failure in opening mode (Mode I) using a dynamic damage law are presented subsequently. The model is deduced from a microscopic Griffith-type criterion describing the dynamic mode I propagation of microcracks, using the asymptotic homogenization approach. The resulting damage law is sensitive to the loading rate, which determines the macroscopic failure mode. Numerical simulations are performed in order to assess the model predictions, and the numerical results obtained are compared with experimental ones. Different tests, such as the compact tension and L-shaped specimen tests for concrete, the compact compression test for the PMMA brittle polymer and the Kalthoff impact test for limestone rocks, are considered in the numerical simulations.
These simulations show that the loading rate essentially determines the macroscopic crack trajectory and the associated branching patterns, in agreement with the experimental results. The law has been implemented in the finite element code Abaqus/Explicit via a VUMAT subroutine. A third damage model is obtained for the in-plane shear mode (Mode II) through a similar two-scale approach, by considering unilateral contact with friction conditions on the microcrack lips. A local study of the effects of normal compression and of the friction coefficient is carried out. The influence of the size of the microstructure and of the loading rate on damage evolution is analyzed at the local level. These studies are completed by structural failure simulations of PMMA specimens using the Abaqus/Explicit finite element software
Hammoud, Mohammad. « Modélisation et simulation numérique du couplage entre les milieux discrets et continus ». Phd thesis, Ecole Nationale des Ponts et Chaussées, 2009. http://tel.archives-ouvertes.fr/tel-00469475.
Texte intégral
Erez, Giacomo. « Modélisation du terme source d'incendie : montée en échelle à partir d'essais de comportement au feu vers l'échelle réelle : approche "modèle", "numérique" et "expérimentale" ». Electronic Thesis or Diss., Université de Lorraine, 2019. http://www.theses.fr/2019LORR0189.
Texte intégral
Numerical simulations can provide valuable information to fire investigators, but only if the fire source is precisely defined. This can be done through full- or small-scale testing. The latter is often preferred because such tests are easier to perform, but their results have to be extrapolated in order to represent full-scale fire behaviour. Various approaches have been proposed to perform this upscaling. An example is pyrolysis models, which involve a detailed description of condensed-phase reactions. However, these models are not yet ready for investigation applications. This is why another approach was chosen for the work presented here, employing a heat transfer model: the mass loss rate (MLR) of a material is predicted from a heat balance. This principle explains the two-part structure of this study: first, a detailed characterisation of heat transfers is performed; then, the influence of these heat transfers on thermal decomposition is studied. The first part focuses on thermal radiation because it is the leading mechanism of flame spread. Flame radiation was characterised for several fuels (kerosene, diesel, heptane, polyurethane foam and wood) and many fire sizes (from 0.3 m up to 3.5 m wide). Measurements included visible video recordings, multispectral opacimetry and infrared spectrometry, which allowed the determination of a simplified flame shape as well as its emissive power. These data were then used in a model (Monte-Carlo method) to predict incident heat fluxes at various locations. These values were compared to the measurements and showed good agreement, thus proving that the main phenomena governing flame radiation were captured and reproduced for all fire sizes. Because the final objective of this work is to provide a comprehensive fire simulation tool, an existing software package, the Fire Dynamics Simulator (FDS), was evaluated regarding its ability to model radiative heat transfers.
This was done using the data and knowledge gathered before, and showed that the code could predict incident heat fluxes reasonably well. It was thus decided to use FDS and its radiation model for the rest of this work. The second part aims at correlating thermal decomposition with thermal radiation. This was done by performing cone calorimeter tests on polyurethane foam and using the results to build a model which allows the prediction of MLR as a function of time and incident heat flux. Larger tests were also performed to study flame spread on top of and inside foam samples, through various measurements: video processing, temperature analysis and photogrammetry. The results suggest that using small-scale data to predict full-scale fire behaviour is a reasonable approach for the scenarios being investigated. It was thus put into practice using FDS, by modifying the source code to allow for the use of a thermal model, in other words defining the fire source based on the model predicting MLR as a function of time and incident heat flux. The results of the first simulations are promising, and predictions for more complex geometries will be evaluated to validate this method
Simone, Agnès. « Etude théorique et simulation numérique de la turbulence compressible en présence de cisaillement ou de variation de volume à grande échelle ». Ecully, Ecole centrale de Lyon, 1995. http://www.theses.fr/1995ECDL0031.
Texte intégral
Wagner, Sébastien. « Modélisation numérique de la dispersion à méso-échelle de polluants atmosphériques par emboîtement interactif de maillages : application à la zone ESCOMPTE ». Toulon, 2003. http://www.theses.fr/2003TOUL0004.
Texte intégral
This work is intended as a contribution to the numerical techniques used in air quality modelling. Our new multiscale model MAPOM (Multiscale Air Pollution Model) simulates mesoscale atmospheric pollutant dispersion. To increase the model accuracy, a new mesh embedding method, allowing grid interactions at the interface, has been implemented and tested. Mass conservation, positivity and monotonicity are ensured. MAPOM was validated on theoretical test cases. It was then applied over the area of Marseille - Etang de Berre (the ESCOMPTE domain). The model and its interactive mesh embedding algorithm proved efficient in handling difficult air quality problems at mesoscale over complex terrain. The memory optimization and the modular structure of this new model enable a flexible, fast and automatic management of the nested grids and of the physical and chemical processes
Pimenta, de Miranda Anne. « Application d'un modèle numérique de circulation générale océanique permettant la génération de turbulence de méso-échelle à l'étude de l'Atlantique sud ». Université Joseph Fourier (Grenoble), 1996. http://www.theses.fr/1996GRE10203.
Texte intégral
Declerck, Amandine. « Approche numérique et expérimentale pour une meilleure description physique des processus de subméso-échelle : Application à la mer Méditerranée nord-occidentale ». Thesis, Toulon, 2016. http://www.theses.fr/2016TOUL0014/document.
Texte intégral
The main objective of this work is to improve our knowledge of the impact of the mesoscale activity of the Northern Current (NC) off the Var coast on its downstream flow, and of the links between this boundary current and the coastal dynamics, particularly in a shallow, semi-enclosed bay: the bay of Hyères. To do so, two realistic high-resolution numerical configurations were used. Based on the NEMO code and nested with AGRIF, the first one covers the French Mediterranean coasts at 1.2 km and the second one covers the Var coasts with a spatial resolution of 400 m. Comparisons of the simulations with ocean observations (HF radar, ADCP, glider, satellite SST) confirm the realism of the configurations, and show the contribution of a 400 m spatial resolution to the simulated dynamics in the bay as well as to the NC and its downstream flow. Finally, a parametrization study of the horizontal advection terms and of the vertical mixing improves the representation of the downscaling impact in the studied area, particularly for the simulated dynamics in the semi-enclosed bay
Rakotoarivelo, Hoby. « Contributions au co-design de noyaux irréguliers sur architectures manycore : cas du remaillage anisotrope multi-échelle en mécanique des fluides numérique ». Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLE012/document.
Texte intégral
Numerical simulations of complex flows such as turbulence or shockwave propagation often require a huge computational time to achieve an industrial accuracy level. To speed up these simulations, two alternatives may be combined: mesh adaptation to reduce the number of required points on one hand, and parallel processing to absorb the computational workload on the other hand. However, efficiently porting adaptive kernels to massively parallel architectures is far from trivial. Indeed, each task related to a local vicinity needs to be propagated, and may in turn induce new conflicting tasks. Furthermore, these tasks are characterized by a low arithmetic intensity and a low reuse rate of already cached data. Besides, new kinds of accelerators have arisen in the high performance computing landscape, involving a number of algorithmic constraints. In a context of reduced electrical power consumption, they are characterized by numerous underclocked cores and a deep memory hierarchy involving expensive asymmetric memory accesses. Therefore, kernels must expose a high degree of concurrency and a high cached-data reuse rate to maintain optimal core efficiency. The real issue is how to structure these data-driven and data-intensive kernels to match these constraints. In this work, we provide an approach which reconciles locality constraints and convergence in terms of mesh error and quality. More than a parallelization, it relies on a redesign of the kernels guided by hardware constraints while preserving accuracy. In fact, we devise a set of locality-aware kernels for the anisotropic adaptation of triangulated differential manifolds, as well as a lock-free and massively multithreaded parallelization of irregular kernels. Although complementary, these axes come from distinct research themes mixing computer science and applied mathematics. Here, we aim to show that the devised schemes are as efficient as the state of the art on both axes
Fuchs, Frank. « Contribution à la reconstruction du bâti en milieu urbain, à l'aide d'images aériennes stéréoscopiques à grande échelle : étude d'une approche structurelle ». Paris 5, 2001. http://www.theses.fr/2001PA058004.
Texte intégral