Dissertations / Theses on the topic 'Modèles basés sur l'activité'
Consult the top 50 dissertations / theses for your research on the topic 'Modèles basés sur l'activité.'
Galassi, Luquezi Leonardo. "Including everyday mobility and activities with agent-based models when assessing noise exposure at the metropolitan scale." Electronic Thesis or Diss., Le Mans, 2025. http://www.theses.fr/2025LEMA1004.
Environmental noise is a major problem to be confronted in urban areas due to its detrimental health outcomes and its social and economic costs. According to the World Health Organization, approximately one-third of Europeans experience environmental noise-related disturbances during the daytime, with approximately one-fifth having their sleep disturbed by traffic noise alone. Noise health impacts are intensified by urbanization, which is associated with an increase in the diversity of noise sources. With the objective of establishing a legal framework to address environmental noise, the European Union passed the Environmental Noise Directive and formulated the Common Noise Assessment Methods based on numerical modeling. The European method has strong limitations: (1) the location of individuals is static, as it is assumed that individuals are fixed to their homes; (2) exposure is estimated from long-term noise doses, usually daily or annual energetic averages, neglecting detailed noise dynamics; (3) the assessment of the socio-economic groups most affected by noise is not promoted, and the exposed population is treated as homogeneous, without intra-population variability in noise reactions. In parallel, a new generation of studies in environmental health research, inspired by the theories of time geography and spatial ecology, advocates for the integration of mobility and everyday activities in public health research to study exposure phenomena as a complex spatio-temporal process. One solution for improving noise exposure assessments is the implementation of agent-based exposure frameworks. These frameworks simulate an everyday activity plan for each individual of a modeled population over the 24 hours of a typical working weekday. These individual-level exposure assessments allow the estimation of both agents' trajectories and the noise conditions they encounter over different activity contexts. However, it is important to understand the relevance of these assessments within the context of models initially designed for transport studies. Thus, the first objective of this doctoral thesis is to situate exposure estimation with agent-based models in relation to other exposure assessment methodologies, identifying its limitations, its strengths, and the avenues for its enhancement. Furthermore, there is still limited knowledge regarding the utilization and formulation of analyses based on these results. Consequently, the second objective is to develop innovative assessment frameworks, metrics, and indicators in the context of contemporary urban noise problems. To this end, two studies have been conducted: (a) a spatio-temporal assessment of the accessibility of a population to quiet areas, and (b) an assessment of critical exposure areas over the clock in the context of formulating Noise Action Plans. These studies rely on an open-source scenario applied to the Lyon Metropolitan Area composed of three main tools: EQASim, MATSim and NoiseModelling. EQASim is used to model the synthetic population, assign activity plans to agents, and model transport infrastructure and services. MATSim is used for the agent-based transport simulation, using the synthetic population and transportation system previously modeled by EQASim. NoiseModelling is used to characterize noise from the different transport traffic flows and environmental sources. It then calculates sound propagation and estimates the environmental noise conditions.
The findings of this study underscore the imperative of conceptualizing exposure assessment as a multifaceted process encompassing four modeling dimensions: the representation of space, time, the individual, and their trajectory of activities. (...)
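As an aside on the metric involved: the "long-term noise doses" criticized in point (2) above are energetic time averages, and the per-activity refinement of agent-based frameworks still aggregates the same way. A minimal sketch of the equivalent continuous level over one agent's day (Python; the activity plan and levels are invented for illustration):

import math

def daily_leq(segments):
    """Equivalent continuous sound level over a day, from (duration_h, level_dB)
    pairs, one pair per activity or trip episode of the agent's plan."""
    total_time = sum(d for d, _ in segments)
    energy = sum(d * 10 ** (level / 10) for d, level in segments)
    return 10 * math.log10(energy / total_time)

# home (night), commute, office, commute, home (evening)
plan = [(9, 45.0), (0.75, 72.0), (8, 55.0), (0.75, 74.0), (5.5, 42.0)]
print(f"L_eq,24h = {daily_leq(plan):.1f} dB")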
Delort, Jean-Yves. "Modèles de navigation sur le Web basés sur le contenu." Paris 6, 2003. http://www.theses.fr/2003PA066396.
Philibert, Thomas. "Modèles basés sur les données pour les écoulements turbulents séparés." Electronic Thesis or Diss., Bordeaux, 2024. http://www.theses.fr/2024BORD0499.
Reynolds-averaged Navier-Stokes (RANS) models are commonly used in industrial applications due to their efficiency in simulating complex fluid flows, offering a practical balance between accuracy and computational cost. However, RANS models have inherent limitations that can affect their accuracy, particularly in cases involving separated or highly complex flows. One significant issue is the discrepancy in Reynolds stresses compared to high-fidelity data obtained from Direct Numerical Simulation (DNS) or experimental measurements. These disparities can lead to inaccuracies in flow characteristic predictions, underscoring the need for improved modeling approaches to enhance the reliability of RANS results. In this work, we propose two innovative approaches aimed at addressing these discrepancies while retaining the computational efficiency of the Menter Shear Stress Transport (SST) model. The first approach involves an explicit algebraic model paired with a neural network (specifically a multilayer perceptron, or MLP) to correct the turbulent characteristic time within the RANS model. The second approach targets the Boussinesq approximation itself, using either an MLP or a Generative Adversarial Network (GAN) to directly correct this approximation based on data-driven insights. Further, to ensure physically consistent predictions, we integrate realizability constraints within both models by introducing penalization terms. Realizability conditions are essential for turbulence models, as they ensure that the predicted stress tensors align with fundamental physical principles, such as the positivity of turbulent kinetic energy. Incorporating these constraints enhances model stability and reliability during both the training and application phases. The proposed Reynolds stress model, augmented with characteristic time correction (via the MLP) and Boussinesq approximation correction (via MLP or GAN), demonstrates strong predictive performance in both in-sample and out-of-sample flow configurations. Tested across various flow scenarios, the model accurately captures key turbulent flow characteristics while maintaining physically realistic predictions. These findings indicate that our approaches enhance the adaptability and reliability of RANS models, enabling more accurate simulations of complex, industrially relevant flow conditions without the high computational costs associated with DNS.
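For reference, the Boussinesq approximation targeted by the second approach closes the Reynolds stresses with a scalar eddy viscosity; a data-driven correction can be written as an additive anisotropy term (the notation \(b_{ij}^{\mathrm{NN}}\) is illustrative, not the thesis's):

\[ -\overline{u_i' u_j'} = 2\,\nu_t S_{ij} - \frac{2}{3}\,k\,\delta_{ij} + b_{ij}^{\mathrm{NN}}, \qquad S_{ij} = \frac{1}{2}\left(\frac{\partial \bar{u}_i}{\partial x_j} + \frac{\partial \bar{u}_j}{\partial x_i}\right), \]

where \(k\) is the turbulent kinetic energy; realizability then requires, among other conditions, \(k \ge 0\) and a positive semi-definite predicted stress tensor.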
Amate, Laure. "Apprentissage de modèles de formes parcimonieux basés sur des représentations splines." Phd thesis, Université de Nice Sophia-Antipolis, 2009. http://tel.archives-ouvertes.fr/tel-00456612.
Amate, Laure. "Apprentissage de modèles de formes parcimonieux basés sur les représentations splines." Nice, 2009. http://www.theses.fr/2009NICE4117.
In many contexts it is important to be able to find compact representations of the collective morphological properties of a set of objects. This is the case of autonomous robotic platforms operating in natural environments that must use the perceptual properties of the objects present in their workspace to execute their mission. This thesis is a contribution to the definition of formalisms and methods for the automatic identification of such models. The shapes we want to characterize are closed curves corresponding to contours of objects detected in the scene. We begin with the formal definition of the notion of shape as classes of equivalence with respect to groups of basic geometric operators, introducing two distinct approaches that have been used in the literature: discrete and continuous. The discrete theory, admitting the existence of a finite number of recognizable landmarks, provides in an obvious manner a compact representation, but is sensitive to their selection. The continuous theory of shapes provides a more fundamental approach, but leads to shape spaces of infinite dimension, lacking the parsimony of the discrete representation. We thus combine in our work the advantages of both approaches, representing shapes of curves with splines: piece-wise continuous polynomials defined by sets of knots and control points. We first study the problem of fitting free-knot splines of varying complexity to a single observed curve. The trade-off between the parsimony of the representation and its fidelity to the observations is a well-known characteristic of model identification using nested families of increasing dimension. After presenting an overview of methods previously proposed in the literature, we single out a two-step approach which is formally sound and matches our specific requirements. It splits the identification, simulating a reversible jump Markov chain to select the complexity of the model, followed by a simulated annealing algorithm to estimate its parameters. We investigate the link between Kendall's shape space and spline representations when we take the spline control points as landmarks. We then consider the more complex problem of modeling a set of objects with similar morphological characteristics. We equate the problem to finding the statistical distribution of the parameters of the spline representation, modeling the knots and control points as unobserved variables. The identified distribution is the maximizer of a marginal likelihood criterion, and we propose a new Expectation-Maximization algorithm to optimize it. Because we may want to treat a large number of curves observed sequentially, we adapt an iterative (on-line) version of the EM algorithm recently proposed in the literature. For the choice of statistical distributions that we consider, both the expectation and the maximization steps must resort to numerical approximations, leading to a stochastic/on-line variant of the EM algorithm that, as far as we know, is implemented here for the first time.
Cortes, Cornax Mario. "Amélioration continue de chorégraphie de services : conception et diagnostic basés sur les modèles." Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENM020/document.
Organizations' business processes become increasingly complex and often depend on processes and services provided by other organizations. The term inter-organizational process appears to describe a process that goes beyond an organization's boundaries and integrates a set of processes with a common goal. From a technical point of view, organizations implement their internal processes as service orchestrations. To enable them to interact, it is essential to establish communication protocols to promote a common understanding among the participating services as well as ensuring their interoperability. In this context the service choreography concept appears. Choreography refers to a business contract describing the way business participants with a common goal coordinate their interactions. The overall point of view given by choreographies complements the local point of view given by orchestrations. Our work aims to understand and study the concept of choreography where we consider the intentional level (goals), the organizational level which is often captured by graphical models and the operational level that is focused on technical details. To do so, we propose a continuous improvement approach focusing on the design and diagnosis phases. We rely on models to better understand, build, analyze and manage the complexity of choreographies.
Salvati, Jean. "Les énigmes dans les modèles d'évaluation basés sur la consommation : analyse et solutions." Paris 9, 1995. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=1995PA090004.
This thesis addresses three puzzles in dynamic consumption-based asset pricing theory: Mehra and Prescott's equity premium and risk-free rate puzzles, and the excess volatility puzzle. Chapter 1 reviews traditional consumption-based asset pricing models with complete markets and time-separable preferences. Chapter 2 reviews the evidence on these models' inability to match the estimated first and second moments of asset returns for a reasonable value of the coefficient of relative risk aversion. Chapter 3 studies the impact of consumption habit persistence on a representative-agent model's predictions. The habit persistence model matches the estimated average asset returns for a coefficient of relative risk aversion close to twelve, but is unable to match the asset returns' standard deviations. Chapter 4 reviews three models with market incompleteness due to uninsurable shocks on labor income. These models fail to provide a solution to the puzzles. Chapter 5 introduces a model with incomplete markets, borrowing constraints, and incomplete participation in the stock market (caused by fixed information and education costs). For sufficiently strict borrowing constraints, the model is able to match the estimated first and second moments of asset returns for a coefficient of relative risk aversion equal to 2.
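For orientation, the time-separable models reviewed in chapter 1 price assets through the consumption Euler equation with CRRA utility \(u(c) = c^{1-\gamma}/(1-\gamma)\):

\[ p_t = \mathbb{E}_t\!\left[\beta \left(\frac{c_{t+1}}{c_t}\right)^{-\gamma} \left(p_{t+1} + d_{t+1}\right)\right], \]

and the equity premium puzzle is that matching observed average excess returns requires an implausibly large \(\gamma\); habit persistence (chapter 3) makes marginal utility depend on consumption relative to a habit stock rather than on \(c_t\) alone.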
Bénazéra, Emmanuel. "Diagnostic et reconfiguration basés sur des modèles hybrides concurrents : application aux satellites autonomes." Toulouse 3, 2003. http://www.theses.fr/2003TOU30105.
Márquez Borbón, Raymundo. "Nouveaux schémas de commande et d'observation basés sur les modèles de Takagi-Sugeno." Thesis, Valenciennes, 2015. http://www.theses.fr/2015VALE0040/document.
This thesis addresses the estimation and controller design for continuous-time nonlinear systems. The methodologies developed are based on the Takagi-Sugeno (TS) representation of the nonlinear model via the sector nonlinearity approach. All strategies aim at obtaining more relaxed conditions. The results presented for controller design are split into two parts. The first part is about standard TS models under control schemes based on: 1) a quadratic Lyapunov function (QLF); 2) a fuzzy Lyapunov function (FLF); 3) a line-integral Lyapunov function (LILF); 4) a novel non-quadratic Lyapunov functional (NQLF). The second part concerns TS descriptor models. Two strategies are proposed: 1) within the quadratic framework, conditions based on a general control law and some matrix transformations; 2) an extension of the non-quadratic approach based on a line-integral Lyapunov function (LILF) using non-PDC control law schemes and Finsler's Lemma; this strategy offers parameter-dependent linear matrix inequality (LMI) conditions instead of bilinear matrix inequality (BMI) constraints for second-order systems. On the other hand, the problem of state estimation for nonlinear systems via TS models is also addressed, considering: a) the particular case where premise vectors are based on measured variables and b) the general case where premise vectors can be based on unmeasured variables. Several examples have been included to illustrate the applicability of the obtained results.
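For reference, a TS model blends \(r\) linear submodels through membership functions \(h_i(z) \ge 0\), \(\sum_i h_i = 1\), and the quadratic (QLF) conditions of scheme 1) are the classical LMIs (stated here for the unforced case):

\[ \dot{x} = \sum_{i=1}^{r} h_i(z)\,(A_i x + B_i u), \qquad \exists\, P \succ 0 : \; A_i^{\top} P + P A_i \prec 0, \quad i = 1, \dots, r. \]

Fuzzy and line-integral Lyapunov functions relax these conditions by letting the Lyapunov certificate itself depend on the memberships \(h_i\).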
Aguessy, François-Xavier. "Évaluation dynamique de risque et calcul de réponses basés sur des modèles d’attaques bayésiens." Thesis, Evry, Institut national des télécommunications, 2016. http://www.theses.fr/2016TELE0016/document.
Information systems constitute an increasingly attractive target for attackers. Given the number and complexity of attacks, security teams need to focus their actions in order to select the most appropriate security controls. Because of the threat posed by advanced multi-step attacks, it is difficult for security operators to fully cover all vulnerabilities when deploying countermeasures. In this PhD thesis, we build a complete framework for static and dynamic risk assessment including prior knowledge on the information system and dynamic events, proposing responses to prevent future attacks. First, we study how to remediate the potential attacks that can happen in a system, using logical attack graphs. We build a remediation methodology to prevent the most relevant attack paths extracted from a logical attack graph. In order to help an operator choose between several remediation candidates, we rank them according to a cost of remediation combining operational and impact costs. Then, we study the dynamic attacks that can occur in a system. Attack graphs are not directly suited for dynamic risk assessment. Thus, we extend this model to build dynamic risk assessment models evaluating the attacks that are the most likely. The hybrid model is subdivided into two complementary models: (1) the first analyses ongoing attacks and provides the hosts' compromise probabilities, and (2) the second assesses the most likely future attacks. We study the sensitivity of their probabilistic parameters. Finally, we validate the accuracy and usage of both models in the domain of cybersecurity, by building them from a topological attack graph.
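As a toy illustration of likelihood propagation over an acyclic attack graph (a simple noisy-OR approximation, far cruder than the hybrid Bayesian models of the thesis; graph and probabilities are invented):

# Toy attack graph: node -> list of (parent, exploit success probability)
graph = {
    "internet":    [],
    "webserver":   [("internet", 0.6)],
    "workstation": [("internet", 0.3), ("webserver", 0.5)],
    "database":    [("webserver", 0.4)],
}

def compromise_prob(node):
    """Noisy-OR: a node is compromised if at least one parent is compromised
    and the corresponding exploit succeeds (parents treated as independent)."""
    parents = graph[node]
    if not parents:
        return 1.0  # attack source, assumed under the attacker's control
    p_safe = 1.0
    for parent, p_exploit in parents:
        p_safe *= 1.0 - compromise_prob(parent) * p_exploit
    return 1.0 - p_safe

for node in graph:
    print(node, round(compromise_prob(node), 3))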
Placzek, Antoine. "Construction de modèles d'ordre réduit non-linéaires basés sur la décomposition orthogonale propre pour l'aéroélasticité." Phd thesis, Conservatoire national des arts et metiers - CNAM, 2009. http://tel.archives-ouvertes.fr/tel-00461691.
Boubekeur, Fatiha. "Contribution à la définition de modèles de recherche d'information flexibles basés sur les CP-Nets." Phd thesis, Université Paul Sabatier - Toulouse III, 2008. http://tel.archives-ouvertes.fr/tel-00355843.
Full textBoubekeur-Amirouche, Fatiha. "Contribution à la définition de modèles de recherche d'information flexibles basés sur les CP-Nets." Toulouse 3, 2008. http://thesesups.ups-tlse.fr/257/.
This thesis addresses two main problems in IR: automatic query weighting and document semantic indexing. Our global contribution consists in the definition of a theoretical flexible information retrieval (IR) model based on CP-Nets. The CP-Net formalism is used for the graphical representation of flexible queries expressing qualitative preferences, and for the automatic weighting of such queries. Furthermore, the CP-Net formalism is used as an indexing language in order to represent document representative concepts and related relations in a compact way. Concepts are identified by projection onto WordNet. Concept relations are discovered by means of semantic association rules. A query evaluation mechanism based on CP-Net graph similarity is also proposed.
Turchi, Hervé. "Etude de modèles de déformation basés sur l'analyse multirésolution : Application en Téléopération Assistée par Ordinateur." Montpellier 2, 1998. http://www.theses.fr/1998MON20260.
Krüger, Eiko. "Développement d'algorithmes de gestion optimale des systèmes de stockage énergétique basés sur des modèles adaptatifs." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAT096/document.
Limited fossil energy resources and the prospect of impending climate change have led the European Union to engage in a restructuring of the electricity sector towards a sustainable, economical and reliable power supply. Energy storage systems have the potential of an enabling technology for the integration of renewable energy sources, which underlies this transition. They allow the delivery of energy produced by a local source to the electric grid to be shifted in time and can compensate random fluctuations in power output. Through such smoothing and levelling, energy storage systems can make the production of variable renewable sources predictable and amenable to control. In order to follow scheduled production and honor their commitments toward the grid operator, renewable power plants equipped with storage systems make use of an energy management system. While direct control ensures tracking of the current production setpoint, energy management employs constrained optimization methods from operations research to organize the usage of the storage systems. The complexity of the storage system model used in optimization must frequently be adapted to the specific application. Batteries show non-linear, state-dependent behavior. Their model must be simplified for use in the most common optimization algorithms. Moreover, precise battery models based on physical modelling require time-consuming controlled testing for parameterization. Lastly, the electrical behavior of a battery evolves with aging, which calls for regular recalibration of the model. This thesis presents a methodology for on-line battery model identification and the use of such adaptive models in the optimal management of an electrical plant with energy storage. After a summary of battery models, observer methods for on-line identification based on control theory are developed for the case of an equivalent circuit model. The extraction of a simplified model for energy management is described and compared to direct regression analysis of the operational data. The identification methods are tested on a real industrial-sized storage system operated in a photovoltaic power plant on the island of La Réunion. Model identification applied to data from an earlier battery aging study shows the use of the method for tracking the state of health. The formulation of optimization problems encountered in the production scheduling of a photovoltaic power plant with energy storage is developed incorporating the adaptive battery models. Mixed-integer linear programming and dynamic programming implementations are used in case studies based on market integration of the plant or regulated feed-in tariffs. A simulation model based on the outline of the plant control architecture is used to simulate the operation and evaluate the solutions. Different configurations of the management system are tested, including static and variable battery models and the integration of battery aging. A statistical analysis of the results obtained for multiple cases of photovoltaic production and forecast error shows the advantage of using variable battery models in this case study.
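A minimal sketch of the on-line identification idea, shrunk to a zeroth-order equivalent circuit U = OCV - R0*I fitted by recursive least squares (the thesis develops observer methods on a richer equivalent circuit model; all values below are synthetic):

import numpy as np

def rls_step(theta, P, phi, y, lam=0.99):
    """One recursive least-squares update with forgetting factor lam."""
    Pphi = P @ phi
    k = Pphi / (lam + phi @ Pphi)          # gain vector
    theta = theta + k * (y - phi @ theta)  # parameter update
    P = (P - np.outer(k, Pphi)) / lam      # covariance update
    return theta, P

rng = np.random.default_rng(0)
ocv_true, r0_true = 3.7, 0.05              # volts, ohms
theta, P = np.zeros(2), np.eye(2) * 1e3    # estimate of [OCV, R0]
for _ in range(500):
    i = rng.uniform(-2.0, 2.0)                        # current draw (A)
    u = ocv_true - r0_true * i + rng.normal(0, 1e-3)  # measured voltage (V)
    theta, P = rls_step(theta, P, np.array([1.0, -i]), u)
print(theta)  # converges near [3.7, 0.05]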
Trenquier, Henri. "Analyse et explication par des techniques d'argumentation de modèles d'intelligence artificielle basés sur des données." Electronic Thesis or Diss., Toulouse 3, 2023. http://www.theses.fr/2023TOU30355.
Classification is a very common task in Machine Learning (ML), and the ML models created to perform it tend to reach human-comparable accuracy, at the cost of transparency. The surge of such AI-based systems in the public's daily life has created a need for explainability. Abductive explanations are one of the most popular types of explanations provided for the purpose of explaining the behavior of complex ML models sometimes considered as black boxes. They highlight feature values that are sufficient for the model to make a prediction. In the literature, they are generated by exploring the whole feature space, which is unreasonable in practice. This thesis tackles this problem by introducing explanation functions that generate abductive explanations from a sample of instances. It shows that such functions should be defined with great care, since they cannot satisfy two desirable properties at the same time, namely existence of explanations for every individual decision (success) and correctness of explanations (coherence). This thesis provides a parameterized family of argumentation-based explanation functions, each of which satisfies one of the two properties. It studies their formal properties and their experimental behaviour on different datasets.
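To make the objects concrete: a brute-force enumeration of smallest sufficient feature subsets for a toy classifier, which walks the whole feature space exactly as the thesis argues is unreasonable in practice (model and domains are invented):

from itertools import combinations

def abductive_explanations(model, instance, domains):
    """Smallest feature subsets whose observed values fix the model's
    prediction regardless of the values taken by the remaining features."""
    target = model(instance)
    feats = list(instance)

    def sufficient(fixed):
        free = [f for f in feats if f not in fixed]
        def all_completions(assign, rest):
            if not rest:
                return model(assign) == target
            f, tail = rest[0], rest[1:]
            return all(all_completions({**assign, f: v}, tail) for v in domains[f])
        return all_completions({f: instance[f] for f in fixed}, free)

    for size in range(len(feats) + 1):
        found = [set(s) for s in combinations(feats, size) if sufficient(s)]
        if found:
            return found

model = lambda x: int(x["a"] == 1 and x["b"] == 1)  # toy: predicts 1 iff a and b
print(abductive_explanations(model, {"a": 1, "b": 1, "c": 0},
                             {f: [0, 1] for f in "abc"}))  # -> [{'a', 'b'}]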
Le, Courtois Olivier. "Impact des évènements informatifs sur l'activité financière des entreprises." Lyon 1, 2003. http://www.theses.fr/2003LYO10197.
Robin, Ludovic. "Vérification formelle de protocoles basés sur de courtes chaines authentifiées." Electronic Thesis or Diss., Université de Lorraine, 2018. http://www.theses.fr/2018LORR0019.
Modern security protocols may involve humans in order to compare or copy short strings between different devices. Multi-factor authentication protocols, such as Google 2-factor or 3D-Secure, are typical examples of such protocols. However, such short strings may be subject to brute force attacks. In this thesis we propose a symbolic model which includes attacker capabilities for both guessing short strings and producing collisions when short strings result from an application of weak hash functions. We propose a new decision procedure for analyzing (a bounded number of sessions of) protocols that rely on short strings. The procedure has been integrated in the AKISS tool and tested on protocols from the ISO/IEC 9798-6:2010 standard.
Zheng, Jun. "Elicitation des Préférences pour des Modèles d'Agrégation basés sur des Points de référence : Algorithmes et Procédures." Phd thesis, Ecole Centrale Paris, 2012. http://tel.archives-ouvertes.fr/tel-00740655.
Loger, Benoit. "Modèles d'optimisation basés sur les données pour la planification des opérations dans les Supply Chain industrielles." Electronic Thesis or Diss., Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2023. http://www.theses.fr/2023IMTA0389.
With the increasing complexity of supply chains, automated decision-support tools become necessary in order to apprehend the multiple sources of uncertainty that may impact them, while maintaining a high level of performance. To meet these objectives, managers rely more and more on approaches capable of improving the resilience of supply chains by proposing robust solutions that remain valid despite uncertainty, guaranteeing both quality of service and control of the costs induced by the production, storage and transportation of goods. As data collection and analysis become central to defining the strategy of companies, the proper use of this information to characterize more precisely these uncertainties and their impact on operations is becoming a major challenge for optimizing modern production and distribution systems. This thesis addresses these new challenges by developing different mathematical optimization methods based on historical data, with the aim of proposing robust solutions to several supply and production planning problems. To validate the practical relevance of these new techniques, numerical experiments on various applications compare them with several other classical approaches from the literature. The results obtained demonstrate the value of these contributions, which offer comparable average performance while reducing its variability in an uncertain context. In particular, the solutions remain satisfactory when confronted with extreme scenarios, whose probability of occurrence is low. Finally, the computational times of the procedures developed remain competitive, making them suitable for industrial-scale applications.
Lequesne, Justine. "Tests statistiques basés sur la théorie de l'information, applications en biologie et en démographie." Caen, 2015. http://www.theses.fr/2015CAEN2007.
Entropy and divergence, basic concepts of information theory, are used to construct statistical tests in two different contexts, which divides this thesis into two parts. The first part deals with goodness-of-fit tests for distributions with densities based on the maximum entropy principle under generalized moment constraints, for Shannon entropy and also for generalized entropies. These tests are then applied in biology to a study of DNA replication. The second part deals with a comparative analysis of contingency tables through tests based on the Kullback-Leibler divergence between a distribution with fixed marginals and another acting as a reference. Application of this new method to real data sets, and especially to results of PISA surveys in demography, shows its advantages compared to classical methods.
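For reference, the two building blocks: under \(k\) generalized moment constraints \(\mathbb{E}_f[T_j(X)] = \mu_j\), the maximum entropy density has the exponential-family form

\[ f^{*}(x) = \exp\Big(\lambda_0 + \sum_{j=1}^{k} \lambda_j T_j(x)\Big), \]

and the contingency-table tests of the second part rest on the Kullback-Leibler divergence

\[ D(P \,\|\, Q) = \sum_{x} p(x) \log \frac{p(x)}{q(x)} \;\ge\; 0, \]

which vanishes exactly when \(P = Q\).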
Engilberge, Sylvain. "Nouveaux développements en biologie structurale basés sur des complexes de lanthanide." Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAY094/document.
Since the first protein structure was determined in the 1950s, X-ray crystallography has emerged as a method of choice to obtain structural data at atomic resolution. Despite technological advances such as new synchrotron sources, hybrid pixel detectors, and high-performance software, obtaining an electron density map of a biological macromolecule is still limited by two major bottlenecks, namely producing high-quality single crystals and solving the phase problem. This thesis presents a new lanthanide complex called "Crystallophore" (Tb-Xo4). This compound has been developed in collaboration with Olivier Maury and François Riobé of the Laboratoire de chimie Matériaux Fonctionnels et Photonique (ENS Lyon). The design of this new complex builds on fifteen years of development in the field of structural biology. This thesis highlights the effects of Tb-Xo4 on the crystallisation and structure determination of biological macromolecules. Indeed, the addition of Tb-Xo4 to a protein solution induces a large number of new and unique crystallization conditions. The analysis of the structures of several proteins co-crystallized with Tb-Xo4 made it possible both to highlight the high phasing power of Tb-Xo4 and to describe finely the supramolecular interaction of the complex with the macromolecules. This work led to protocols dedicated to crystallization and phasing assisted by Tb-Xo4. Finally, this thesis proposes a model explaining the unique properties of this new lanthanide complex.
Veilleux, Dery. "Modèles de dépendance avec copule Archimédienne : fondements basés sur la construction par mélange, méthodes de calcul et applications." Master's thesis, Université Laval, 2018. http://hdl.handle.net/20.500.11794/33039.
The law of large numbers, which states that the statistical characteristics of a random sample will converge to the characteristics of the whole population, is the foundation of the insurance industry. Insurance companies rely on this principle to evaluate the risk of insured events. However, when we introduce dependencies between the components of the random sample, they may drastically affect the overall risk profile of the sample in comparison to the whole population. This is why it is essential to consider the effect of dependency when aggregating insurance risks, from which stems the interest given to dependence modeling in actuarial science. In this thesis, we study dependence modeling in a portfolio of risks for which a mixture random variable (rv) introduces dependency. After introducing the use of exponential mixtures in actuarial risk modeling, we show how this mixture construction can define Archimedean copulas, a powerful tool for dependence modeling. First, we demonstrate how an Archimedean copula constructed via a continuous mixture can be approximated with a copula constructed by discrete mixture. Then, we derive explicit expressions for a few quantities related to the aggregated risk. The common mixture representation of Archimedean copulas is then the basis of a computational strategy proposed to compute the distribution of the sum of risks in a general setup. These results are then used to investigate risk models with respect to aggregation, capital allocation and ruin problems. Finally, we discuss an extension to nested Archimedean copulas, a generalization of dependency via common mixture that includes different levels of dependency.
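For reference, the common mixture construction referred to throughout: if \(\Theta > 0\) is the mixing rv with Laplace transform \(\psi(t) = \mathbb{E}[e^{-t\Theta}]\), components that are conditionally i.i.d. given \(\Theta\) have the Archimedean copula

\[ C(u_1, \dots, u_d) = \psi\big(\psi^{-1}(u_1) + \cdots + \psi^{-1}(u_d)\big); \]

for instance, a gamma-distributed \(\Theta\) yields the Clayton copula, and discretizing the distribution of \(\Theta\) gives the approximating copulas of the first contribution.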
Bouhaddou, Imane. "Vers une optimisation de la chaine logistique : proposition de modèles conceptuels basés sur le PLM (Product Lifecycle Management)." Thesis, Le Havre, 2015. http://www.theses.fr/2015LEHA0026/document.
It is recognized that competition is shifting from a "firm versus firm" perspective to a "supply chain versus supply chain" perspective. Therefore, the ability to optimize the supply chain is becoming the critical issue for companies to win a competitive advantage. Furthermore, all members of a given supply chain must work together to respond rapidly to changes in market demand. In the current context, enterprises not only must enhance their relationships with each other, but also need to integrate their business processes through product life cycle activities. This has led to the emergence of collaborative product lifecycle management, commonly known as PLM. The objective of this thesis is to define a methodological approach which answers the following problem: how can PLM contribute to supply chain optimization? We adopt, in this thesis, a hybrid approach combining PLM and mathematical models to optimize decisions for the simultaneous design of the product and its supply chain. We propose conceptual models to formally resolve the compromise between PLM and mathematical models for supply chain optimization. Unlike the traditional centralized approaches used to treat the problem of integrated design of the product and its supply chain, which generate complex mathematical models, we adopt an approach combining centralized decisions, integrating the constraints of the different supply chain partners during product design, with decentralized decisions when it comes to locally optimizing each supply chain partner. The decentralized approach reduces the complexity of solving the mathematical models and allows the supply chain to respond quickly to the evolution of the local conditions of each partner. PLM ensures the integration of the different supply chain partners. Indeed, the centralization of information by the PLM makes it possible to take into consideration the dependence between these partners, thereby improving local optimization results.
Dubois, Anne. "Tests de bioéquivalence basés sur les modèles non linéaires à effets mixtes : application à la pharmacocinétique des médicaments biologiques." Paris 7, 2011. http://www.theses.fr/2011PA077188.
This thesis considers bioequivalence tests to analyse crossover trials using nonlinear mixed effects modelling (NLMEM). All proposed applications concern biologic drugs. We first compared the usual bioequivalence tests based on the individual parameters estimated by non-compartmental analysis (NCA) to those based on the empirical Bayes estimates (EBE) obtained from NLMEM. We observed an inflation of the type I error of the EBE-based tests which is linked to the shrinkage and thus limits the use of such tests. Then, we studied the Wald test and the likelihood ratio test (LRT) based on an NLMEM including the treatment, period, and sequence effects, on top of the between- and within-subject variability. We proposed a method to perform the Wald test on a secondary parameter of the structural model. For small sample size designs, the type I error of the Wald test and the LRT is inflated. For the Wald test, this inflation can be corrected using the empirical estimation standard error or an approach based on the weighting of the estimation variance. These results were applied to the analysis of two bioequivalence trials comparing different formulations of somatropin or erythropoietin. Finally, we demonstrated that the use of the Fisher information matrix allows the evaluation and the optimisation of designs of bioequivalence crossover trials which will be analysed by NLMEM. To conclude, this work underlines the interest of bioequivalence analysis based on NLMEM applied to clinical trials on biologic drugs.
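For orientation, average bioequivalence on a log-transformed parameter (AUC, Cmax) tests, at the standard regulatory limits,

\[ H_0 : |\beta_T| \ge \log 1.25 \quad \text{versus} \quad H_1 : |\beta_T| < \log 1.25, \]

where \(\beta_T\) is the treatment effect; the Wald approach concludes bioequivalence when the 90% confidence interval \(\hat{\beta}_T \pm 1.645\,\mathrm{SE}(\hat{\beta}_T)\) lies within \((-\log 1.25,\, \log 1.25)\).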
De, Almeida Pereira Dalay Israel. "Analyse et spécification formelle des systèmes d’enclenchement ferroviaire basés sur les relais." Thesis, Centrale Lille Institut, 2020. http://www.theses.fr/2020CLIL0009.
Relay-based Railway Interlocking Systems (RIS) are critical systems and must be specified and proved safe in order to guarantee the absence of hazards during their execution. However, this is a challenging task, since relay-based RIS are generally only structurally modelled, in such a way that their behavioural analyses are made manually based on the experts' knowledge of the system. Thus, the existence of a behavioural formal description of RIS is imperative in order to be able to perform safety proofs. Furthermore, as computer-based RIS tend to be less expensive and more maintainable and extendable, the industry has an interest in the existence of a methodology for transforming existing relay-based RIS into computer-based RIS. Formal specification methodologies are grounded in strong mathematical foundations that allow systems to be proved safe. Besides, many formal specification languages support not only the verification, but also the implementation of these systems through a formal development process. Thus, formal methods may be the key to proving RIS safety and implementing them with computer-based technologies. This thesis makes two main propositions. Firstly, it presents an analysis of the information in relay diagrams and a formalisation of the relay-based RIS structure and behaviour based on mathematical expressions, as a way to create a certain level of formalisation of the systems. The resulting model can be extended and adapted in order to conform to different railway contexts, and it can be used to support the specification of these systems in different formal specification languages. Then, this thesis presents how the RIS formal model can be adapted in order to formally specify these systems in the B-method, a formal specification language with a successful history in the railway field which allows systems to be proved safe and implemented as computer-based systems. As a result, this thesis presents a complete methodology for the specification and verification of relay-based Railway Interlocking Systems, supporting the systems' safety proof in different contexts and their specification and implementation in many different formal languages.
Thomas, Marie. "Influence de l'activité de l'eau sur les interactions lactose / β-lactoglobuline de poudres laitières modèles lyophilisées." Vandoeuvre-les-Nancy, INPL, 2004. http://docnum.univ-lorraine.fr/public/INPL_T_2004_THOMAS_M.pdf.
Model milk powders were prepared by co-lyophilization of lactose and β-lactoglobulin and stored at various water activities (aw) to better understand interactions between the two components in a dehydrated state. The presence of β-lactoglobulin influenced the critical aw at which physico-chemical modifications like non-enzymatic browning, particle caking and, particularly, lactose crystallization occurred. Moisture sorption isotherms, differential scanning calorimetry and X-ray diffraction demonstrated that β-lactoglobulin can both delay crystallization and increase the crystallization aw to 0.54, and even 0.59, in model powders containing 10 to 30% and 40% β-lactoglobulin respectively, compared to 0.43 for pure lactose crystallization. This stabilizing effect was due to the development of both covalent interactions and hydrogen bonding between lactose and β-lactoglobulin, as shown by non-enzymatic browning and spectroscopic studies. When model powders were stored at an aw just below their critical aw, preferential protein hydration occurred: hydrogen bonding between lactose and β-lactoglobulin decreased, whereas it increased between water and β-lactoglobulin. This change in hydrogen bonding seems to be responsible for the stabilizing effect against crystallization observed in model milk powders.
Bissessur, Yasdeo. "Identification de la non-linéarité d'un système de contrôle de vibration par des modèles basés sur les réseaux de neurones." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/MQ35722.pdf.
Bey, Aurélien. "Reconstruction de modèles CAO de scènes complexes à partir de nuages de points basés sur l'utilisation de connaissances a priori." Thesis, Lyon 1, 2012. http://www.theses.fr/2012LYO10103/document.
3D models are often used in order to plan the maintenance of industrial environments. When it comes to the simulation of maintenance interventions, these 3D models have to describe accurately the actual state of the scenes they stand for. These representations are usually built from 3D point clouds, that is, huge sets of 3D measurements acquired in industrial sites, which guarantees the accuracy of the resulting 3D model. Although there exist many works addressing the reconstruction problem, there is no solution to our knowledge which can provide results that are reliable enough to be further used in industrial applications. Therefore this task is in fact handled by human experts nowadays. This thesis aims at providing a solution automating the reconstruction of industrial sites from 3D point clouds and providing highly reliable results. For that purpose, our approach relies on some available a priori knowledge and data about the scene to be processed. First, we consider that the 3D models of industrial sites are made of simple primitive shapes. Indeed, in the Computer Aided Design (CAD) field, this kind of scene is described as an assembly of shapes such as planes, spheres, cylinders, cones, tori... Our own work focuses on planes, cylinders and tori, since these three kinds of shapes allow the description of most of the main components in industrial environments. Furthermore, we set some a priori rules about the way shapes should be assembled in a CAD model standing for an industrial facility, which are based on expert knowledge about these environments. Eventually, we suppose that a CAD model standing for a scene which is similar to the one to be processed is available. This a priori CAD model typically comes from the prior reconstruction of a scene which looks like the one we are interested in. Despite the fact that they are theoretically similar, there may be significant differences between the sites since each one has its own life cycle. Our work first states the reconstruction task as a Bayesian problem in which we have to find the most probable CAD model with respect to both the point cloud and the a priori expectations. In order to reach the CAD model maximizing the target probability, we propose an iterative approach which improves the solution under construction each time a new randomly generated shape is tentatively inserted into it. Thus, the CAD model is built step by step by adding and removing shapes, until the algorithm reaches a local maximum of the target probability.
Izzet, Guillaume. "Modèles supramoléculaires de cupro-enzymes à site mononucléaire : synthèse, caractérisation et réactivité de complexes basés sur des calix[6]arènes." Paris 11, 2004. http://www.theses.fr/2004PA112216.
The goal of this thesis is to develop a biomimetic mononuclear system of copper monooxygenases based on a calix[6]arene structure. The behavior of cupric complexes from a first generation of ligands has been studied in oxidation catalysis. This study showed that these complexes have catalytic activity. However, they did not present a particular selectivity towards the substrates tested. A structural study showed that these ligands could stabilize complexes of various nuclearities in the presence of bridging anionic ligands. This structural diversity is due to the great flexibility of the ligands, associated with the repulsion between the anionic ligands and the calixarene. In order to protect the cupric ion from the external medium, a new generation of ligands named calixcryptands has been developed. The copper complexes of three calixcryptand ligands presenting different electronic properties were studied. The cupric complexes are excellent receptors of neutral molecules. The metal ion of each complex is maintained in very similar geometries. The cuprous complexes resulting from these ligands presented very singular behaviors, especially concerning their affinity for various guest molecules (nitriles, CO, O2). One cuprous complex showed a very unusual reactivity towards dioxygen. This reaction led to the decoordination of the superoxide and the oxidation of the ligand. This preliminary study constitutes the first example of oxygen activation by a single copper center.
Brigui, Frédéric. "Algorithmes d'imagerie SAR polarimétriques basés sur des modèles à sous-espace : application à la détection de cible sous couvert forestier." Paris 10, 2010. http://www.theses.fr/2010PA100171.
This thesis deals with the development of new SAR processors for the detection of targets under foliage. For this application, classical SAR images are very noisy: the target has a low response and there are many false alarms due to interference, mainly from the trunks of the forest. To detect targets and reduce interference, we reconsider the way a SAR image is processed by using models which take into account the scattering properties of the target and the interference. These models are developed for signals in single polarisation and in double polarisation (HH and VV) and are defined with low-rank subspaces generated from the responses of canonical elements. From this modeling, we develop new SAR processors. First, we focus on the detection of the target and on the reduction of false alarms. The performances of these processors in terms of detection and false alarm reduction are evaluated with realistic simulated data, and we emphasize the interest of using polarimetric information. We also apply these algorithms to real data, which allows us to analyze the images in a real-life context.
Charhad, Mbarek. "Modèles de documents vidéos basés sur le formalisme des graphes conceptuels pour l'indexation et la recherche par le contenu sémantique." Université Joseph Fourier (Grenoble), 2005. http://www.theses.fr/2005GRE10186.
In the case of video, there are a number of specificities due to its multimedia aspect. For instance, a given concept (person, object...) can be present in different ways: it can be seen, it can be heard, it can be talked of, and combinations of these representations can also occur. Of course, these distinctions are important for the user. Queries involving a concept C such as "Show me a picture of C" or "I want to know what C2 has said about C" are likely to give quite different answers. The first one would look for C in the image track, while the second would look in the audio track for a segment in which C2 is the speaker and C is mentioned in the speech. The context of this study is multimedia information modelling, indexing and retrieval. At the theoretical level, our contribution consists in the proposal of a model for the representation of the semantic contents of video documents. This model permits the synthetic and integrated taking into account of data elements from each medium (image, text, audio). The instantiation of this model is implemented using the conceptual graph (CG) formalism. The choice of this formalism is justified by its expressivity and its adequacy with content-based information indexing and retrieval. Our experimental contribution consists in the (partial) implementation of the CLOVIS prototype. We have integrated the proposed model into a content-based video indexing and retrieval system in order to evaluate its contributions in terms of effectiveness and precision.
Soulier, Laure. "Définition et évaluation de modèles de recherche d’information collaborative basés sur les compétences de domaine et les rôles des utilisateurs." Toulouse 3, 2014. http://thesesups.ups-tlse.fr/2735/.
The research topic of this document deals with a particular setting of information retrieval (IR), referred to as collaborative information retrieval (CIR), in which a set of multiple collaborators share the same information need. Collaboration is particularly used in the case of complex tasks, in which an individual user may have insufficient knowledge and may benefit from the expertise, knowledge or complementarity of other collaborators. This multi-user context raises several challenges in terms of search interfaces as well as ranking models, since new paradigms must be considered, namely division of labor, sharing of knowledge and awareness. These paradigms aim at avoiding redundancy between collaborators in order to reach a synergistic effect within the collaboration process. Several approaches have been proposed in the literature. First, search interfaces have been oriented towards user mediation in order to support collaborators' actions through information storage or communication tools. Second, and closer to our contributions, previous work focuses on the information access issue by designing ranking models adapted to collaborative environments, dealing with the challenges of (1) personalizing result sets to collaborators, (2) favoring the sharing of knowledge, (3) dividing the labor among collaborators and/or (4) considering particular roles of collaborators within the information seeking process. In this thesis, we focus more particularly on two main aspects of the collaboration: - The expertise of collaborators, by proposing retrieval models adapted to the domain expertise level of collaborators. The expertise levels might be vertical, in the case of a domain expert and a novice, or horizontal when collaborators have different subdomain expertise. We therefore propose two two-step CIR models, including a document relevance scoring with respect to each role and a document allocation to user roles through the Expectation-Maximization (EM) learning method applied to the document relevance scores, in order to assign each document to the most likely suited user. - The complementarity of collaborators throughout the information seeking process, by mining their roles under the assumption that collaborators might be different and complementary in some skills. We propose two algorithms, based either on predefined roles or on latent roles, which (1) learn about the roles of the collaborators using various search-related features for each individual involved in the search session, and (2) adapt the document ranking to the mined roles of collaborators.
Gaudreault, Mathieu. "Modèles d’identification de tissu basés sur des images acquises avec un tomodensitomètre à double énergie pour des photons à faible énergie en curiethérapie." Master's thesis, Université Laval, 2014. http://hdl.handle.net/20.500.11794/25366.
Clinical Dual-Energy Computed Tomography (DECT) images allow the determination of the effective atomic number and the electron density. The purpose of this study is to develop a new tissue assessment model, named the reduced three-element tissue model, for dose calculations from DECT images in brachytherapy, and to compare it to a known identification method, assignment through the Mahalanobis distance. Both models are applied to DECT scans of the Gammex RMI 467 phantom and to a subset of 10 human tissues. Dose distributions are calculated from Monte Carlo simulations with a point source having the energy spectrum of 125I. The reduced three-element tissue model provides dose equivalence to reference tissues and is equivalent to the calculation of the Mahalanobis distance. The model constructed can be used as a scheme to assess tissues from DECT images for dose calculation.
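A minimal sketch of the comparison method, assignment through the Mahalanobis distance, on the (effective atomic number, electron density) pair a DECT scan provides; the reference values and covariance below are illustrative only:

import numpy as np

# Reference tissues as (Z_eff, relative electron density); values illustrative
references = {
    "adipose": np.array([6.4, 0.95]),
    "muscle":  np.array([7.6, 1.04]),
    "bone":    np.array([12.3, 1.45]),
}
cov_inv = np.linalg.inv(np.array([[0.09, 0.0],      # assumed measurement
                                  [0.0, 0.0016]]))  # noise on (Z_eff, rho_e)

def assign_tissue(measurement):
    """Assign a DECT voxel to the reference tissue at minimal Mahalanobis distance."""
    d2 = lambda mu: (measurement - mu) @ cov_inv @ (measurement - mu)
    return min(references, key=lambda name: d2(references[name]))

print(assign_tissue(np.array([7.4, 1.02])))  # -> "muscle"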
Rumeau, Pierre. "Etude par l'évaluation et l'analyse de risque des possibilités de mise en production de services basés sur les HIS." Phd thesis, Université de Grenoble, 2010. http://tel.archives-ouvertes.fr/tel-00825349.
Full text
Delorme-Costil, Alexandra. "Modèles prédictifs et adaptatifs pour la gestion énergétique du bâtiment résidentiel individuel : réseaux de neurones artificiels basés sur les données usuellement disponibles." Thesis, Ecole nationale des Mines d'Albi-Carmaux, 2018. http://www.theses.fr/2018EMAC0020.
Full text
The use of predictive control makes it possible to reduce the energy consumption of residential buildings without reducing the inhabitants' comfort. The company BoostHeat develops a high-efficiency thermodynamic furnace for this purpose. Simultaneous production of domestic hot water (DHW) and heating allows many control strategies to optimize performance. The use of predictive control makes it possible to anticipate energy needs, to take into account the impact of building inertia on indoor temperature, and thus to make production management choices that minimize energy consumption. The models used today in predictive control are constraining. Indeed, these models require large amounts of data, either on a representative sample of buildings or on each modeled building. They may also need detailed studies of the building, the occupants, and their consumption practices. In order to allow BoostHeat to use predictive control without going through a complex modeling step at every furnace installation, we propose adaptive models using information commonly available on a typical installation. We choose to develop artificial neural networks for the prediction, on the one hand, of DHW consumption and, on the other hand, of the indoor temperature of the building. Artificial neural networks have already been used to model the energy consumption of a specific building; however, our models are generic and automatically adapt to the building in which the furnace is installed. Many models are developed to study the impact of the choice of inputs, the amount of learning data, and the artificial neural network architecture on prediction accuracy. The DHW consumption prediction models are tested on three experimental cases, while the indoor temperature prediction models are tested on two experimental cases and one hundred and twenty simulated cases. This makes it possible to test their adaptation to the entire French housing stock. We show, for the prediction of DHW consumption as for the prediction of indoor temperature, that two weeks of collected data are sufficient for a good adaptation of the models to a specific case. The most efficient model for the prediction of domestic hot water consumption needs only the consumption at previous time steps. The indoor temperature prediction model performs better on less insulated buildings. The results obtained are promising for the application of predictive control on a large scale.
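To make the approach concrete, the sketch below trains a small artificial neural network to predict the next hour of DHW consumption from the consumption at previous time steps only, using two weeks of (here synthetic) data, echoing the abstract's findings. The architecture and data are illustrative assumptions, not the thesis's models.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_lagged(series, n_lags=24):
    """Turn an hourly DHW consumption series into (lags -> next value) pairs."""
    X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    y = series[n_lags:]
    return X, y

rng = np.random.default_rng(0)
hours = np.arange(14 * 24)  # two weeks of hourly data, as in the abstract
dhw = np.clip(np.sin(2 * np.pi * hours / 24) + rng.normal(0, 0.3, hours.size),
              0, None)      # synthetic daily-cycle consumption, never negative

X, y = make_lagged(dhw)
model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
model.fit(X, y)
print(model.predict(dhw[-24:].reshape(1, -1)))  # next-hour forecast
```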
Castonguay-Lebel, Zoé. "La modulation de l'activité transcriptionnelle des enképhalines suivant l'exposition répétée à un agent stresseur." Master's thesis, Université Laval, 2008. http://hdl.handle.net/20.500.11794/20157.
Full text
Lai, Chiara. "Le rapport espace/activité au cœur du processus d'appropriation des nouveaux espaces de travail : du flex office aux environnements de travail basés sur l'activité (activity-based workspaces)." Electronic Thesis or Diss., Paris, HESAM, 2023. http://www.theses.fr/2023HESAC048.
Full text
This Cifre thesis is part of a research-intervention conducted within a real estate consulting firm. It focuses on new "flexible" workspaces (flex office, dynamic work environment, etc.), which we refer to as activity-based workspaces (ABW). These configurations share a single generic spatial organization: shared, unallocated workstations, arranged within open platforms and providing workers with a range of spatial resources that can be mobilized by individuals and collectives to support their activity. These workspaces are thus based on the promise of a fitting relationship between space and activity, while at the same time conveying a prescriptive vision of work and how it should be done. Based on a cultural-historical perspective on activity, we mobilize the approaches of activity theory (Clot, 1999; Engeström, 2001) and situated cognition (Lave, 1988) to approach appropriation as a movement of (re)construction of the subject's activity, anchored in the situation in which he or she acts and which is historically constituted by his or her action. From this perspective, how is the process of appropriation of these new workspaces by their users affected? How does the tension between space and activity, inherent in the operating principles of these new forms of office (Ianeva et al., 2021), shape or even challenge this appropriation process? How, then, can we design spaces that can constitute relevant resources for subjects and their activity? Using a comprehensive, qualitative approach, this thesis first looks at the ABW design process and explores how designers apprehend and integrate the space/activity relationship in the construction of the spatial proposition and its operating mechanisms (first empirical study). It then focuses on ABW users and on how (i) they mobilize these spatial solutions in the course of their activity and (ii) these spatial solutions redefine the contours of their actions, through the prism of the situated acceptance model (Bobillier Chaumon, 2016) (second empirical study). Finally, we present the construction process of a design tool based on the simulation method (Van Belleghem, 2021). Its aim is to investigate transformations in the practices and representations of designers and users in relation to ABWs, enabling the discourse that accompanies these ABWs to be anchored in the space/activity relationship (third empirical study). Our results highlight (i) the way in which the designers of these new workspaces integrate the relationship between space and activity as an object of work when thinking about future spaces, and (ii) the way in which this relationship between space and activity is grasped and reshaped by end users in work situations. The appropriation process of these workspaces is thus to be understood within a dialectical movement in which actions to transform space redefine the contours of cognition and action. Understanding appropriation as a process of (re)articulating the relationship between space and activity, anchored at the heart of subjects' work situations, may therefore prove to be an effective and operative tool for work psychologists and practitioners involved in supporting transformations of physical and temporal work settings.
Vollant, Antoine. "Evaluation et développement de modèles sous-maille pour la simulation des grandes échelles du mélange turbulent basés sur l'estimation optimale et l'apprentissage supervisé." Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAI118/document.
Full text
This work develops subgrid modeling techniques and proposes diagnostic methods for Large Eddy Simulation (LES) of turbulent mixing. Several models from these strategies are presented to illustrate the methods. The principle of LES is to resolve the largest scales of the turbulent flow, responsible for the major transfers, and to model the action of the small scales of the flow on the resolved scales. Formally, this operation amounts to filtering the equations describing turbulent mixing. Subgrid terms then appear and must be modeled to close the equations. In this work, we rely on the classification of subgrid models into two categories: "functional" models, which reproduce the energy transfers between the resolved and modeled scales, and "structural" models, which seek to reproduce the exact subgrid term itself. The first major challenge is to evaluate the performance of subgrid models taking into account both their functional behavior (ability to reproduce the energy transfers) and their structural behavior (ability to reproduce the subgrid term exactly). Diagnostics of subgrid models are enabled by the use of optimal estimator theory, which allows the potential for structural improvement of a model to be evaluated. These methods were initially used for the development of a first family of algebraic subgrid models called DRGM, for "Dynamic Regularized Gradient Model". This family is based on the structural diagnostic of the terms given by the regularization of the gradient model family. According to the tests performed, this new structural family has better functional and structural performance than the original gradient model family. The improved functional performance is due to the vanishing of the inverse energy transfers (backscatter) observed in models of the gradient family, which removes the unstable behavior typically observed for that family. We then propose the use of the optimal estimator directly as a subgrid-scale model. Since the optimal estimator provides the model with the best structural performance for a given set of variables, we looked for the set of variables that optimizes this performance. Since this set of variables is large, we use surrogate functions of the artificial neural network type to estimate the optimal estimator, leading to the "Artificial Neural Network Model" (ANNM). These surrogate functions are built from databases in order to emulate the exact terms needed to determine the optimal estimator. Tests of this model show that it has very good performance for simulation configurations not very far from the database used for learning, but it may fail the test of universality. To overcome this difficulty, we propose a hybrid method using an algebraic model and a surrogate model based on artificial neural networks. This new model family, ACM for "Adaptive Coefficient Model", is based on vector and tensor decompositions of the exact subgrid terms. These decompositions require the calculation of dynamic coefficients, which are modeled by artificial neural networks trained to directly optimize the structural and functional performance of the ACM. These hybrid models combine the universality of algebraic models with the high but very specialized performance of surrogate models, resulting in models that are more universal than the ANNM.
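The central diagnostic tool here, the optimal estimator, is the conditional expectation of the exact subgrid term given a chosen set of variables; it bounds the structural performance of any model built on those variables. The sketch below estimates it from a synthetic database by conditional averaging in bins, one standard way to compute it; the data and variable choice are illustrative assumptions only.

```python
import numpy as np

def optimal_estimator_1d(phi, tau, n_bins=32):
    """Estimate the optimal estimator E[tau | phi] by conditional averaging.

    phi: resolved-scale variable sampled from a (here synthetic) database;
    tau: exact subgrid term. The residual variance around this estimator is
    the irreducible error bounding any model built on phi alone.
    """
    edges = np.linspace(phi.min(), phi.max(), n_bins + 1)
    idx = np.clip(np.digitize(phi, edges) - 1, 0, n_bins - 1)
    est = np.array([tau[idx == b].mean() if np.any(idx == b) else np.nan
                    for b in range(n_bins)])
    return edges, est

rng = np.random.default_rng(1)
phi = rng.normal(size=100_000)
tau = np.tanh(phi) + rng.normal(0, 0.1, phi.size)  # toy "exact" subgrid term
edges, est = optimal_estimator_1d(phi, tau)
# est recovers tanh at the bin centers; the ~0.01 residual variance is the
# irreducible part no model based on phi alone can remove.
```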
Sebai, Samia. "L'impact des fluctuations du prix du pétrole sur l'activité économique : application aux pays de la région MENA." Caen, 2016. http://www.theses.fr/2016CAEN0503.
Full text
The influence of oil price fluctuations has been the subject of several empirical studies; however, there is no consensus on the transmission mechanisms to the economy. The objective of this thesis is to identify the impact of oil price changes on the real economy and on the stock markets of the MENA region countries. For this purpose, we first implement causality and cointegration tests to analyze the interactions between energy prices and macroeconomic variables over the short and long run, respectively. Moreover, by dissociating the effects of increases and decreases in oil prices, we test for potentially asymmetric transmission of shocks to economic variables in oil-exporting and oil-importing MENA countries. In a second step, we analyze the interdependencies between the stock market and the oil market through the estimation of a VAR(1)-GARCH(1,1) model with constant conditional correlation, which takes into account the transmission of shocks to the returns and volatilities of both markets. We conduct both aggregate and sectoral analyses to assess the impact of changes in energy prices on domestic stock indices and to identify the most vulnerable economic sectors. Finally, the analysis of correlations between the stock market and the oil market provides useful information on portfolio diversification in the presence of oil price fluctuation risk.
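A common way to estimate such a constant-conditional-correlation specification, sketched below on toy data, is to fit the VAR(1) mean equation, fit univariate GARCH(1,1) models to its residuals, and compute the correlation of the standardized residuals. This Bollerslev-style construction is illustrative and not necessarily the exact estimator used in the thesis.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR
from arch import arch_model

rng = np.random.default_rng(2)
# Toy weekly returns for (oil, stock index); real data would replace this.
data = pd.DataFrame(rng.normal(0, 1, (500, 2)), columns=["oil", "stock"])

# Step 1: VAR(1) captures mean spillovers between the two markets.
var_res = VAR(data).fit(1)
resid = var_res.resid

# Step 2: univariate GARCH(1,1) on each residual series (volatility dynamics).
std_resid = {}
for col in resid.columns:
    g = arch_model(resid[col], vol="GARCH", p=1, q=1).fit(disp="off")
    std_resid[col] = resid[col] / g.conditional_volatility

# Step 3: constant conditional correlation from the standardized residuals.
ccc = np.corrcoef(std_resid["oil"], std_resid["stock"])[0, 1]
print(f"constant conditional correlation: {ccc:.3f}")
```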
Azouaou, Faiçal. "Modèles et outils d'annotations pour une mémoire personnelle de l'enseignant." Université Joseph Fourier (Grenoble), 2006. http://tel.archives-ouvertes.fr/tel-00118602.
Full text
Within the technology-enhanced learning research area, this thesis aims at defining and proposing to teachers a computerized memory as a personal knowledge management tool. This memory is based on the annotations that teachers make on their pedagogical documents. The proposed memory extends teachers' cognitive capacities by assisting them unobtrusively in managing the knowledge necessary for carrying out their activities. Taking into account the specificity of the teaching activity (implied knowledge, context of the activity) in the memory models enables us to obtain both a memory dedicated to the teacher and a teaching context-aware memory. Two versions of the tool were developed: a portable version and a web version (implemented by the company Pentila) which can be integrated into an LCMS.
Ballutaud, Marine. "L'utilisation d'un cadre de travail mécaniste pour améliorer les outils basés sur les isotopes stables en écologie trophique." Electronic Thesis or Diss., Université de Lille (2022-....), 2022. http://www.theses.fr/2022ULILR058.
Full text
The impact of global change on marine ecosystems is unprecedented. In order to preserve ecosystems, trophic metrics are used as indicators of their functioning. Trophic interactions are commonly deduced from stable isotopes by inference. However, isotopic inferences are based on the assumption of isotopic equilibrium. This PhD thesis aims to highlight the need to overcome this assumption by developing dynamic isotope models based on the isotope turnover rate λ. The approach developed is to build a mechanistic framework via a dynamic model, creating a virtual experiment that allows us to assess the inferences. First, the development of a dynamic mixing model improved individual diet estimates, which are biased by 50 % with a static snapshot approach. This bias in diet estimates decreases to 15 % once λ is taken into account with a static approach integrating isotopic values. For an unbiased and dynamic estimation, the dynamic mixing model must be applied with an accurate and dynamic estimation of λ. Second, the implementation of isotope dynamics in ecosystem models confirmed, at the community level, that nitrogen isotopes do reflect the average food web structure in a case of opportunistic predation. However, a difference of one trophic level was observed for some top predators between the estimates from diet matrices and those from nitrogen. The integration of isotope dynamics mechanisms into inferences is a major advance, since it modifies our insight into trophic interactions in marine ecosystems.
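The dynamic ingredient is a first-order incorporation model in which a consumer's isotopic value relaxes toward its diet-driven equilibrium at the turnover rate λ. The sketch below integrates this relaxation for a two-source diet; the signatures, diet proportions, discrimination factor, and λ are made-up values for illustration.

```python
import numpy as np

def consumer_isotope(delta0, sources, diet, lam, tdf, t, dt=1.0):
    """First-order isotope dynamics: d(delta)/dt = lam * (delta_eq - delta).

    delta_eq is the diet-weighted mixture of source signatures plus a
    trophic discrimination factor (tdf). A static mixing model assumes
    delta == delta_eq; the turnover rate lam relaxes that assumption.
    """
    delta_eq = np.dot(diet, sources) + tdf
    delta = delta0
    for _ in range(int(t / dt)):        # explicit Euler integration
        delta += dt * lam * (delta_eq - delta)
    return delta

# Toy values: two prey sources after a recent diet switch, 60 days later
# the consumer has not yet reached its equilibrium value of 13.6.
print(consumer_isotope(delta0=8.0, sources=np.array([6.0, 12.0]),
                       diet=np.array([0.3, 0.7]), lam=0.02, tdf=3.4, t=60))
```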
Poirier, Canelle. "Modèles statistiques pour les systèmes d'aide à la décision basés sur la réutilisation des données massives en santé : application à la surveillance syndromique en santé publique." Thesis, Rennes 1, 2019. http://www.theses.fr/2019REN1B019.
Full text
Over the past few years, the Big Data concept has been widely developed. In order to analyse and explore all this data, it was necessary to develop new methods and technologies. Today, Big Data also exists in the health sector. Hospitals in particular are involved in data production through the adoption of electronic health records. The objective of this thesis was to develop statistical methods reusing these data in order to contribute to syndromic surveillance and to provide decision-making support. This study has four major axes. First, we showed that hospital Big Data were highly correlated with signals from traditional surveillance networks. Second, we showed that hospital data made it possible to obtain more accurate real-time estimates than web data, and that SVM and Elastic Net models had similar performance. Then, we applied methods developed in the United States, reusing hospital data, web data (Google and Twitter) and climatic data, to predict influenza incidence rates for all French regions up to 2 weeks ahead. Finally, the methods developed were applied to 3-week forecasts of gastroenteritis cases at the national, regional and hospital levels.
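As a concrete illustration of the forecasting setup, the sketch below fits an Elastic Net on toy weekly predictors (hospital-, web- and climate-like signals) to predict incidence two weeks ahead; the features and hyperparameters are assumptions, not those tuned in the thesis.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(3)
n_weeks = 200
# Toy predictors: hospital visit counts, web search volume, temperature.
X = rng.normal(size=(n_weeks, 3))
incidence = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.2, n_weeks)

horizon = 2                                   # predict 2 weeks ahead
X_train, y_train = X[:-horizon], incidence[horizon:]

model = ElasticNet(alpha=0.1, l1_ratio=0.5)   # blends L1 and L2 penalties
model.fit(X_train, y_train)
print(model.predict(X[-1:]))                  # incidence forecast in 2 weeks
```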
Gueuder, Maxime. "Quatre essais sur les inégalités et l'instabilité macroéconomique." Thesis, Aix-Marseille, 2017. http://www.theses.fr/2017AIXM0534.
Full text
This PhD dissertation focuses on wealth and wage inequality and the macro-economy. In a first chapter, I write and run a small macro agent-based model (M-ABM) in which I study the resulting distribution of wealth among households. I show that this model generates fat-tailed distributions of wealth in the household sector, as empirically observed in advanced economies. In a second chapter, I extend this model to study the macroeconomics of interpersonal and institutional discrimination against racial minorities. When discrimination is at work, racial disparities in income and wealth arise. The effect of the abolition of institutional discrimination is path-dependent: the more organized the economy is when this institutional change occurs, the more time it takes to get back to the counterfactual situation in which no institutional discrimination was set up in the first place. In a third chapter, I study the evolution of the difference in median log annual earnings between Blacks and Whites in the US between 1960 and 2015, focusing on the 2008 crisis. I control for selection arising from racial differentials in the institutionalised population, and find that the unconditional racial wage gap reached a maximum in 2012. Controlling for age and education, I obtain the same result. Using unconditional quantile regressions, I show that the unexplained part of the unconditional racial wage gap did not increase during the crisis. Finally, in an afterword, I use metadata from RePEc to show that the share of economics papers published in 13 of the 30 "top" journals that contain "crisis" in their titles and/or abstracts increased significantly in 2008.
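The thesis's M-ABM is not reproduced here, but a much simpler kinetic wealth-exchange model with heterogeneous saving propensities, sketched below, illustrates how random pairwise exchanges alone can generate a heavy upper tail in the wealth distribution; all parameters are illustrative.

```python
import numpy as np

def kinetic_exchange(n_agents=1_000, n_steps=100_000, seed=4):
    """Toy kinetic wealth-exchange model (not the thesis's M-ABM): random
    pairs of agents pool their non-saved wealth and split it randomly.
    Heterogeneous saving propensities are known to produce fat,
    Pareto-like upper tails in the stationary wealth distribution.
    """
    rng = np.random.default_rng(seed)
    w = np.ones(n_agents)                      # equal initial wealth
    s = rng.uniform(0, 1, n_agents)            # fixed saving propensities
    for _ in range(n_steps):
        i, j = rng.choice(n_agents, size=2, replace=False)
        pool = (1 - s[i]) * w[i] + (1 - s[j]) * w[j]
        eps = rng.uniform()
        w[i] = s[i] * w[i] + eps * pool
        w[j] = s[j] * w[j] + (1 - eps) * pool  # total wealth is conserved
    return w

w = kinetic_exchange()
top1 = np.sort(w)[-len(w) // 100:].sum() / w.sum()
print(f"wealth share of top 1%: {top1:.1%}")   # far above 1% -> heavy tail
```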
Métral, Jérôme. "Modélisation et simulation numérique de l'écoulement d'un plasma atmosphérique pour l'étude de l'activité électrique des plasmas sur avion." Châtenay-Malabry, Ecole centrale de Paris, 2002. http://www.theses.fr/2002ECAP0868.
Full text
An ionized gas (or plasma) has the ability to absorb or reflect electromagnetic (radar) waves if its ionization rate is high enough, which is particularly interesting for aeronautics. This study aims at predicting the electric and energetic characteristics of a weakly ionized air plasma in an atmospheric-pressure flow. The plasma is described by a two-temperature model derived from the non-equilibrium description of plasmas. The plasma flow is then described by a two-temperature hydrodynamic system coupled with a collisional model (energy exchange rates) and a kinetic model (chemical reactions). An algorithm was built to simulate the plasma flow in axisymmetric geometry. The algorithm is a 2D Lagrange + Projection scheme, whose projection step was adapted to multi-component advection using a second-order, non-oscillatory, two-dimensional scheme. This algorithm allows the simulation of experiments on atmospheric-pressure plasmas and thus the validation of the model parameters. In a second part, we study the Perfectly Matched Layer (PML), a boundary condition used to simulate wave propagation in open domains. This method is particularly efficient for electromagnetic problems, and we want to extend this approach to aeroacoustics problems (linearized Euler equations). We propose two solutions: a practical approach to avoid numerical oscillations of the solution, and a more general approach consisting of a new absorbing layer formulation which leads to well-posed problems.
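The collisional coupling at the heart of a two-temperature model can be illustrated by a toy relaxation of the electron temperature toward the heavy-particle temperature. The sketch below is a deliberately simplified ODE integration with made-up coefficients, not the thesis's hydrodynamic scheme.

```python
def relax_two_temperature(Te0, Tg0, nu=1e9, ratio=1e-4, t_end=5e-5, dt=1e-8):
    """Toy two-temperature collisional relaxation (illustrative only):
    dTe/dt = -nu * ratio * (Te - Tg),  dTg/dt = +c * nu * ratio * (Te - Tg),
    where nu is a collision frequency and ratio the mean energy fraction
    transferred per collision (small, since electrons are much lighter).
    All coefficient values here are made up for the example.
    """
    Te, Tg = Te0, Tg0
    c = 0.01                      # heat-capacity ratio, made-up value
    for _ in range(int(t_end / dt)):   # explicit Euler integration
        ex = nu * ratio * (Te - Tg)
        Te -= dt * ex
        Tg += dt * ex * c
    return Te, Tg

# Electrons start hot (30 000 K) and relax toward the cold gas (1 000 K).
print(relax_two_temperature(Te0=30_000.0, Tg0=1_000.0))
```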
Rocha, Loures de Freitas Eduardo. "Surveillance et diagnostic des phases transitoires des systèmes hybrides basés sur l'abstraction des dynamiques continues par réseau de Pétri temporel flou." Toulouse 3, 2005. http://www.theses.fr/2006TOU30022.
Full text
Supervision and monitoring systems play a major role in the safety of an industrial plant and the availability of its equipment. Forewarning the operator as early as possible about deviations from the nominal process behaviour is fundamental to carrying out preventive and corrective actions. Some kinds of industrial plants, such as chemical processes and batch systems, pose significant challenges to monitoring and supervision systems because of their hybrid nature (discrete and continuous interactions), the number of active variables, and the complexity of their behavioural relations. This complexity becomes more pronounced in systems characterized by numerous operating-mode changes, leading to numerous transitory phases. The monitoring of these transitory phases is a delicate issue. The large number of variables to be taken into account makes reasoning about and interpreting the process behaviour difficult. In case of fault, diagnosis then becomes a complex task. Marginal deviations of the process behaviour may indicate a dysfunction that degenerates slowly, or may be caused by operator error or a piloting system fault. To cope with this complexity, we propose a hierarchical monitoring and control system complemented by the procedural decomposition proposed by the ISA88 standard, defining from the upper level the recipes, procedures, operations, phases and tasks. Our monitoring and diagnosis approach is located at two high levels of this procedural hierarchy: i) at the operation level, particularly during the transition period between operating modes, where the influence relations between the variables are weakly known or unknown; ii) at the phase level, where the influence relations are known over a period of time belonging to the operating-mode transition horizon...
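A core ingredient of the fuzzy temporal Petri net is the replacement of a crisp firing interval by a fuzzy date, yielding a graded possibility that an observed event is on time, early, or late. The sketch below evaluates a trapezoidal possibility distribution for a hypothetical transition; the time values are invented for illustration.

```python
def trapezoid(t, a, b, c, d):
    """Trapezoidal possibility distribution pi(t) for a fuzzy firing date:
    fully possible on [b, c], linearly rising on [a, b], falling on [c, d],
    and impossible outside [a, d]."""
    if t <= a or t >= d:
        return 0.0
    if b <= t <= c:
        return 1.0
    return (t - a) / (b - a) if t < b else (d - t) / (d - c)

# Hypothetical transition "heating phase complete": expected 50-70 s,
# tolerated between 40 and 85 s. Intermediate values flag a drift.
for t in (35, 45, 60, 80, 90):
    print(t, "s -> possibility", round(trapezoid(t, 40, 50, 70, 85), 2))
```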
Nicolazzi, Céline. "Vectorisation du ganciclovir par des cyclodextrines : études des complexes supramoléculaires et de l'activité antivirale sur des modèles cellulaires et murin infectés par un cytomegalovirus." Nancy 1, 2000. http://www.theses.fr/2000NAN12015.
Full text
Antiviral chemotherapy has grown strongly in recent decades, and new drugs are currently available for the treatment of viral diseases. Nevertheless, problems such as the toxicity of treatments and the emergence of resistant viral strains still occur. In the case of Human Immunodeficiency Virus and herpesvirus infections, including human cytomegalovirus (HCMV), problems related to chronic treatment are added. In order to limit these drawbacks, we studied vectors able to carry antiviral molecules and thus improve their pharmacological properties. The vectors used were cyclodextrins (Cds), neutral saccharides forming a cage in which a wide range of drugs can be encapsulated and thereby have their absorption and therapeutic efficiency improved. We studied, through Nuclear Magnetic Resonance and spectrometry, the ability of three native Cds (α-, β- and γ-Cds) to form inclusion complexes with ganciclovir (GCV), an anti-HCMV agent. We tested the influence of these Cds on the in vitro antiviral activity of GCV against various HCMV strains. Only β-Cd was able to form a complex with GCV, and we demonstrated that it could improve the in vitro antiviral activity of GCV. In order to test the influence of the complexation of anti-HCMV molecules on their in vivo antiviral activity, we developed a murine model of cytomegalovirus infection. We then determined the experimental conditions of the infection and evaluated the infectivity of the virus in mouse organs over time. In this in vivo model, we carried out preliminary studies on the antiviral activity of GCV complexed with β-Cd, which revealed an improvement in its efficiency.