Dissertations on the topic "Corrections basées sur des modèles"
Cite a source in APA, MLA, Chicago, Harvard and many other styles
Browse the top 39 dissertations (master's or doctoral theses) on the topic "Corrections basées sur des modèles".
Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a .pdf and read its abstract online, when one is available in the metadata.
Browse dissertations from many scientific disciplines and compile a correct bibliography.
Kady, Charbel. "Managing Business Process Continuity and Integrity Using Pattern-Based Corrections". Electronic Thesis or Diss., IMT Mines Alès, 2024. http://www.theses.fr/2024EMAL0014.
Full text
This thesis presents an approach to managing deviations in Business Process Model and Notation (BPMN) workflows. The research addresses the critical need for effective deviation management by integrating a comprehensive framework that includes pattern-based deviation correction and an enriched State Token mechanism. The approach is tested through a case study in the apiculture domain, demonstrating the practical applicability and effectiveness of the proposed method. Key contributions include the development of a library of patterns, the characterization of BPMN elements, and a mechanism to support decision-making when addressing deviations. The results show that the approach can efficiently correct deviations, ensuring workflow continuity and integrity.
Peltier, Mikaël. "Techniques de transformation de modèles basées sur la méta-modélisation". Nantes, 2003. http://www.theses.fr/2003NANT2057.
Full text
Modern software development requirements led the OMG to define a model-driven architecture based on modelling and meta-modelling technologies. This evolution raised a new problem: model transformation. Growing industrial interest in this activity resulted in a request for proposals in April 2002, whose objective was to standardise a transformation system. In this document, we study model transformation concretely and pragmatically. After presenting existing systems, we turn to a domain-specific language that allows transformations to be expressed in a formal, unambiguous and executable way. The proposed language can constitute the heart of a unified model transformation system. The results of the standardisation work allow us to confirm the proposed choices and possibly suggest extensions or improvements.
Belbachir, Faiza. "Approches basées sur les modèles de langue pour la recherche d'opinions". Toulouse 3, 2014. http://thesesups.ups-tlse.fr/2341/.
Full text
The evolution of the World Wide Web has brought us various forms of data: factual data, product reviews, arguments, discussions, news, temporal data, blog data, etc. Blogs are considered one of the best vehicles for expressing one's opinions about anything from a political subject to a product. These opinions become all the more important when they influence government policies or companies' marketing agendas, given their huge presence on the web. It is therefore equally important to have information systems that can process this kind of information. In this thesis, we propose approaches that distinguish between factual and opinionated documents, with the purpose of further processing opinionated information. Most current opinion-finding approaches either rely on lexicons of subjective terms or exploit machine learning techniques. In this thesis, we are interested in both types of approaches and mitigate some of their limits. Our contribution revolves around three main aspects of opinion mining. First, we propose a lexical approach to the opinion-finding task. We exploit various publicly available subjective resources, such as IMDB, ROTTEN, CHESLY and MPQA, that are considered opinionated data collections. The idea is that if a document is similar to these, it is most likely an opinionated document. We rely on language modeling techniques for this purpose: we model both the test document (i.e., the document whose subjectivity is to be evaluated) and the source of opinion with language models, and measure the similarity between the two models. The higher the similarity score, the more subjective the document. Our second contribution to opinion detection is based on machine learning. For that purpose, we propose and evaluate various features, such as emotivity, subjectivity, addressing and reflexivity, and report results comparing them with current approaches. Our third contribution concerns opinion polarity, which determines whether a subjective document expresses a positive or negative opinion on a given topic. We conclude that the polarity of a term can depend on the domain in which it is used.
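The language-model similarity idea in the abstract above can be sketched in a few lines. This is an illustrative reconstruction, not the thesis's implementation: the toy corpora, the Laplace smoothing and the negative-KL score are all assumptions.

```python
# Sketch of opinion finding via language-model similarity: build smoothed
# unigram models and score a document against an "opinion" reference model.
import math
from collections import Counter

def unigram_lm(tokens, vocab, alpha=1.0):
    """Laplace-smoothed unigram language model over a fixed vocabulary."""
    counts = Counter(tokens)
    total = len(tokens) + alpha * len(vocab)
    return {w: (counts[w] + alpha) / total for w in vocab}

def neg_kl(doc_lm, ref_lm):
    """Similarity = negative KL divergence D(doc || ref); higher is closer."""
    return -sum(p * math.log(p / ref_lm[w]) for w, p in doc_lm.items())

# Made-up reference corpora standing in for subjective/factual collections.
opinion_corpus = "great terrible love hate awful wonderful boring amazing".split()
factual_corpus = "the report states figures indicate data show results".split()
doc = "i love this amazing wonderful movie".split()

vocab = set(opinion_corpus) | set(factual_corpus) | set(doc)
doc_lm = unigram_lm(doc, vocab)
sim_opinion = neg_kl(doc_lm, unigram_lm(opinion_corpus, vocab))
sim_factual = neg_kl(doc_lm, unigram_lm(factual_corpus, vocab))
print(sim_opinion > sim_factual)  # the opinionated model should be closer
```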
Albarello, Nicolas. "Etudes comparatives basées sur les modèles en phase de conception d'architectures de systèmes". Phd thesis, Ecole Centrale Paris, 2012. http://tel.archives-ouvertes.fr/tel-00879858.
Full text
Allouche, Benyamine. "Modélisation et commande des robots : nouvelles approches basées sur les modèles Takagi-Sugeno". Thesis, Valenciennes, 2016. http://www.theses.fr/2016VALE0021/document.
Full text
Every year more than 5 million people worldwide become hemiplegic as a direct consequence of stroke. This neurological deficiency often leads to a partial or total loss of the ability to stand up and/or walk. In order to propose new supporting solutions lying between the wheelchair and the walker, this thesis falls within the ANR TECSAN project VHIPOD, a "self-balanced transporter for disabled persons with sit-to-stand function". In this context, the research addresses two key issues of the project: sit-to-stand (STS) assistance for hemiplegic people, and their mobility through a two-wheeled self-balanced solution. These issues are addressed from a robotics point of view while focusing on a key question: can we extend the use of the Takagi-Sugeno (TS) approach to the control of complex systems? First, the mobility of disabled persons was treated on the basis of a self-balanced solution. Control laws based on the standard and descriptor TS approaches were proposed to stabilise the gyropod in particular situations, such as moving along a slope or crossing small steps. The results led to the design of a two-wheeled transporter that is potentially able to deal with steps. They also highlighted the main challenge in using the TS approach: the conservatism of the LMI (Linear Matrix Inequality) constraints. Secondly, a test bench for STS assistance based on a parallel kinematic manipulator (PKM) was designed. This kind of manipulator, characterized by several closed kinematic chains, often has a complex dynamical model (a set of ordinary differential equations, ODEs), and applying TS-based control laws is often doomed to failure given the large number of non-linear terms in the model. To overcome this problem, a new modeling approach was proposed. Starting from a particular set of coordinates, the principle of virtual power was used to generate a dynamical model based on differential algebraic equations (DAEs). This approach leads to a quasi-LPV model in which the only varying parameters are the Lagrange multipliers derived from the constraint equations of the DAE model. The results were validated in simulation on a 2-DOF (degrees of freedom) parallel robot (Biglide) and a 3-DOF manipulator (Triglide) designed for STS assistance.
Coq, Guilhem. "Utilisation d'approches probabilistes basées sur les critères entropiques pour la recherche d'information sur supports multimédia". Phd thesis, Université de Poitiers, 2008. http://tel.archives-ouvertes.fr/tel-00367568.
Full text
The main motivation of this thesis is to justify the use of such a criterion for a model selection problem typically arising from a signal processing context. The expected justification must itself rest on a solid mathematical foundation.
We thus address the classical problem of determining the order of an autoregression. Gaussian regression, which makes it possible to detect the main harmonics of a noisy signal, is also addressed. For these problems, we give a criterion whose use is justified by the minimisation of the cost resulting from the obtained estimate. Multiple Markov chains model most discrete signals, such as sequences of letters or the grey levels of an image. We study the problem of determining the order of such a chain. Continuing from this problem, we consider the seemingly distant one of estimating a density with a histogram. In both domains, we justify the use of a criterion through coding notions, to which we apply a simple form of the Minimum Description Length principle.
Across these different application domains, we also strive to present alternative methods of using information criteria. These so-called comparative methods are less complex to use than the usual ones, while still allowing a precise description of the model.
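As a hedged illustration of order selection by a penalized information criterion, the sketch below uses BIC, one common instance of the family the abstract discusses (the thesis develops its own entropic criteria); the simulated series and order range are assumptions.

```python
# Choose an autoregression order by penalized likelihood:
# BIC(p) = n*log(RSS/n) + p*log(n), minimized over candidate orders p.
import numpy as np

def fit_ar_rss(x, p):
    """Least-squares fit of an AR(p) model; returns the residual sum of squares."""
    X = np.column_stack([x[p - i - 1:len(x) - i - 1] for i in range(p)])
    y = x[p:]
    coef, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.sum((y - X @ coef) ** 2))

def bic(x, p):
    n = len(x) - p
    return n * np.log(fit_ar_rss(x, p) / n) + p * np.log(n)

rng = np.random.default_rng(0)
# Simulate an AR(2) process: x_t = 0.6 x_{t-1} - 0.3 x_{t-2} + noise
x = np.zeros(500)
for t in range(2, 500):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.normal()

best = min(range(1, 6), key=lambda p: bic(x, p))
print(best)  # BIC typically recovers an order near the true value 2
```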
Kucerova, Anna. "Identification des paramètres des modèles mécaniques non-linéaires en utilisant des méthodes basées sur intelligence artificielle". Phd thesis, École normale supérieure de Cachan - ENS Cachan, 2007. http://tel.archives-ouvertes.fr/tel-00256025.
Full text
Muroor, Nadumane Ajay Krishna. "Modèles et vérification pour la composition et la reconfiguration d'applications basées sur le web des objets". Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALM067.
Full text
Internet of Things (IoT) applications are built by interconnecting everyday objects over a network. These objects or devices sense the environment around them, and their network capabilities allow them to communicate with other objects to perform utilitarian tasks. One popular way to build IoT applications in the consumer domain is to combine different objects using Event-Condition-Action (ECA) rules, typically of the form IF something-happens THEN do-something. The Web of Things (WoT) is a set of standards and principles that integrate architectural styles and capabilities of the web into the IoT. Even though the WoT architecture coupled with ECA rules simplifies the building of IoT applications to a large extent, the dynamic, reactive and heterogeneous nature of IoT systems still makes it challenging for end-users to develop advanced applications in a simple yet correct fashion. The broad objective of this work is to leverage formal methods to give end-users of IoT applications a certain level of design-time guarantee that the designed application will behave as intended upon deployment. In this context, we propose a formal development framework based on the WoT. The objects are described using a behavioural model derived from the Thing Description specification of the WoT. Applications are then designed not only by specifying individual ECA rules, but also by composing these rules using a composition language that enables users to build more expressive automation scenarios. The description of the objects and their composition are encoded in a formal specification from which the complete behaviour of the application is identified. To guarantee correct design, this work proposes a set of generic and application-specific properties that can be validated on the complete behaviour before deployment. Deployed applications may then be reconfigured during their lifecycle; the work supports reconfiguration by specifying reconfiguration properties that allow one to qualitatively compare the behaviour of the new configuration with the original one. All the proposals are implemented by extending the Mozilla WebThings platform. A new set of user interfaces is built to support the composition of rules and reconfiguration, and a model transformation component, which transforms WoT models into formal models, is integrated with a formal verification toolbox to enable automation. Finally, a deployment engine is built by extending the WebThings APIs; it directs the deployment of applications and reconfigurations, respecting their composition semantics.
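A minimal sketch of the ECA rule pattern the abstract describes; the rule names, the state dictionary and the first-match conflict policy are illustrative assumptions, not the thesis's composition language.

```python
# Event-Condition-Action rules: IF event matches and condition holds THEN act.
class Rule:
    def __init__(self, event, condition, action):
        self.event, self.condition, self.action = event, condition, action

    def fire(self, event, state):
        """Run the action when the event matches and the condition holds."""
        if event == self.event and self.condition(state):
            self.action(state)
            return True
        return False

state = {"motion": False, "lamp": "off", "alert": None}

rules = [
    Rule("motion-detected", lambda s: s["lamp"] == "off",
         lambda s: s.update(lamp="on")),
    # Composed behaviour: a second rule reacts to the state the first one set.
    Rule("motion-detected", lambda s: s["lamp"] == "on",
         lambda s: s.update(alert="lamp already on")),
]

for event in ["motion-detected", "motion-detected"]:
    for rule in rules:
        if rule.fire(event, state):
            break  # first matching rule wins, a common ECA conflict policy

print(state)  # → {'motion': False, 'lamp': 'on', 'alert': 'lamp already on'}
```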
Ahmad, Alexandre. "Animation de structures déformables et modélisation des interactions avec un fluide basées sur des modèles physiques". Limoges, 2007. https://aurore.unilim.fr/theses/nxfile/default/4f73d6f8-b8f0-4794-924b-8f827db44689/blobholder:0/2007LIMO4046.pdf.
Full text
The main focus of the presented work is the interaction of liquids with thin shells, such as sheets of paper, fish fins and even clothes. Even though such interactions are an everyday scenario, little research in the computer graphics community has investigated this phenomenon. I therefore propose an algorithm which resolves contacts between Lagrangian fluids and deformable thin shells. Visual artefacts may appear during the surface extraction procedure due to the proximity of the fluids and the shells; to avoid them, I propose a visibility algorithm which projects the undesired overlapping volume of liquid onto the thin shells' surface. In addition, an intuitive parametrisation model for defining heterogeneous friction coefficients on a surface is presented. I also propose two optimisation methods. The first reduces the well-known dependency between numerical stability and the timestep when using explicit schemes by filtering particles' velocities; this reduction is quantified using frequency analysis. The second is a unified dynamic spatial acceleration model, composed of a hierarchical hash table data structure, that speeds up the particle neighbourhood query and the broad phase of collision detection. The proposed unified model is also used to efficiently prune unnecessary computations during the surface extraction procedure.
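The hashed-grid acceleration of particle neighbourhood queries can be sketched as follows. This is a generic flat 2D grid for illustration (the thesis uses a hierarchical hash table), with made-up particle data and cell size.

```python
# Hash particles into uniform grid cells so a neighbour query only scans
# the 3x3 block of cells around the query particle instead of all particles.
from collections import defaultdict

def build_grid(particles, h):
    """Hash each particle index into the cell of side h that contains it."""
    grid = defaultdict(list)
    for i, (x, y) in enumerate(particles):
        grid[(int(x // h), int(y // h))].append(i)
    return grid

def neighbours(grid, particles, i, h):
    """Return particles within distance h of particle i, scanning 3x3 cells."""
    x, y = particles[i]
    cx, cy = int(x // h), int(y // h)
    out = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for j in grid.get((cx + dx, cy + dy), []):
                px, py = particles[j]
                if j != i and (px - x) ** 2 + (py - y) ** 2 <= h * h:
                    out.append(j)
    return out

pts = [(0.1, 0.1), (0.2, 0.15), (0.9, 0.9), (2.5, 2.5)]
grid = build_grid(pts, h=0.5)
print(neighbours(grid, pts, 0, 0.5))  # → [1] : only the nearby particle
```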
Ksontini, Mohamed. "Mise en oeuvre de lois de commande basées sur des multi-modèles continus de type Takagi-Sugeno". Valenciennes, 2005. http://ged.univ-valenciennes.fr/nuxeo/site/esupversions/010633b8-054b-44a7-a83b-2bf46a1dfd68.
Full text
This document deals with the stabilisation of multi-model approaches. We attempt to link the adaptive multi-model approach and the TS approach. A new control architecture for the adaptive approach is proposed, allowing the stabilisation of the whole closed loop. A robustness comparison on the classical inverted pendulum illustrates the features of this approach. In the second part, we propose new stabilisation conditions for TS fuzzy models that reduce the number of LMIs and/or the conservatism of the conditions. With such conditions, we obtain simple LMI conditions for systems with a large number of rules, and we also improve results from the literature. These results are mainly based on the elimination lemma. A first illustration of these conditions is carried out on a TORA system. Finally, applying these conditions to a liquid-mixing system, useful for dimensioning a prototype project, gave good results.
Elleuch, Wajdi. "Mobilité des sessions dans les communications multimédias en mode-conférence basées sur le protocole SIP". Thèse, Université de Sherbrooke, 2011. http://hdl.handle.net/11143/5799.
Full text
Leveau, Valentin. "Représentations d'images basées sur un principe de voisins partagés pour la classification fine". Thesis, Montpellier, 2016. http://www.theses.fr/2016MONTT257/document.
Full text
This thesis focuses on fine-grained classification, a particular classification task where classes may be visually distinguishable only from subtle localized details and where the background often acts as a source of noise. This work is mainly motivated by the need for finer image representations that address such tasks by encoding enough localized discriminant information, such as the spatial arrangement of local features. To this end, the main research line we investigate relies on spatially localized similarities between images, computed with efficient approximate nearest neighbor search techniques and localized parametric geometry. The main originality of our approach is to embed such spatially consistent localized similarities into a high-dimensional global image representation that preserves the spatial arrangement of fine-grained visual patterns (contrary to traditional encoding methods such as BoW, Fisher or VLAD vectors). In a nutshell, this is done by considering all raw patches of the training set as a large visual vocabulary and by explicitly encoding their similarity to the query image. In more detail, the first contribution of this work is a classification scheme based on a spatially consistent k-nn classifier that pools similarity scores between local features of the query and those of the similar retrieved images in the vocabulary set. As this set can contain a very large number of local descriptors, we scale up our approach using approximate k-nearest neighbor search methods. The main contribution of this work is then a new aggregation-based explicit embedding derived from a newly introduced match kernel based on shared nearest neighbors of localized feature vectors combined with local geometric constraints. The originality of this new similarity-based representation space is that it directly integrates spatially localized geometric information into the aggregation process. Finally, as a third contribution, we propose a strategy that drastically reduces, by up to two orders of magnitude, the high dimensionality of the previously introduced over-complete image representation while still providing competitive classification performance. We validated our approaches through a series of experiments on several classification tasks involving rigid objects, such as FlickrsLogos32 or Vehicles29, as well as on tasks involving finer visual knowledge, such as FGVC-Aircrafts, Oxford-Flower102 or CUB-Birds200. We also demonstrated significant results on fine-grained audio classification tasks, such as the LifeCLEF 2015 bird species identification challenge, by proposing a temporal extension of our image representation. Finally, we showed that our dimensionality reduction technique, used on top of our representation, results in a highly interpretable visual vocabulary composed of the most representative image regions for the different visual concepts of the training base.
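The shared-nearest-neighbour similarity underlying such representations can be illustrated minimally. The toy "vocabulary" of patches, the exact (non-approximate) k-NN search and the 2D vectors below are all assumptions for the sketch; the thesis additionally imposes local geometric constraints.

```python
# Score two items by the overlap of their k-nearest-neighbour sets in a
# shared reference database ("vocabulary"): similar items retrieve the
# same neighbours, dissimilar ones do not.
def knn_ids(query, database, k):
    """Indices of the k nearest database vectors (exact search, squared L2)."""
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    order = sorted(range(len(database)), key=lambda i: dist(query, database[i]))
    return set(order[:k])

database = [(0, 0), (1, 0), (0, 1), (5, 5), (6, 5), (5, 6)]  # "vocabulary" patches
img_a, img_b, img_c = (0.2, 0.1), (0.1, 0.3), (5.2, 5.1)

na, nb, nc = (knn_ids(v, database, 3) for v in (img_a, img_b, img_c))
# Shared-neighbour similarity: size of the intersection of neighbour sets.
print(len(na & nb), len(na & nc))  # a and b share all 3 neighbours; a and c none
```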
Cragnolini, Tristan. "Prédire la structure des ARN et la flexibilité des ARN par des simulations basées sur des modèles gros grains". Paris 7, 2014. http://www.theses.fr/2014PA077163.
Full text
In contrast to proteins, there are relatively few experimental and computational studies on RNAs. This is likely to change, however, following the discovery that RNA molecules fulfil a considerable diversity of biological tasks, including, aside from their encoding and translational activity, enzymatic and regulatory functions. Despite the simplicity of their four-letter alphabet, RNAs are able to fold into a wide variety of tertiary structures, and dynamic conformational ensembles also appear essential for understanding their functions. In spite of constant experimental efforts and theoretical developments, the gap between sequences and 3D structures is increasing, and our knowledge of RNA flexibility is still limited at an atomic level of detail. In this thesis, I present improvements to the HiRE-RNA model and folding simulations performed with it. After presenting the computational methods used to sample RNA energy landscapes and the experimental methods providing information about RNA structures, I present the RNA topologies and the structural data I used to improve the model and to study RNA folding. The improvements of HiRE-RNA in versions 2 and 3 are then described, as well as the simulations performed with each version of the model.
Alwakil, Ahmed Diaaeldin. "Illusions thermiques basées sur les métamatériaux et les métasurfaces : conduction et rayonnement". Thesis, Aix-Marseille, 2018. http://www.theses.fr/2018AIXM0209/document.
Full text
Mimetism, camouflage and invisibility have motivated numerous efforts in the last decade, efforts now extended to metasurfaces. This PhD work fits this international context: it first focused on inverse problems in heat conduction before addressing thermal radiation, metasurfaces and field transformation. After generalizing mimetism techniques to heat diffusion, we solved the associated inverse problem, which consists of camouflaging given objects, that is, objects whose shape or conductivity is chosen beforehand. The results allowed us to single out the class of transformations which preserve the physical parameters, giving more pragmatism to the field of mimetism. We then addressed thermal radiation and proved for the first time that mimetism effects can also be controlled in this field, on the basis of the fluctuation-dissipation theorem. In a second step, we built an original technique able to predict the thermal radiation of objects of arbitrary shape. This technique involves inhomogeneous, anisotropic, chiral and nonlocal metasurfaces. We also show how to take further advantage of metasurfaces in order to replace bulk mimetism cloaks. We believe this technique gives a further push to the field, although the mimetism efficiency now depends on the illumination conditions. Similar techniques are further developed to allow a practical use of discontinuous space transformations. Finally, field transformation is introduced to complete these results.
Mahboubi, Amal Kheira. "Méthodes d'extraction, de suivi temporel et de caractérisation des objets dans les vidéos basées sur des modèles polygonaux et triangulés". Nantes, 2003. http://www.theses.fr/2003NANT2036.
Full text
Leboucher, Julien. "Développement et évaluation de méthodes d'estimation des masses segmentaires basées sur des données géométriques et sur les forces externes : comparaison de modèles anthropométriques et géométriques". Valenciennes, 2007. http://ged.univ-valenciennes.fr/nuxeo/site/esupversions/e2504d99-e61b-4455-8bb3-2c47771ac853.
Full text
Using body segment parameters close to reality is of the utmost importance for obtaining reliable kinetics in human motion analysis. In the majority of human movement studies, the human body is modeled as a set of rigid solids. This research aims at developing and testing two methods for estimating the masses of these solids, also known as segment masses. Both methods are based on the static equilibrium principle for several solids. The first method estimates limb masses from the displacements of the limb's centre of mass and of the centre of pressure, the projection onto the horizontal plane of the subject's whole-body centre of gravity. Since the ratio between these displacements equals the ratio of limb mass to total body mass, knowing the latter allows the former to be calculated. The second method estimates all segment masses simultaneously by solving a series of static equilibrium equations, under the same assumption that the centre of pressure is the projection of the whole-body centre of mass, and using estimates of segment centres of mass. The interest of the new methods lies in combining individual segment centre-of-mass estimates from a geometrical model with equipment routinely used in human motion analysis in order to estimate body segment masses. The limb mass estimation method predicts a posteriori centre-of-mass displacement better than other methods. Some potential causes of the second method's failure were investigated through a study of the uncertainty in centre-of-pressure location.
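The first method's mass ratio can be shown as a worked example; all numbers below are illustrative, not the thesis's data.

```python
# When only a limb moves in quasi-static conditions, the centre of pressure
# (projection of the whole-body centre of mass) shifts in proportion to the
# limb mass: m_limb / m_total = cop_shift / limb_com_shift.
def limb_mass(total_mass, limb_com_shift, cop_shift):
    """Limb mass from the static-equilibrium displacement ratio."""
    return total_mass * cop_shift / limb_com_shift

M = 70.0        # total body mass, kg (assumed)
d_limb = 0.40   # horizontal displacement of the limb's own CoM, m (assumed)
d_cop = 0.02    # measured centre-of-pressure displacement, m (assumed)
print(limb_mass(M, d_limb, d_cop))  # → 3.5 kg for this example
```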
Sourty, Raphael. "Apprentissage de représentation de graphes de connaissances et enrichissement de modèles de langue pré-entraînés par les graphes de connaissances : approches basées sur les modèles de distillation". Electronic Thesis or Diss., Toulouse 3, 2023. http://www.theses.fr/2023TOU30337.
Full text
Natural language processing (NLP) is a rapidly growing field focused on developing algorithms and systems to understand and manipulate natural language data. The ability to effectively process and analyze such data has become increasingly important in recent years, as the volume of textual data generated by individuals, organizations, and society as a whole continues to grow significantly. One of the main challenges in NLP is representing and processing knowledge about the world. Knowledge graphs are structures that encode information about entities and the relationships between them; they are a powerful tool for representing knowledge in a structured and formalized way, providing a holistic understanding of the underlying concepts and their relationships. The ability to learn knowledge graph representations has the potential to transform NLP and other domains that rely on large amounts of structured data. The work conducted in this thesis explores the concept of knowledge distillation and, more specifically, mutual learning for learning distinct and complementary space representations. Our first contribution is a new framework for learning entities and relations on multiple knowledge bases, called KD-MKB. The key objective of multi-graph representation learning is to empower the entity and relation models with different graph contexts that potentially bridge distinct semantic contexts. Our approach is based on the theoretical framework of knowledge distillation and mutual learning. It allows for efficient knowledge transfer between KBs while preserving the relational structure of each knowledge graph. We formalize entity and relation inference between KBs as a distillation loss over posterior probability distributions on aligned knowledge. Building on this, we propose and formalize a cooperative distillation framework in which a set of KB models are jointly learned, using hard labels from their own context and soft labels provided by their peers. Our second contribution is a method for incorporating rich entity information from knowledge bases into pre-trained language models (PLMs). We propose an original cooperative knowledge distillation framework that aligns the masked language modeling pre-training task of language models with the link prediction objective of KB embedding models. By leveraging the information encoded in knowledge bases, our approach opens a new direction for improving the ability of PLM-based slot-filling systems to handle entities.
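A hedged sketch of the distillation ingredient mentioned above: nudging a student's posterior over candidates toward a teacher's softened posterior via KL divergence. The scores, temperature and three-candidate setup are made up; KD-MKB defines these losses over posterior distributions on aligned knowledge-base triples.

```python
# Distillation loss between a teacher's soft labels and a student's posterior.
import math

def softmax(scores, temperature=1.0):
    """Turn raw scores into a probability distribution; higher T = softer."""
    exps = [math.exp(s / temperature) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def kl(p, q):
    """KL divergence D(p || q); zero iff the distributions coincide."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher_scores = [4.0, 1.0, 0.5]   # teacher KB model over 3 candidate entities
student_scores = [2.0, 1.5, 1.0]   # student KB model, to be nudged

# Soft labels from the teacher; the distillation loss the student minimizes:
loss = kl(softmax(teacher_scores, temperature=2.0), softmax(student_scores))
print(loss)  # smaller when the student ranks candidates like the teacher
```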
Talhi, Asma. "Proposition d’une modélisation unifiée du Cloud Manufacturing et d’une méthodologie d’implémentation, basées sur les ontologies d’inférence". Thesis, Paris, ENSAM, 2016. http://www.theses.fr/2016ENAM0017/document.
Full text
In this research, we introduce a methodology for building a Cloud Manufacturing (CM) architecture. Cloud Manufacturing is an emerging paradigm in which dynamically scalable and virtualized resources are provided to users as services over the Internet. Our architecture serves as a platform for matching users with providers of manufacturing resources, with the aim of enhancing collaboration within Product Lifecycle Management (PLM) by reducing costs and development time. Since providers may describe their services differently, we believe that semantic web technologies such as ontologies are robust tools for matching providers' descriptions with users' requests in order to find the suitable service. We therefore use an ontology to build the knowledge model of the Cloud Manufacturing domain. The ontology defines the steps of the product lifecycle as services and also takes into account Cloud Computing features (storage, computing capacity, etc.). The Cloud Manufacturing ontology contributes to intelligent and automated service discovery and is included in a platform for matching users and providers. The proposed methodology, ASCI-Onto, is inspired by the ASDI framework (analysis-specification-design-implementation), which has already been used in the supply chain, healthcare and manufacturing domains. The goal of the new methodology is to easily design a library of components for a Cloud Manufacturing system. Finally, an example application of this methodology with a simulation model based on the CloudSim software is presented; its goal is to help industrial decision-makers design Cloud Manufacturing systems.
Khalil, Georges. "Synthèse et modélisation thermodynamique de nouvelles architectures supramoléculaires colorées basées sur des motifs TiO4N2". Thesis, Strasbourg, 2015. http://www.theses.fr/2015STRAF065/document.
Full text
The subject of this thesis belongs to the field of metallosupramolecular chemistry. We have developed self-assembly chemistry to create multicomponent architectures formed around TiO4N2 units. From a synthetic point of view, the major goal of this work was to analyse the consequences of structural variations of one component on the resulting self-assembled architectures. These variations involved the 2,2'-bipyrimidine ligand, as this ligand was employed to create a circular helicate [Ti3(L)3(bpym)3], used as a reference compound throughout this thesis. The first factor considered was the introduction of groups of different sizes on the backbone of the 2,2'-bipyrimidine. The consequence of reducing the ring size of the nitrogenous bidentate ligand was then evaluated. Finally, the denticity of the nitrogenous ligand was changed. The second aim of this thesis was to model the enthalpy and entropy factors governing these self-assembly reactions driven by titanium(IV) centers. We were able to propose a general thermodynamic modeling methodology using the PACHA software, relying solely on knowledge of the crystalline structures of the starting and final products.
Hamouda, Ossama Mohamed Fawzi. "Modélisation et évaluation de la sûreté de fonctionnement d'applications véhiculaires basées sur des réseaux ad-hoc mobiles". Toulouse 3, 2010. http://thesesups.ups-tlse.fr/998/.
Full text
This thesis focuses on developing methods and models for evaluating quantitative measures that characterize the dependability of mobile services as perceived by their users. These models and measures are aimed at supporting designers during the selection and analysis of candidate architectures that are well suited to fulfilling the dependability requirements. We consider the case of vehicular applications using inter-vehicle communications based on ad-hoc networks, which may have access to services located on a fixed infrastructure. We propose an approach combining: 1) dependability models based on stochastic activity networks, to describe the system failure modes and the associated recovery mechanisms, and 2) simulation and analytical models for estimating connectivity characteristics, taking into account different mobility scenarios and environments. This approach is illustrated on three case studies, including a virtual black box based on cooperative data replication and backup, and an automated highway system (platooning application).
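As a toy counterpart to the connectivity models mentioned above, here is a Monte Carlo estimate of the probability that two uniformly placed vehicles on a road segment are within radio range. All parameters are assumed; the thesis's models account for realistic mobility scenarios, not uniform placement.

```python
# Monte Carlo connectivity estimate: P(|X - Y| <= r) for X, Y uniform on [0, L].
# The analytic value is 2(r/L) - (r/L)^2, which the simulation should approach.
import random

def p_connected(road_len=1000.0, radio_range=250.0, trials=100_000, seed=1):
    rng = random.Random(seed)
    hits = sum(
        abs(rng.uniform(0, road_len) - rng.uniform(0, road_len)) <= radio_range
        for _ in range(trials)
    )
    return hits / trials

# With r/L = 0.25 the analytic probability is 2*0.25 - 0.25**2 = 0.4375.
print(p_connected())
```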
Djelassi, Abir. "Modélisation et prédiction des franchissements de barrières basées sur l'utilité espérée et le renforcement de l'apprentissage : application à la conduite automobile". Valenciennes, 2007. http://ged.univ-valenciennes.fr/nuxeo/site/esupversions/0cb14aca-51f3-4cd3-8727-f89febd00519.
Testo completoRisk analysis in human-machine systems (HMS) has to take human errors into account in order to limit their occurrence or consequences. That is why HMS designers define many barriers in the HMS environment. However, these barriers may be removed by human operators, so barrier removal must be integrated into HMS design in order to improve it. The best way to ensure this integration is barrier removal modelling and prediction. This work proposes two linear models of barrier removal utility: a generic one and a specific one. They integrate the different criteria related to the human operator's activity; the benefits, costs and potential deficits associated with these criteria; the weights αi, βi and γi; the errors εαi, εβi and εγi; and the sensitivity threshold Δu. Modifying these last two model elements improves the barrier removal utility value, and hence the barrier removal prediction. This new barrier removal prediction method was applied to the car driving domain, with promising results
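As a rough numerical illustration of such a linear utility model (a sketch only: the criteria values, the signs given to benefits, costs and deficits, and the threshold handling below are assumptions, not the thesis's exact formulation):

```python
def barrier_removal_utility(criteria, weights, threshold=0.0):
    """Toy linear barrier-removal utility.

    criteria: list of (benefit, cost, deficit) tuples, one per criterion
    weights:  list of (alpha, beta, gamma) weight tuples, one per criterion
    Removal is predicted when the utility exceeds a sensitivity threshold
    (standing in for the role played by Delta-u in the text).
    """
    utility = sum(a * b - bt * c - g * d
                  for (b, c, d), (a, bt, g) in zip(criteria, weights))
    return utility, utility > threshold
```

For example, a single criterion with benefit 1.0, cost 0.2 and deficit 0.1 under unit weights yields a utility of 0.7, predicting removal at a zero threshold.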
Macor, Jose Luis. "Développement de techniques de prévision de pluie basées sur les propriétés multi-échelles des données radar et satellites". Phd thesis, Ecole des Ponts ParisTech, 2007. http://pastel.archives-ouvertes.fr/pastel-00003420.
Eng, Catherine. "Développement de méthodes de fouille de données basées sur les modèles de Markov cachés du second ordre pour l'identification d'hétérogénéités dans les génomes bactériens". Thesis, Nancy 1, 2010. http://www.theses.fr/2010NAN10041/document.
Testo completoSecond-order hidden Markov models (HMM2) are stochastic processes that are highly efficient for exploring bacterial genome sequences. Different types of HMM2 (M1M2, M2M2, M2M0) combined with combinatorial methods were developed in a new approach to discriminate genomic regions without a priori knowledge of their genetic content. This approach was applied to two bacterial models in order to validate its achievements: Streptomyces coelicolor and Streptococcus thermophilus. These bacterial species exhibit distinct genomic traits (base composition, global genome size) in relation to their ecological niche: soil for S. coelicolor and dairy products for S. thermophilus. In S. coelicolor, a first HMM2 architecture allowed the detection of short discrete DNA heterogeneities (5-16 nucleotides in size), mostly localized in intergenic regions. The application of the method to a biologically known gene set, the SigR regulon (involved in the oxidative stress response), demonstrated its efficiency in identifying bacterial promoters. S. coelicolor shows a complex regulatory network (up to 12% of its genes may be involved in gene regulation) with more than 60 sigma factors involved in the initiation of transcription. A classification method coupled with a searching algorithm (R'MES) was developed to automatically extract box1-spacer-box2 composite DNA motifs, a structure corresponding to the typical bacterial -35/-10 promoter boxes. Among the 814 DNA motifs described for the whole S. coelicolor genome, those of sigma factors (B, WhiG) could be retrieved from the crude data. We could show that this method can be generalized, applying it successfully in a preliminary attempt to the genome of Bacillus subtilis
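To illustrate the kind of second-order dependence such models exploit, here is a minimal sketch of estimating second-order transition probabilities from a DNA sequence and scoring a sequence under them (illustrative only; the thesis's HMM2 machinery, hidden states, and training algorithms are far richer):

```python
import math
from collections import defaultdict

def second_order_probs(seq, alphabet="ACGT", pseudo=1.0):
    """Estimate P(x_t | x_{t-2} x_{t-1}) with add-one smoothing."""
    counts = defaultdict(lambda: defaultdict(float))
    for i in range(2, len(seq)):
        counts[seq[i - 2:i]][seq[i]] += 1.0
    probs = {}
    for ctx in (a + b for a in alphabet for b in alphabet):
        total = sum(counts[ctx].values()) + pseudo * len(alphabet)
        probs[ctx] = {x: (counts[ctx][x] + pseudo) / total for x in alphabet}
    return probs

def log_likelihood(seq, probs):
    """Score a sequence under the model (the first two symbols are ignored)."""
    return sum(math.log(probs[seq[i - 2:i]][seq[i]])
               for i in range(2, len(seq)))
```

Comparing such log-likelihoods between a "heterogeneity" model and a background model over sliding windows is one simple way to flag atypical regions.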
Khalfaoui, Souhaiel. "Production automatique de modèles tridimensionnels par numérisation 3D". Phd thesis, Université de Bourgogne, 2012. http://tel.archives-ouvertes.fr/tel-00841916.
Mercenne, Alexis. "Réactions nucléaires dans le modèle en couches de Gamow et solutions de l’Hamiltonien d’appariemment basées sur le modèle rationnel de Gaudin". Caen, 2016. http://hal.in2p3.fr/tel-01469139.
Testo completoMoving towards the drip lines, or higher in excitation energy, the continuum coupling becomes gradually more important, changing the nature of weakly bound states. In this regime, atomic nuclei are open quantum systems which can be conveniently described using the Gamow shell model (GSM), which offers a fully symmetric treatment of bound, resonance and scattering states. The understanding of specific nuclear properties is often improved by considering exactly solvable models, motivated by a symmetry of the many-body system. In the first part, we have generalized the rational Gaudin pairing model to include the continuous part of the single-particle spectrum, and then derived a reliable algebraic solution which generalizes the exact Richardson solution for bound states. These generalized Richardson solutions have been applied to the description of binding energies and spectra in the long chain of carbon isotopes. In the second part, we have formulated a reaction theory rooted in the GSM. For that, the GSM is expressed in the basis of reaction channels and generalized to multi-nucleon projectiles. This reaction theory respects the antisymmetrization of the target and projectile wave functions, as well as of the wave function of the combined system. Applications of this theory have been presented for the reaction 14O(p,p')14O, where the combined system 15F is a proton emitter, and for 40Ca(d,d)40Ca
Belhocine, Latifa. "Nouvelles stratégies de remanufacturing intégrées et multicritères basées sur la performance des produits et les profils des clients". Electronic Thesis or Diss., Université de Lorraine, 2022. http://www.theses.fr/2022LORR0057.
Testo completoBuilding an exemplary green reputation has become essential for industries in all sectors. To this end, and with the aim of reducing production waste, resource consumption, and greenhouse gas emissions, manufacturers are turning to the adoption of the circular economy concept. Among the processes used in this context is remanufacturing, which extends the life cycle and use of products in three main steps: recovery and collection of used products, remanufacturing operations, and redistribution of remanufactured products. In this thesis, we are interested in an application of the remanufacturing process as a preventive measure. We consider products in use by customers, over a finite horizon, and we recover these products at predetermined times to refurbish them or improve their working condition, and then redistribute them for further use. The overall objective is to develop economically and environmentally efficient remanufacturing strategies, integrating the different phases of remanufacturing and considering the characteristics of the product and the conditions of its use. First, we consider two major decisions in the remanufacturing activity. The first is related to the stock capacity and consists in selecting the products to be recovered and their number. Since each product is characterized by a grade, the second decision determines the level of remanufacturing and the grade to be reached for each recovered product. Second, by extension of the first problem, we integrate the characteristics of the products and the profiles of the customers. We focus on the structure of the product, composed of several functionalities and characterized by a performance that determines its grade and depends on the quality of execution of the functionalities. We also consider the profile of the product user, based on the frequency of usage, which impacts the realization of the product functionalities and its grade. Thus, a high frequency of usage gives a lower execution quality of the features. We develop a mathematical model linking the performance of a product to the quality of execution of its features. At this stage we propose a multi-objective optimization of the recovery stage, independently of the other remanufacturing decisions. Finally, we consider that a product is made of several components, each of which intervenes in a specific way in the execution of the functionalities offered by the product. Moreover, each component is characterized by a given performance. In this case, the customer profile, i.e., the frequency of use, impacts the performance of each component and thus the overall performance of the product. We develop a mathematical model that calculates the performance of a product from the performance of its components and their relationship with its functionalities. In addition to decisions on product selection and remanufacturing levels, we integrate the optimization of the transportation and recovery stage of used products. For each problem studied, a multi-objective mathematical model is developed and solved using metaheuristics and/or heuristics. In addition, multi-criteria decision analysis is performed to help the decision-maker determine the best remanufacturing alternatives to implement. Several numerical experiments illustrate the applicability of the different approaches proposed
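As a toy version of the first decision described above (which used products to recover under a stock-capacity limit), one might write a small 0/1 knapsack. The thesis itself develops multi-objective models solved with metaheuristics, so this single-objective sketch with made-up product data is only an illustration:

```python
def select_products(products, capacity):
    """Toy 0/1 knapsack: pick used products to recover under a stock capacity,
    maximizing total net value (expected remanufactured value minus cost).
    products: list of (name, net_value, stock_units) tuples."""
    best = {0: (0.0, [])}            # units used -> (best value, chosen names)
    for name, value, units in products:
        for used in sorted(best, reverse=True):   # descend to avoid reuse
            if used + units <= capacity:
                cand = best[used][0] + value
                if cand > best.get(used + units, (float("-inf"),))[0]:
                    best[used + units] = (cand, best[used][1] + [name])
    return max(best.values())
```

For instance, with products A (value 5, 2 units), B (4, 3) and C (3, 2) and a capacity of 4 units, the sketch recovers A and C for a total value of 8.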
Semensato, Bárbara Ilze. "Les capacités dynamiques pour l'innovation et les modèles d'internationalisation des entreprises basées sur les nouvelles technologies : une étude de cas multiple avec les PME Brésiliennes". Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAG009.
Testo completoThe globalization of markets and growing international competitiveness over the last two decades have brought competing firms into the market, among which are small firms. Notably recognized for their social and economic importance, small enterprises in the industry, trade and services sectors are, in numerical terms, the vast majority of businesses in Brazil. Given the importance of this object of research, the general objective of this study is to explore the relationship between innovation orientation and the internationalization patterns of small and medium enterprises (SMEs). To achieve this general objective, specific objectives are defined: the study of the internationalization processes and patterns of small and medium-sized technology-based firms, and the study of the dynamic capabilities for innovation inherent to the distinct internationalization processes and patterns of SMEs. The dynamic capabilities for innovation drive technological innovation (innovation in products, processes and services) and also foster non-technological innovation, namely marketing and organizational innovation. In addition, dynamic capabilities have a positive impact on the competitiveness of small businesses in domestic and international markets. The theoretical basis of this research lies in internationalization theories, from the behavioral and economic schools, and in innovation theories concerning dynamic capabilities for innovation. In order to better understand the object of research, each topic includes a section on SMEs. The sectoral diversity of the participating firms contributed to the breadth of results on the dynamic capabilities for innovation of Brazilian SMEs, as well as to the identification of their internationalization patterns. 
From a qualitative study, the analyses show that Brazilian SMEs seek to differentiate themselves through innovation in the international markets where they operate. Regarding their internationalization patterns, Brazilian SMEs differ in some criteria from those described in the literature. The analysis of dynamic capabilities for innovation shows that small Brazilian companies have a high potential for developing innovation, even in the presence of external barriers. Concerning internationalization, the SMEs in the study follow specific international patterns, thus requiring criteria adapted with respect to the literature. As academic contributions, the research presents the analysis of dynamic capabilities for innovation related to the internationalization patterns of Brazilian SMEs, presenting variables emerging from the research themes. Finally, as managerial contributions, the analysis of the cases makes it possible to verify how firms seek to position themselves competitively in international markets
Ramdani, Linda. "Stratégies de vaccination basées sur l’exposition de peptides ou de protéines à la surface de particules virales : modèles de l’adénovirus 5 et du bactériophage T5". Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS597.
Testo completoViral vectors capable of inducing the expression of an antigen of interest, and VLPs composed of proteins that self-assemble into non-infectious but immunogenic virus-like particles, have shown their potential to induce immune responses and protect against different pathogens. During my work, I investigated two vaccination approaches based on the exposure of antigens, either epitopes or proteins, on the surface of viral particles. In the first part of my thesis, I evaluated the ability of epitope-displaying adenoviruses to induce cellular immune responses in mice. Different adenoviral vectors with capsids modified by insertion of a T epitope derived from the ovalbumin model antigen were produced, and I showed that these vectors are capable of inducing CD8+ cellular responses. In addition, I observed that this epitope-display strategy was more effective in inducing cellular responses when the epitope was inserted into the hexon, regardless of the host's status towards Ad. These observations led me, in the second part of my thesis, to evaluate the ability of epitope display to induce cellular responses against a therapeutic target, the human papillomavirus. I constructed and characterized different vectors displaying T epitopes derived from the E6 and E7 proteins of HPV16 or HPV18, then analyzed their ability to induce anti-HPV cellular responses in mice. Among the different vectors produced, one Ad vector displaying a T epitope derived from the HPV16 E7 protein induced CD8+ T-cell responses against the E7 protein in mice. Finally, in the last part of my thesis, I evaluated the ability of T5 bacteriophage capsids exposing a protein fused with the ovalbumin antigen on their surface to induce humoral and cellular immune responses against this antigen. I showed that these ovalbumin-exposing capsids generate strong humoral and cellular responses. 
The results obtained made it possible to specify the molecular bases of the effectiveness of vaccination by exposing epitopes (epitope display) on the surface of adenoviral vectors or proteins (protein display) on the surface of T5 phage capsids
Aubert, Brice. "Détection des courts-circuits inter-spires dans les Générateurs Synchrones à Aimants Permanents : Méthodes basées modèles et filtre de Kalman étendu". Phd thesis, Toulouse, INPT, 2014. http://oatao.univ-toulouse.fr/11902/1/Aubert.pdf.
Hoareau, Violette. "Etudes des mécanismes de maintien en mémoire de travail chez les personnes jeunes et âgées : approches computationnelle et comportementale basées sur les modèles TBRS* et SOB-CS". Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAS050/document.
Testo completoWorking memory is a cognitive system essential to our daily life. It allows us to temporarily store information in order to perform a cognitive task. One of the main features of this type of memory is that it is limited in capacity. The reasons for this limitation are widely debated in the literature. Some models consider that a main cause of forgetting in working memory is a passive temporal decay in the activation of memory representations, whereas other models assume that interference between pieces of information is sufficient to explain its limited capacity. Two recently proposed computational models (TBRS* and SOB-CS) perfectly illustrate this debate. Indeed, they describe differently what happens during a working memory task involving both storage and information processing. In addition to disagreeing on the causes of forgetting, they propose separate maintenance processes: refreshing relevant information according to TBRS*, versus removing irrelevant information according to SOB-CS. This thesis was organized around two main objectives. First, we focused on the study of these two models and their maintenance mechanisms. To do so, we performed behavioral experiments using the complex span task to test specific hypotheses of these models. Second, using computational models, we investigated the causes of the working memory deficits observed in the elderly, with the long-term aim of creating or improving remediation tools. Regarding the first objective, results showed a discrepancy between human behavior and simulations. Indeed, TBRS* and SOB-CS did not reproduce a positive effect of the number of distractors, contrary to what has been observed experimentally. We propose that this positive effect, not predicted by the models, is related to long-term storage, which neither model takes into account. 
Regarding the second objective, the behavioral results suggest that older people have difficulty mainly in refreshing memory traces and in stabilizing information in the long term during a complex task. Overall, the results of this thesis suggest deepening research on the links between maintenance mechanisms and long-term storage, for example by proposing a new computational model accounting for our results. Beyond advances in understanding the functioning of working memory, this thesis also shows that computational models are particularly relevant both for studying a theory and for comparing different populations
Hamouda, Ossama. "Dependability modelling and evaluation of vehicular applications based on mobile ad-hoc networks. Modélisation et évaluation de la sûreté de fonctionnement d'applications véhiculaires basées sur des réseaux ad-hoc mobiles". Phd thesis, Université Paul Sabatier - Toulouse III, 2010. http://tel.archives-ouvertes.fr/tel-00546260.
Botta, Laura. "Impact de la correction du risque supplémentaire de décès non lié au cancer sur l'estimation des indicateurs de survie nette et de guérison". Electronic Thesis or Diss., Bourgogne Franche-Comté, 2024. http://www.theses.fr/2024UBFCI023.
Testo completoIn the relative survival (RS) framework, cancer survivors are typically assumed to have, in addition to the specific risk due to cancer, the same mortality risk as individuals from the general population with the same demographic characteristics. This assumption is used in the conventional mixture cure model to estimate the proportion of patients who will not die of their cancer, i.e. the statistical cure fraction (CF). However, this assumption does not always hold. My hypothesis is that survivors, even if their cancer has been permanently cured, may have an additional non-cancer mortality risk compared to the general population. This could be due to long-term side effects of treatments, second cancers, or exposure to lifestyle or environmental risk factors. A previous analysis of United States population data showed relative risks of non-cancer death, compared to the general population, higher than 1. Ignoring the extra non-cancer mortality risk to which cancer survivors are exposed can lead to biased estimates of the CF and other relevant survival indicators, e.g. net survival. Research on methods for accurately estimating the background mortality of cancer survivors is increasing. This research aims at testing the reliability and robustness of the new mixture cure model that accounts for this risk in different settings, using a simulation study. Building on these results, the method will be applied to real data, estimating the extra non-cancer mortality risk for cancer patients and the corrected CF, focusing on some adult cancers and also on children and on adolescents and young adults (AYAs). AYAs are defined as those diagnosed at ages 15-39, in line with the international definition proposed by the European Network for Cancer in Children and Adolescents (ENCCA). Previous studies have shown that survivors of childhood and adolescent cancer have increased mortality risks compared to the general population. 
Deaths in these survivors are mainly due to the original cancer, followed by second malignant neoplasms and side effects of treatment. The aim of this part of the project is to apply the new mixture cure model to small populations such as AYAs and to a population with a reduced baseline risk of death, i.e. children. To our knowledge, the literature on the use of mixture cure models for AYAs and childhood cancer patients is scarce. For the application of the model to real data, the EUROCARE 6 database was used to illustrate the extra risk of non-cancer death and its impact on net survival and CF estimates. The EUROpean CAncer REgistry based study on survival and care of cancer patients (EUROCARE) is a collaborative research initiative focused on population-based cancer survival in Europe. The EUROCARE research team is based at the Istituto Nazionale Tumori di Milano (INT) and the Istituto Superiore di Sanità in Rome, and I am a member of the EUROCARE 6 Researchers Committee. These results will hopefully inform the discussion around the definition of the "right to be forgotten" for cancer patients and address the management of late side effects, which is rarely considered in epidemiological studies. The work will be carried out in collaboration with Fondazione IRCCS Istituto Tumori di Milano
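The core idea can be sketched numerically: in the RS framework, overall survival factorizes into expected (background) survival times relative survival, where the mixture cure model writes relative survival as a cured fraction plus the net survival of the uncured; the extension adds a relative risk α on background mortality. The parameter values, the Weibull form for the uncured, and the constant background hazard below are all illustrative assumptions, not the model fitted in the thesis:

```python
import math

def mixture_cure_relative_survival(t, pi, lam, k, alpha=1.0, pop_hazard=0.01):
    """Overall survival under a mixture cure model in the relative-survival
    framework. pi: cure fraction; Weibull(lam, k) net survival of the uncured;
    alpha: relative risk of non-cancer death vs. the general population
    (alpha = 1 recovers the conventional model)."""
    s_pop = math.exp(-pop_hazard * t)         # background (expected) survival
    s_uncured = math.exp(-((lam * t) ** k))   # net survival of the uncured
    relative_surv = pi + (1.0 - pi) * s_uncured
    return (s_pop ** alpha) * relative_surv
```

With α > 1, observed survival lies below the conventional model's prediction, which is precisely why ignoring the extra non-cancer mortality biases the estimated cure fraction.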
Berriri, Asma. "Model based testing techniques for software defined networks". Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLL017/document.
Testo completoHaving gained momentum from its concept of decoupling traffic control from the underlying traffic transmission, Software Defined Networking (SDN) is a new networking paradigm that is progressing rapidly, addressing some of the long-standing challenges in computer networks. Because they are valuable and crucial for networking, SDN architectures are expected to be widely deployed and to have a great impact in the near future. The emergence of SDN architectures raises a set of fundamental questions about how to guarantee their correctness. Although their goal is to simplify the management of networks, the challenge is that the SDN software architecture itself is a complex, multi-component system which is failure-prone. Therefore, assuring the correct functional behaviour of such architectures and related SDN components is a task of paramount importance, yet decidedly challenging. How to achieve this task, however, has mostly been investigated using formal verification, with little attention paid to model based testing methods. Furthermore, the relevance of models and the efficiency of model based testing have been demonstrated for software engineering and particularly for network protocols. Thus, the creation of efficient and reusable model based testing approaches becomes an important stage before the deployment of virtual networks and related components. The problem addressed in this thesis relates to the use of formal models for guaranteeing the correct functional behaviour of SDN architectures and their corresponding components. Formal and effective test generation approaches are the primary focus of the thesis. In addition, automation of the test process is targeted, as it can considerably cut the effort and cost of testing. The main contributions of the thesis relate to model based techniques for deriving high quality test suites. 
Firstly, a method relying on graph enumeration is proposed for the functional testing of SDN architectures. Secondly, a method based on logic circuits is developed for testing the forwarding functionality of an SDN switch; this method is then extended to test an application of an SDN controller. Additionally, a technique based on an extended finite state machine is introduced for testing the switch-to-controller communication. As the quality of a test suite is usually measured by its fault coverage, the proposed testing methods introduce different fault models and seek test suites with guaranteed fault coverage, which can be stated as sufficient conditions for test suite completeness (exhaustiveness)
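One classical ingredient of such FSM-based methods can be sketched in a few lines: a transition cover, i.e. a set of input sequences that exercises every transition of a deterministic FSM at least once. This is only a building block (completeness guarantees à la W-method additionally require state-identification sequences), and the toy FSM in the test below is invented:

```python
from collections import deque

def transition_cover(fsm, start):
    """Input sequences that together fire every transition at least once.
    fsm: {(state, input): (next_state, output)} for a deterministic FSM."""
    # BFS: shortest input sequence reaching each state from the start state
    reach = {start: []}
    queue = deque([start])
    while queue:
        s = queue.popleft()
        for (st, inp), (nxt, _) in fsm.items():
            if st == s and nxt not in reach:
                reach[nxt] = reach[s] + [inp]
                queue.append(nxt)
    # one test per transition: reach its source state, then fire it
    return [reach[st] + [inp] for (st, inp) in fsm if st in reach]
```

Each returned sequence is then executed against the implementation, comparing observed outputs with the model's expected outputs.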
Mahmoudysepehr, Mehdi. "Modélisation du comportement du tunnelier et impact sur son environnement". Thesis, Centrale Lille Institut, 2020. http://www.theses.fr/2020CLIL0028.
Testo completoThis PhD research work consists in understanding the behavior of the tunnel boring machine (TBM) according to the environment encountered, in order to propose safe, durable and high-quality solutions for digging the tunnel. The main objective is to better understand how the TBM reacts to the different types of ground and how it acts on the various elements of the tunnel structure (the segmental lining, or voussoirs). This will make it possible to propose an intelligent and optimal dimensioning of the segments, together with adapted steering instructions
Araldi, Alessandro. "Distribution des commerces et forme urbaine : Modèles orientés-rue pour la Côte d'Azur". Thesis, Université Côte d'Azur (ComUE), 2019. http://www.theses.fr/2019AZUR2018.
Testo completoThis doctoral dissertation analyses and discusses the relationship between the spatial distribution of retail and urban form. More precisely, we focus on the spatial statistical relationships between the location of small and average-sized stores and the physical properties of urban form in the metropolitan area of the French Riviera. The underlying hypothesis of this research is that the physical characteristics of the built-up landscape may influence how humans perceive and use urban space and, ultimately, how stores are distributed and organised within cities. In the last two decades, scholars have increasingly investigated this relationship. Nonetheless, retail and urban form characteristics are often reduced to the simple notions of store density and street-network configuration, respectively. Several aspects, such as the morpho-functional typology of retail agglomerations, the geometrical characteristics of the streetscape and the contextual influence of the urban fabric, are traditionally excluded from these analyses. These aspects are all the more important when studying highly heterogeneous metropolitan areas like the French Riviera, a combination of differently sized cities and paradigmatic morphological regions: medieval centres, modern and contemporary planned areas, and suburban sprawl. To overcome these limitations, computer-aided, theory-based protocols are carefully selected and developed in this dissertation, allowing for the extraction of quantitative measures of retail and urban form. In particular, starting from traditional theories of retail geography and urban morphology, two location-based, network-constrained procedures are proposed and implemented, providing a fine-grained description of the urban and retail fabrics at the street level. These methodologies are based on innovative combinations of geoprocessing and AI-based protocols (Bayesian networks). 
The statistical relationship between retail and urban morphological descriptors is investigated through several statistical regression models. The decomposition of the study area into morphological subregions, at both the meso- and macro-scale, combined with penalised regression procedures, enables the identification of specific combinations of urban morphological characteristics and retail patterns. In the case of the French Riviera, the outcomes of these models confirm the statistical significance of the relationship between street-network configurational properties and retail distribution. Nevertheless, specific streetscape morphometric variables are also shown to be a relevant aspect of urban form when investigating retail distribution. Finally, the morphological context, at both the meso- and macro-scale, is shown to be a key factor in explaining the distribution of retail within a large urban area
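A minimal stand-in for such a penalised regression is ridge regression in closed form, with morphological descriptors as predictors and a retail indicator as response (a sketch only; the dissertation's actual penalised procedures and variables are more elaborate):

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Penalised least squares: solve (X'X + lam*I) b = X'y.
    Coefficients shrink towards zero as the penalty lam grows."""
    A = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ y)
```

The penalty stabilises the fit when descriptors are many and correlated, which is typical of street-level morphometric variables.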
Amroun, Hamdi. "Modèles statistiques avancés pour la reconnaissance de l’activité physique dans un environnement non contrôlé en utilisant un réseau d’objets connectés". Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS406/document.
Testo completoWith the arrival of connected objects, the recognition of physical activity is experiencing a new era. New considerations need to be taken into account in order to achieve a better treatment process. In this thesis, we explored the treatment process for recognizing physical activity in an uncontrolled environment. The recognized physical activities, with only one inertial unit (accelerometer, gyroscope and magnetometer), are called elementary. Other types of context-dependent activities are called "context-based". We extracted the DCT as the main descriptor for the recognition of elementary activities. In order to recognize the physical activities based on the context, we defined three levels of granularity: a first level depending on embedded connected objects (smartphone, smartwatch and samrt TV . A second level concerns the study of participants' behaviors interacting with the smart TV screen. The third level concerns the study of participants' attention to TV. We took into consideration the imperfection aspect of the data by merging the multi sensor data with the Dempster-Shafer model. As such, we have proposed different approaches for calculating and approximating mass functions. In order to avoid calculating and selecting the different descriptors, we proposed an approach based on the use of deep learning algorithms (DNN). We proposed two models: a first model consisting of recognizing the elementary activities by selecting the DCT as the main descriptor (DNN-DCT). The second model is to learn raw data from context-based activities (CNN-raw). The disadvantage of the DNN-DCT model is that it is fast but less accurate, while the CNN-raw model is more accurate but very slow. We have proposed an empirical study to compare different methods that can accelerate learning while maintaining a high level of accuracy. We thus explored the method of optimization by particle swarm (PSO). 
The results are very satisfactory (97%) compared to deep neural networks trained with stochastic gradient descent and Nesterov accelerated gradient optimization. Our results suggest using well-chosen descriptors when context matters little, taking the imperfection of sensor data into account whenever such data must be used, and favoring faster models.
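The DCT descriptor mentioned above can be sketched in a few lines. This is a minimal illustrative implementation of a type-II DCT over a sensor window, not the thesis's code; the window length and the number of coefficients kept are arbitrary assumptions.

```python
import numpy as np

def dct_features(window, n_coeffs=10):
    """Type-II DCT of a 1-D sensor window, truncated to the first
    n_coeffs coefficients, used as a compact activity descriptor."""
    x = np.asarray(window, dtype=float)
    N = x.size
    n = np.arange(N)
    k = np.arange(n_coeffs)
    # DCT-II: X_k = sum_n x_n * cos(pi/N * (n + 1/2) * k)
    basis = np.cos(np.pi / N * np.outer(k, n + 0.5))
    return basis @ x

# hypothetical accelerometer-magnitude window of 64 samples:
# a constant signal concentrates all energy in the DC coefficient
window = np.ones(64)
feats = dct_features(window, n_coeffs=4)
```

Truncating to the leading coefficients keeps the low-frequency shape of the window while discarding high-frequency noise, which is what makes the DCT attractive as a compact descriptor.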
Braun, Mathias. "Reduced Order Modelling and Uncertainty Propagation Applied to Water Distribution Networks". Thesis, Bordeaux, 2019. http://www.theses.fr/2019BORD0050/document.
Testo completoWater distribution systems are large, spatially distributed infrastructures that ensure the distribution of potable water of sufficient quantity and quality. Mathematical models of these systems are characterized by a large number of state variables and parameters. Two major challenges are the time constraints on the solution and the uncertain character of the model parameters. The main objectives of this thesis are thus the investigation of projection-based reduced order modelling techniques for the time-efficient solution of the hydraulic system, as well as the spectral propagation of parameter uncertainties for improved uncertainty quantification. The thesis gives an overview of the mathematical methods being used. This is followed by the definition and discussion of the hydraulic network model, for which a new method for deriving the sensitivities, based on the adjoint method, is presented. The specific objectives for the development of reduced order models are the application of projection-based methods, the development of more efficient adaptive sampling strategies, and the use of hyper-reduction methods for the fast evaluation of non-linear residual terms. For the propagation of uncertainties, spectral methods are introduced to the hydraulic model and an intrusive hydraulic model is formulated. With the objective of a more efficient analysis of the parameter uncertainties, the spectral propagation is then evaluated on the basis of the reduced model. The results show that projection-based reduced order models yield a considerable benefit with respect to computational effort. While the use of adaptive sampling resulted in a more efficient use of pre-calculated system states, the hyper-reduction methods did not reduce the computational burden and have to be explored further.
The propagation of the parameter uncertainties by means of the spectral methods is shown to be comparable in accuracy to Monte Carlo simulations, while significantly reducing the computational effort.
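A projection-based reduced order model of the kind investigated above can be sketched as a POD basis obtained by SVD of a snapshot matrix, followed by a Galerkin projection of the full-order system. The system size, snapshot count, basis dimension, and the random stand-in operator below are all illustrative assumptions, not the thesis's hydraulic model.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_snap, k = 200, 30, 5

# snapshot matrix: each column a precomputed full-order system state
S = rng.standard_normal((n, n_snap))
U, _, _ = np.linalg.svd(S, full_matrices=False)
V = U[:, :k]                      # POD basis: leading left singular vectors

# well-conditioned stand-in for the linearized full-order operator
A = rng.standard_normal((n, n)) + n * np.eye(n)
b = rng.standard_normal(n)

# Galerkin projection: solve the k x k reduced system, then lift back
A_r = V.T @ A @ V
b_r = V.T @ b
x_rom = V @ np.linalg.solve(A_r, b_r)
```

By construction, the residual of the lifted solution is orthogonal to the reduced basis (`V.T @ (b - A @ x_rom)` vanishes), which is the defining property of a Galerkin reduced order model and the source of its computational benefit: the online solve is k-dimensional instead of n-dimensional.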
Delotterie, David. "Translational potential of the touchscreen-based methodology to assess cognitive abilities in mice". Thesis, Strasbourg, 2014. http://www.theses.fr/2014STRAJ048/document.
Testo completoThis thesis work aimed to specify the potential of an innovative methodology recently adapted to mice from neuropsychological tasks used in humans. After the optimization of 3 assays (PAL, VMCL, PVD) taxing various cognitive functions in animals, different behavioral studies gradually revealed: (1) the putative existence of proactive interference across consecutive learning phases in touchscreen tasks; (2) no acquisition deficit in Tg2576 mice (a transgenic model of Alzheimer's Disease) in these paradigms, whatever the amyloid load considered; (3) the specific involvement of the dorsal striatum during the acquisition of the VMCL and PAL tasks, and the key role of the hippocampus during the recall of the latter task. As exemplified by the PAL task, our results suggest that despite considerable efforts to ensure the translational character of touchscreen cognitive tasks, certain adaptations inherent to each species deeply influence the nature of the underlying neurobiological substrates.