Dissertations / Theses on the topic 'Markov, Processus de – Méthodes graphiques'
Consult the top 50 dissertations / theses for your research on the topic 'Markov, Processus de – Méthodes graphiques.'
Tabourier, Lionel. "Méthode de comparaison des topologies de graphes complexes : applications aux réseaux sociaux." Paris 6, 2010. http://www.theses.fr/2010PA066335.
Radoszycki, Julia. "Résolution de processus décisionnels de Markov à espace d'état et d'action factorisés - Application en agroécologie." Thesis, Toulouse, INSA, 2015. http://www.theses.fr/2015ISAT0022/document.
This PhD thesis focuses on the resolution of problems of sequential decision making under uncertainty, modelled as Markov decision processes (MDP) whose state and action spaces are both of high dimension. Solving these problems with a good compromise between quality of approximation and scaling is still a challenge. Algorithms for solving this type of problem are rare when the dimension of both spaces exceeds 30, and impose certain limits on the nature of the problems that can be represented. We proposed a new framework, called F3MDP, as well as associated approximate resolution algorithms. An F3MDP is a Markov decision process with factored state and action spaces (FA-FMDP) whose solution policies are constrained to be in a certain factored form, and can be stochastic. The algorithms we propose belong to the family of approximate policy iteration algorithms and make use of continuous optimisation techniques and inference methods for graphical models. These policy iteration algorithms have been validated in a large number of numerical experiments. For small F3MDPs, for which the optimal global policy is available, they provide policy solutions that are close to the optimal global policy. For larger problems from the graph-based Markov decision processes (GMDP) subclass, they are competitive with state-of-the-art algorithms in terms of quality. We also show that our algorithms can deal with F3MDPs of very large size outside the GMDP subclass, on toy problems inspired by real problems in agronomy and ecology. The state and action spaces are then both of dimension 100, and of size 2^100. In this case, we compare the quality of the returned policies with that of expert policies.
In the second part of the thesis, we applied the framework and the proposed algorithms to determine ecosystem service management strategies in an agricultural landscape. Weed species, i.e. wild plants of agricultural environments, have antagonistic functions, being at the same time in competition with the crop for resources and keystone species in the trophic networks of agroecosystems. We seek to explore which organisations of the landscape (here composed of oilseed rape, wheat and pasture) in space and time allow the simultaneous provision of production services (production of cereals, fodder and honey), regulation services (regulation of weed populations and wild pollinators) and cultural services (conservation of weed species and wild pollinators). We developed a model of weed and pollinator dynamics and reward functions modelling different objectives (production, conservation of biodiversity, or trade-offs between services). The state space of this F3MDP is of size 3^(2×100) and the action space of size 3^100, which means this F3MDP is of substantial size. By solving this F3MDP, we identified various landscape organisations that provide different sets of ecosystem services, differing in the magnitude of each of the three classes of ecosystem services.
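The factored policies described in this abstract operate on state spaces far too large to enumerate, but the underlying policy iteration loop is easy to show on a deliberately tiny MDP. A minimal sketch with made-up transition probabilities and rewards (none of the values come from the thesis):

```python
GAMMA = 0.9
STATES = (0, 1)
ACTIONS = (0, 1)
# P[s][a]: list of (next_state, probability); R[s][a]: immediate reward.
P = {0: {0: [(0, 0.9), (1, 0.1)], 1: [(0, 0.2), (1, 0.8)]},
     1: {0: [(0, 0.5), (1, 0.5)], 1: [(1, 1.0)]}}
R = {0: {0: 0.0, 1: 1.0}, 1: {0: 0.0, 1: 2.0}}

def q_value(s, a, V):
    return R[s][a] + GAMMA * sum(p * V[s2] for s2, p in P[s][a])

def evaluate(policy, tol=1e-10):
    """Iterative (Gauss-Seidel) policy evaluation."""
    V = {s: 0.0 for s in STATES}
    while True:
        delta = 0.0
        for s in STATES:
            v = q_value(s, policy[s], V)
            delta = max(delta, abs(v - V[s]))
            V[s] = v
        if delta < tol:
            return V

def policy_iteration():
    policy = {s: 0 for s in STATES}
    while True:
        V = evaluate(policy)
        improved = {s: max(ACTIONS, key=lambda a: q_value(s, a, V))
                    for s in STATES}
        if improved == policy:      # greedy step changed nothing: optimal
            return policy, V
        policy = improved

policy, V = policy_iteration()
print(policy, round(V[1], 2))
```

On this toy model the loop converges to the policy that always selects action 1; factored approaches replace the explicit tables above with compact structured representations.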
Viricel, Clement. "Contributions au développement d'outils computationnels de design de protéine : méthodes et algorithmes de comptage avec garantie." Thesis, Toulouse, INSA, 2017. http://www.theses.fr/2017ISAT0019/document.
This thesis is focused on two intrinsically related subjects: the computation of the normalizing constant of a Markov random field and the estimation of the binding affinity of protein-protein interactions. First, to tackle this #P-complete counting problem, we developed Z*, based on the pruning of negligible potential quantities. It has been shown to be more efficient than various state-of-the-art methods on instances derived from protein-protein interaction models. Then, we developed #HBFS, an anytime guaranteed counting algorithm which proved to be even better than its predecessor. Finally, we developed BTDZ, an exact algorithm based on tree decomposition. BTDZ has already proven its efficiency on instances from coiled-coil protein interactions. These algorithms all rely on methods stemming from graphical models: local consistencies, variable elimination and tree decomposition. With the help of existing optimization algorithms, Z* and Rosetta energy functions, we developed a package that estimates the binding affinity of a set of mutants in a protein-protein interaction. We statistically analyzed our estimation on a database of binding affinities and compared it with state-of-the-art methods. It appears that our software is qualitatively better than these methods.
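The normalising constant mentioned in this abstract is what makes counting #P-complete in general; on a chain-structured field it collapses to a cheap message pass. A minimal sketch contrasting brute-force enumeration with variable elimination, using arbitrary toy potentials (this illustrates the general principle, not the Z*, #HBFS or BTDZ algorithms themselves):

```python
import itertools
import math

n = 4  # binary variables x0..x3 arranged in a chain

def pairwise(a, b):
    """Edge potential: favour agreement between neighbours."""
    return 2.0 if a == b else 0.5

# Z by exhaustive enumeration over all 2^n configurations
Z_brute = 0.0
for x in itertools.product([0, 1], repeat=n):
    Z_brute += math.prod(pairwise(x[i], x[i + 1]) for i in range(n - 1))

# Z by variable elimination along the chain: eliminate x0, then x1, ...
msg = {0: 1.0, 1: 1.0}
for _ in range(n - 1):
    msg = {b: sum(pairwise(a, b) * msg[a] for a in (0, 1)) for b in (0, 1)}
Z_elim = sum(msg.values())

print(Z_brute, Z_elim)   # both 31.25 on this toy chain
```

Enumeration costs O(2^n) while elimination costs O(n) here; on graphs with cycles the elimination cost grows with treewidth, which is where tree decomposition enters.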
Roy, Valérie. "Autograph : un outil de visualisation pour les calculs de processus." Nice, 1990. http://www.theses.fr/1990NICE4381.
Mourad, Mahmoud. "Comparaison entre méthodes vectorielles autorégressives et méthodes markoviennes dans l'analyse de séries chronologiques multidimensionnelles." Paris 5, 1990. http://www.theses.fr/1990PA05H033.
This thesis aims at putting into practice autoregressive models and the application of Markov chains to the residuals of additive seasonal decomposition (MDS) models. Our analysis, applied to the French industrial production sector, enabled us to detect first the optimal autoregressive orders suggested by the automatic criteria FPE, AIC, BIC, CAT and HQ, then the dynamic "carrying" parameters on which the specific memory (univariate case) and the crossed memory (vector case) are based. We used the MDS model to estimate the trend and the seasonal coefficients, then applied a first-order Markov model to analyse the nature of the transition dependencies between the different states of the residuals. The comparison of forecast efficiency between the different models showed the importance of the short-term vector autoregressive model, the long-term MDS model, and the part the Markov model can play as a forecast "corrector" for the MDS model.
Boudaren, Mohamed El Yazid. "Modèles graphiques évidentiels." Phd thesis, Institut National des Télécommunications, 2014. http://tel.archives-ouvertes.fr/tel-01004504.
Dascalu, Daniel. "Méthodes probabilistes pour la modélisation de la maintenance préventive." Compiègne, 2002. http://www.theses.fr/2002COMP1386.
Full textIdoumghar, Lhassane. "Méthodes algorithmiques pour l'allocation de fréquences." Nancy 1, 2002. http://www.theses.fr/2002NAN10235.
Generating a frequency plan is a difficult task in the radio network planning process, and may yield frequency plans with poor efficiency under real propagation conditions. In fact, the generation process uses a model of the existing constraints between transmitters of the radio network under study, and a combinatorial optimization that tries to satisfy those constraints. This combinatorial optimization provides an optimal solution from a mathematical viewpoint, but depending on the refinement of the constraint model, the generated solution can be unusable under real propagation conditions. In this thesis, we introduce new algorithmic approaches to solve the Frequency Assignment Problem in the field of radio broadcasting. Experiments performed on real radio networks show that the results obtained by these approaches are better than the best operational solutions existing in this domain.
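The combinatorial core of frequency assignment resembles graph colouring: interfering transmitters must not share a frequency. A greedy sketch on an invented four-transmitter interference graph (illustrative only; the thesis develops far more sophisticated algorithms):

```python
def greedy_assign(adjacency):
    """adjacency: dict transmitter -> set of interfering transmitters."""
    freq = {}
    # Assign the most-constrained transmitters first
    for t in sorted(adjacency, key=lambda u: -len(adjacency[u])):
        used = {freq[u] for u in adjacency[t] if u in freq}
        f = 0
        while f in used:            # smallest frequency unused by neighbours
            f += 1
        freq[t] = f
    return freq

# Toy interference graph: A, B, C form a clique, D interferes only with C
adj = {'A': {'B', 'C'}, 'B': {'A', 'C'}, 'C': {'A', 'B', 'D'}, 'D': {'C'}}
plan = greedy_assign(adj)
print(plan)
```

The clique {A, B, C} forces three distinct frequencies; D can reuse one of them. Real instances add distance-dependent separation constraints rather than simple inequality, which is what makes the operational problem hard.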
Descombes, Xavier. "Méthodes stochastiques en analyse d'image : des champs de Markov aux processus ponctuels marqués." Habilitation à diriger des recherches, Université de Nice Sophia-Antipolis, 2004. http://tel.archives-ouvertes.fr/tel-00506084.
Full textDelplancke, Claire. "Méthodes quantitatives pour l'étude asymptotique de processus de Markov homogènes et non-homogènes." Thesis, Toulouse 3, 2017. http://www.theses.fr/2017TOU30107/document.
The object of this thesis is the study of some analytical and asymptotic properties of Markov processes, and their applications to Stein's method. The point of view consists in the development of functional inequalities in order to obtain upper bounds on the distance between probability distributions. The first part is devoted to the asymptotic study of time-inhomogeneous Markov processes through Poincaré-like inequalities, established by precise estimates on the spectrum of the transition operator. The first investigation takes place within the framework of the Central Limit Theorem, which states the convergence of the renormalized sum of random variables towards the normal distribution. It results in the statement of a Berry-Esseen bound allowing to quantify this convergence with respect to the chi-2 distance, a natural quantity which had not been investigated in this setting. It therefore extends similar results relative to other distances (Kolmogorov, total variation, Wasserstein). Staying within the non-homogeneous framework, we consider a weakly mixing process linked to a stochastic algorithm for median approximation. This process evolves by jumps of two sorts (to the right or to the left) with time-dependent size and intensity. An upper bound on the Wasserstein distance of order 1 between the marginal distribution of the process and the normal distribution is provided when the latter is invariant under the dynamic, and extended to examples where only asymptotic normality holds. The second part concerns intertwining relations between (homogeneous) Markov processes and gradients, which can be seen as refinements of the Bakry-Emery criterion, and their application to Stein's method, a collection of techniques to estimate the distance between two probability distributions. Second-order intertwinings for birth-death processes are stated, going one step further than the existing first-order relations. These relations are then exploited to construct an original and universal method of evaluation of discrete Stein factors, a key component of the Stein-Chen method.
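The Stein-Chen method referred to above turns such bounds into concrete numbers. As a flavour of it, the classical Poisson-approximation bound d_TV(W, Poisson(λ)) ≤ min(1, 1/λ)·Σ p_i² for a sum W of independent Bernoulli(p_i) variables can be checked exactly on a small example (this is the textbook first-order bound, not the second-order factors developed in the thesis):

```python
import math
from itertools import product

ps = [0.1, 0.2, 0.15, 0.05]          # invented success probabilities
lam = sum(ps)

# Exact distribution of W = sum of independent Bernoulli(p_i), by enumeration
dist = {}
for bits in product([0, 1], repeat=len(ps)):
    pr = math.prod(p if b else 1 - p for b, p in zip(bits, ps))
    k = sum(bits)
    dist[k] = dist.get(k, 0.0) + pr

def poisson_pmf(k):
    return math.exp(-lam) * lam ** k / math.factorial(k)

# Total-variation distance (the Poisson tail beyond k=40 is negligible here)
d_tv = 0.5 * sum(abs(dist.get(k, 0.0) - poisson_pmf(k)) for k in range(40))
bound = min(1.0, 1.0 / lam) * sum(p * p for p in ps)
print(d_tv, bound)
```

The exact distance is comfortably below the Stein-Chen bound, which is what makes such bounds useful: they are computable without knowing the exact distribution.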
Pérez, Frédéric. "Travail de l'archaïque et processus graphiques : le fractal comme moyen d'émergence du chaos dans un "groupe psychotique" : son repérage dans les processus graphiques : ou un outil méthodologique pour une survie psychique." Lyon 2, 2004. http://theses.univ-lyon2.fr/documents/lyon2/2004/perez_f.
In a therapeutic approach with persons with deficits (autistic, psychotic), the therapist risks losing his bearings when his work on bonds is attacked. Only a few thought-out and theorized studies are available. To fill this gap, this study will first reconsider various metapsychological notions about the archaic and deficit. In this research, graphic lines are considered as much an object of investigation as a methodological tool. Thus, all the chosen tools (group, graphic lines) will be studied from their base, and from their psychic and physical points of intersection. The assumption suggested here will aim at deepening our methodological tools thoughtfully, to enquire into their effectiveness in the painting/drawing device we will submit to an archaic group. The notion of graphic lines will be studied as the smallest common factor in intra- and inter-subjective organisation. Graphic lines, as well as physical lines, are created from a group environment; not all the data belong to the Euclidean physical environment. From a methodological point of view, there will be two studies: a longitudinal study of the link between graphism and intersubjectivity (for all our clinical cases) and a work based on the analysis of paintings dealing with archaic questions and with the analysis of the bases of dreams. We will define the notions of archaic group, primary intersubjectivity, inertia-carrier and figuration-carrier. Lastly, we will enquire into the therapeutic approach using notions such as psychic attractors in a "fractalisation" process.
Youcef, Samir. "Méthodes et outils d'évaluation de performances des services web." Paris 9, 2009. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=2009PA090031.
Service Oriented Architecture (SOA) has certainly provided answers to many problems that previous technologies, like RMI and CORBA, could not offer. It mainly provides methodological answers to ensure interoperability and low coupling between heterogeneous information systems (IS). However, Web services raise problems of various kinds, such as adapting to changes in the dynamic behavior of a service provider and in the quality of service (QoS) delivered. It is therefore essential to develop methods and tools to monitor and analyze the QoS delivered by services. This PhD thesis is set precisely in the context of developing methods and tools for Web service performance evaluation. To this end, we approached the subject from three angles: the exact computation of the average response time of Web services, the computation of bounds on it, and taking quality of service into account in the discovery and selection of Web services. For the first aspect, we proposed analytical formulas for the exact computation and analysis of the average response time of the various standard BPEL constructs. For the second aspect, we proposed upper bounds on the response time of a composite Web service; the analysis relies on discrete-time Markov chains (DTMC) and the technique used is the coupling of processes. For the third aspect, we proposed an extension of the conventional Web services architecture in order to take QoS into account in their discovery and selection.
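The response-time analysis of composition constructs can be illustrated by simulation: a BPEL sequence adds the service times of its children, while a flow (parallel split/join) takes their maximum. A sketch with exponential service times and invented means, checked against the closed-form expectation of the maximum of independent exponentials (not the thesis's analytical formulas):

```python
import random
random.seed(0)

def sample_exp(mean):
    return random.expovariate(1.0 / mean)

N = 200_000
m1, m2 = 1.0, 2.0           # invented mean service times of two services

# <sequence>: total time is the sum of the two service times
seq = sum(sample_exp(m1) + sample_exp(m2) for _ in range(N)) / N
# <flow>: both run in parallel, total time is the maximum
flow = sum(max(sample_exp(m1), sample_exp(m2)) for _ in range(N)) / N

# For independent exponentials with rates a = 1/m1, b = 1/m2:
# E[max] = 1/a + 1/b - 1/(a+b) = 1 + 2 - 2/3
print(round(seq, 3), round(flow, 3))
```

The simulated means land on 3.0 for the sequence and about 2.33 for the flow, matching the closed forms; exact analysis replaces the sampling by formulas like the one in the comment.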
Magnan, Jean-Christophe. "Représentations graphiques de fonctions et processus décisionnels Markoviens factorisés." Thesis, Paris 6, 2016. http://www.theses.fr/2016PA066042/document.
In decision-theoretic planning, the factored framework (Factored Markov Decision Process, FMDP) has produced several efficient algorithms for solving large sequential decision-making under uncertainty problems. The efficiency of these algorithms relies on data structures such as decision trees or algebraic decision diagrams (ADDs). These planning techniques are exploited in Reinforcement Learning by the SDyna architecture in order to solve large and unknown problems. However, the state-of-the-art learning and planning algorithms used in SDyna require the problem to be specified using only binary variables and/or to use data structures that could be improved in terms of compactness. In this thesis, we present our research work seeking to elaborate and use a new data structure, more efficient and less restrictive, and to integrate it into a new instance of the SDyna architecture. In a first part, we present the state-of-the-art modeling tools used in the algorithms that tackle large sequential decision-making under uncertainty problems. We detail modeling using decision trees and ADDs. Then we introduce the Ordered and Reduced Graphical Representation of Function (ORGRF), a new data structure that we propose in this thesis to deal with the various problems concerning ADDs. We demonstrate that ORGRFs improve on ADDs in modeling large problems. In a second part, we go over the resolution of large sequential decision under uncertainty problems using Dynamic Programming. After introducing the main algorithms, we examine the factored alternative in detail and point out where these factored versions can be improved. We describe our new algorithm that improves on these points and exploits the ORGRFs previously introduced. In a last part, we discuss the use of FMDPs in Reinforcement Learning and introduce a new algorithm to learn the new data structure we propose. Thanks to this new algorithm, a new instance of the SDyna architecture is proposed, based on ORGRFs: the SPIMDDI instance. We test its efficiency on several standard problems from the literature. Finally, we present some work around this new instance: a new algorithm for efficient exploration-exploitation compromise management, aiming to simplify F-RMax, and an application of SPIMDDI to the management of units in a real-time strategy video game.
Brandejsky, Adrien. "Méthodes numériques pour les processus markoviens déterministes par morceaux." Phd thesis, Bordeaux 1, 2012. http://tel.archives-ouvertes.fr/tel-00733731.
Lefebvre, Yannick. "Nouveaux développements et justifications de méthodes de calcul de mesures de performance en sûreté de fonctionnement." Université de Marne-la-Vallée, 2003. http://www.theses.fr/2003MARN0210.
In this thesis, we are interested in the computation of the reliability and availability of complex systems. At first, we consider the case of a Markov process for which the usual quantification methods may be inefficient: the number of states of the system is very large, and system failure is a rare event. Several algorithms have been proposed in the literature to compute reliability in such models by investigating the paths leading to system failure. We extend two of these algorithms in order to improve the efficiency of the reliability computation and to allow the long-run availability computation. In the second part of the thesis, our aim is to take component ageing into account in the system availability computation, more precisely in the framework of a fault-tree model. We unify numerous ageing models proposed in the literature by means of a very general dynamic reliability model. The mathematical properties of this model suggest an accelerated Monte Carlo simulation method. They also clarify the validity and the limits of a simple approach based on the use of software initially designed for the case of non-ageing components. Finally, we study a particular ageing model based on the phase method.
Parisot, Sarah. "Compréhension, modélisation et détection de tumeurs cérébrales : modèles graphiques et méthodes de recalage/segmentation simultanés." Phd thesis, Ecole Centrale Paris, 2013. http://tel.archives-ouvertes.fr/tel-00944541.
Full textBao, Ran. "Modélisation formelle de systèmes de drones civils à l’aide de méthodes probabilistes paramétrées." Thesis, Nantes, 2020. http://www.theses.fr/2020NANT4002.
Unmanned Aerial Vehicles (UAVs) are now widespread in our society and are often used in contexts where they can put people at risk. Studying their reliability, in particular in the context of flight above a crowd, thus becomes a necessity. In this thesis, we study the modeling and analysis of UAVs in the context of their flight plan. To this purpose, we build several parametric probabilistic models of the UAV and use them, together with a given flight plan, to model its trajectory. Our most advanced model takes into account the precision of position estimation by embedded sensors, as well as wind force and direction. The model is analyzed in order to measure the probability that the drone enters an unsafe zone. Because of the nature and complexity of the obtained models, their exact verification using classical tools such as PRISM or PARAM is impossible. We therefore develop a new approximation method, called Parametric Statistical Model Checking. This method has been implemented in a prototype tool, which we tested on this complex case study. Finally, we use the results to propose some ways to improve the safety of the public in our context.
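The kind of probability the thesis approximates can be illustrated with plain statistical model checking: simulate many noisy trajectories and count how often the vehicle leaves a safe corridor, with a binomial confidence half-width on the estimate. Dynamics and numbers below are invented, not the thesis model:

```python
import random
random.seed(1)

def run_is_unsafe(n_steps=100, noise=0.3, limit=3.0):
    """One simulated flight: True if lateral drift ever leaves the corridor."""
    y = 0.0
    for _ in range(n_steps):
        y += random.gauss(0.0, noise)   # lateral wind disturbance per step
        if abs(y) > limit:
            return True
    return False

runs = 20_000
p_hat = sum(run_is_unsafe() for _ in range(runs)) / runs
# 95% confidence half-width for a Bernoulli proportion
half = 1.96 * (p_hat * (1 - p_hat) / runs) ** 0.5
print(f"P(unsafe) ~ {p_hat:.3f} +/- {half:.3f}")
```

The parametric twist of the thesis is to keep quantities such as `noise` symbolic, so the estimated probability becomes a function of the parameters rather than a single number.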
El Haddad, Rami. "Méthodes quasi-Monte Carlo de simulation des chaînes de Markov." Chambéry, 2008. http://www.theses.fr/2008CHAMS062.
Monte Carlo (MC) methods are probabilistic methods based on the use of random numbers in repeated simulations to estimate some parameter. Their deterministic versions are called quasi-Monte Carlo (QMC) methods. The idea is to replace pseudo-random points by deterministic quasi-random points (also known as low-discrepancy point sets or sequences). In this work, we propose and analyze QMC-based algorithms for the simulation of multidimensional Markov chains. The quasi-random points we use are (T,S)-sequences in base b. After recalling the principles of MC and QMC methods and their main properties, we introduce some simple financial models, which serve in the following as numerical examples to test the convergence of the proposed schemes. We focus on problems where the exact solution is known, in order to be able to compute the error and to compare the efficiency of the various schemes. In a first part, we consider discrete-time Markov chains with S-dimensional state spaces. We propose an iterative QMC scheme for approximating the distribution of the chain at any time. The scheme uses a (T,S+1)-sequence in base b for the transitions. Additionally, one needs to re-order the copies of the chain according to their successive components at each time step. We study the convergence of the scheme by making some assumptions on the transition matrix. We assess the accuracy of the QMC algorithm through financial examples; the results show that the new technique is more efficient than the traditional MC approach. Then, we propose a QMC algorithm for the simulation of Markov chains with multidimensional continuous state spaces. The method uses the same re-ordering step as in the discrete setting. We provide convergence results in the case of one-dimensional chains and then in the case of multidimensional chains, under additional assumptions. We illustrate the convergence of the algorithm through numerical experiments; the results show that the new method converges faster than the MC algorithm. In the last part, we consider the problem of the diffusion equation in a spatially non-homogeneous medium. We use a random walk algorithm, in conjunction with a correction of the Gaussian step length. We write a QMC variant of the algorithm by adapting the principles seen for the simulation of Markov chains. We test the method in dimensions 1, 2 and 3 on a problem involving the diffusion of calcium ions in a biological medium. In all the simulations, the results of QMC computations show a strong improvement over MC outcomes. Finally, we give some perspectives and directions for future work.
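The MC/QMC contrast is easy to reproduce in one dimension: replacing pseudo-random points by the van der Corput low-discrepancy sequence shrinks the integration error from O(N^(-1/2)) to nearly O(1/N). A sketch of ordinary QMC integration (not the (T,S)-sequence chain-simulation algorithm of the thesis):

```python
import random
random.seed(42)

def van_der_corput(n, base=2):
    """n-th term of the van der Corput sequence (radical inverse of n)."""
    q, denom = 0.0, 1.0
    while n:
        n, r = divmod(n, base)
        denom *= base
        q += r / denom
    return q

f = lambda x: x * x          # integral over [0,1] is exactly 1/3
N = 10_000
mc = sum(f(random.random()) for _ in range(N)) / N
qmc = sum(f(van_der_corput(i)) for i in range(1, N + 1)) / N
print(abs(mc - 1 / 3), abs(qmc - 1 / 3))
```

The QMC error is bounded by the Koksma-Hlawka inequality (variation of f times the star discrepancy of the point set), which for van der Corput points decays like log(N)/N.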
Keskessa, Bachir. "Contribution à la modélisation didactique d'outils graphiques dans la maîtrise d'un processus en temps réel." Grenoble 2, 1998. http://www.theses.fr/1998GRE29052.
We tackle the problem of functional graphic tools in technology at the transition between secondary school and high school. Functional graphic tools appear to be linked to the system of technical objects within the productive system. The notions of incident, internality and externality introduced here have allowed a fine analysis of semiotic elements in graphic instrumentation within production. The experimental study shows significantly different graphic skills according to the internal or external modes of the productive system. The comparison of these two modes has enabled a modelling of the graphic tool for the command of a productive system in real time, aimed at teaching and training.
Casarin, Roberto. "Méthodes de simulation pour l'estimation bayésienne des modèles à variables latentes." Paris 9, 2007. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=2007PA090056.
Latent variable models are now very common in econometrics and statistics. This thesis mainly focuses on the use of latent variables in mixture modelling, time series analysis and continuous-time models. We follow a Bayesian inference framework based on simulation methods. In the third chapter we propose alpha-stable mixtures in order to account for skewness, heavy tails and multimodality in financial modelling. Chapter four proposes a Markov-switching stochastic-volatility model with a heavy-tailed observable process. We follow a Bayesian approach and make use of a particle filter in order to filter the state and estimate the parameters. Chapter five deals with parameter estimation and the extraction of the latent structure in the volatilities of the US business cycle and stock market valuations. We propose a new regularised SMC procedure for Bayesian inference. In chapter six we employ a Bayesian inference procedure, based on Population Monte Carlo, to estimate the parameters in the drift and diffusion terms of a stochastic differential equation (SDE) from discretely observed data.
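The particle filter used in chapter four can be sketched in its simplest "bootstrap" form on a linear-Gaussian state-space model with invented parameters (a model for which a Kalman filter would give the exact answer, which is what makes it a good sanity check):

```python
import math
import random
random.seed(7)

PHI, SIG_X, SIG_Y, N, T = 0.8, 0.5, 0.5, 2000, 50   # invented parameters

# Simulate a hidden AR(1) state and noisy observations of it
x, ys = 0.0, []
for _ in range(T):
    x = PHI * x + random.gauss(0, SIG_X)
    ys.append(x + random.gauss(0, SIG_Y))

def normal_pdf(v, mu, s):
    return math.exp(-0.5 * ((v - mu) / s) ** 2) / (s * math.sqrt(2 * math.pi))

particles = [0.0] * N
for y in ys:
    # propagate through the dynamics, weight by the likelihood, resample
    particles = [PHI * p + random.gauss(0, SIG_X) for p in particles]
    weights = [normal_pdf(y, p, SIG_Y) for p in particles]
    particles = random.choices(particles, weights=weights, k=N)

est = sum(particles) / N          # filtered mean of the state at time T
print(round(est, 2), round(ys[-1], 2))
```

The filtered mean tracks the observations while discounting their noise; the thesis models replace the linear-Gaussian dynamics with regime switching and heavy tails, where no closed-form filter exists.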
Younès, Sana. "Model checking stochastique par les méthodes de comparaison stochastique." Versailles-St Quentin en Yvelines, 2008. http://www.theses.fr/2008VERS0008.
In this thesis we propose a new method for the verification of Markov chains. Verification of these models is carried out from the transient or stationary distributions of the Markov chain under consideration. We use stochastic comparison methods to obtain bounding measures with which to verify the chain at hand. These measures are obtained by constructing a chain that bounds the original one, which is in general very large. The bounding chains must be simpler to analyse, making it possible to build bounds for models whose numerical solution is difficult or even impossible. We explored several schemes for constructing bounding chains, such as lumpability and the class C, and also developed bounding schemes based on censored Markov chains. Obviously, bounding measures do not always allow a conclusion; in that case the bounding model must be refined, when the bounding scheme permits it. We showed that the bounding methods we propose are relevant for the verification of Markov chains and reduce verification time remarkably.
Mercier, Sophie. "Modèles stochastiques et méthodes numériques pour la fiabilité." Habilitation à diriger des recherches, Université Paris-Est, 2008. http://tel.archives-ouvertes.fr/tel-00368100.
We then turn to the preventive replacement of components that have become obsolete owing to the arrival of new, higher-performing components. The problem here is to determine the optimal strategy for replacing old components with new ones. The results obtained lead to very different strategies depending on whether or not the components have constant failure rates.
The following works are devoted to the numerical evaluation of various reliability quantities, some linked to sums of independent random variables (such as the renewal function), others linked to Markovian or semi-Markovian systems. For each of these quantities, we propose simple and easily computable bounds whose accuracy can be tuned through a time step. Convergence of the bounds is moreover proved, and computational algorithms are given.
We then consider hybrid systems arising in dynamic reliability, whose evolution is modelled by piecewise deterministic Markov processes (PDMP). For such systems, the usual reliability quantities are generally not reachable analytically and must be computed numerically. Since these quantities can be expressed through the marginal distributions of the PDMP (the time-t distributions), we focus specifically on their evaluation. To do so, we first characterise them as the unique solution of a system of integro-differential equations. Then, starting from these equations, we propose two finite-volume schemes to evaluate them, one explicit and one implicit, whose convergence we prove. We then study a test case from the gas industry, which we model with a PDMP and for which we compute various reliability quantities, on the one hand by finite-volume methods and on the other by Monte Carlo simulation. We are also interested in sensitivity analyses: the characteristics of a PDMP are assumed to depend on a family of parameters, and the problem is to compare the influence these different parameters have on a given criterion, over a finite or infinite horizon. This study is carried out through the derivatives of the criterion with respect to the parameters, whose existence we prove and which we compute.
Finally, we briefly present the work carried out by Margot Desgrouas in her thesis on the asymptotic behaviour of PDMPs, and we give an overview of some ongoing work and other projects.
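The availability quantities discussed in these works can be illustrated on the simplest possible system: one repairable component with constant failure and repair rates, whose steady-state availability μ/(λ+μ) is known in closed form and can be checked by Monte Carlo simulation (toy rates, not from the thesis):

```python
import random
random.seed(3)

LAM, MU, T_END = 0.1, 1.0, 200.0    # invented failure and repair rates

def up_at_t_end():
    """Simulate one alternating up/down trajectory; return state at T_END."""
    t, up = 0.0, True
    while True:
        t += random.expovariate(LAM if up else MU)
        if t > T_END:
            return up
        up = not up

runs = 20_000
a_mc = sum(up_at_t_end() for _ in range(runs)) / runs
a_exact = MU / (LAM + MU)           # steady-state availability, about 0.909
print(round(a_mc, 3), round(a_exact, 3))
```

For systems with ageing components or PDMP dynamics no such closed form exists, which is why the works above develop bounds, finite-volume schemes and accelerated simulation instead.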
Ziani, Ahmed. "Interprétation en temps réel de séquence vidéo par exploitation des modèles graphiques probabilistes." Littoral, 2010. http://www.theses.fr/2010DUNK0271.
This research covers the design and implementation of systems for the recognition of scenarios in video image sequences. The upper layers of the recognition system mainly exploit probabilistic graphical approaches (Bayesian networks and hidden Markov models and their extensions), which can effectively handle uncertainties in the interpretation system. A first algorithm for the recognition of sequences of events, combining two extensions of HMMs (hierarchical and semi-Markov), is proposed. It allows complex scenarios to be modelled with a hierarchical structure integrating temporal constraints on the duration of each event. We then propose a prediction technique based on the early recognition of tracks, which allows models unlikely to be consistent with the observations to be dismissed quickly. The last part of the work was the development of a global structure and a modular scenario recognition system. The main advantage of this architecture is the use of probabilistic techniques while integrating temporal reasoning capabilities. The logical architecture of the system follows a multi-agent approach. In order to manage the real-time constraints of the application, the control strategy of the recognition system activates a minimum number of agents according to its internal decisions. The agents of the first layer have the role of highlighting basic events and are built mainly from Bayesian networks or hidden Markov models. The agents of the second, temporal layer are also built from a specific Bayesian network structure; their role is to model explicitly the temporal relationships between the events highlighted by the first layer. The third-level agents take part in the final decision stage using all the decisions of the intermediate agents. The different scenario recognition approaches were tested on various real images in outdoor and indoor environments.
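Scenario recognition with probabilistic models often reduces to scoring an observation sequence under competing HMMs with the forward algorithm and keeping the likelier model. A minimal sketch with two invented two-state models (all probabilities are made up for illustration):

```python
def forward_likelihood(pi, A, B, obs):
    """pi: initial probs, A: transition matrix, B: emission matrix."""
    n = len(pi)
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [B[s][o] * sum(alpha[r] * A[r][s] for r in range(n))
                 for s in range(n)]
    return sum(alpha)

# Model "loitering": tends to stay put (observation 0 = no motion)
m1 = dict(pi=[0.9, 0.1], A=[[0.9, 0.1], [0.2, 0.8]],
          B=[[0.8, 0.2], [0.3, 0.7]])
# Model "crossing": tends to move (observation 1 = motion)
m2 = dict(pi=[0.5, 0.5], A=[[0.5, 0.5], [0.5, 0.5]],
          B=[[0.2, 0.8], [0.1, 0.9]])

obs = [0, 0, 1, 0, 0]                 # mostly static: looks like loitering
l1 = forward_likelihood(m1['pi'], m1['A'], m1['B'], obs)
l2 = forward_likelihood(m2['pi'], m2['A'], m2['B'], obs)
print('loitering' if l1 > l2 else 'crossing')
```

The hierarchical and semi-Markov extensions used in the thesis enrich exactly this scoring step with scenario structure and explicit event durations.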
Marchand, Jean-Louis. "Conditionnement de processus markoviens." Phd thesis, Université Rennes 1, 2012. http://tel.archives-ouvertes.fr/tel-00733301.
Verzelen, Nicolas. "Modèles graphiques gaussiens et sélection de modèles." Phd thesis, Université Paris Sud - Paris XI, 2008. http://tel.archives-ouvertes.fr/tel-00352802.
Using the link between graphical models and linear regression with a Gaussian design, we develop an approach based on model selection techniques. The resulting procedures are analysed from a non-asymptotic point of view. In particular, we prove oracle inequalities and minimax adaptation properties that remain valid in high dimension. The practical performance of the statistical methods is illustrated on simulated data as well as on real data.
Altaleb, Anas. "Méthodes d'échantillonnage par mélanges et algorithmes MCMC." Rouen, 1999. http://www.theses.fr/1999ROUES034.
Full textDridi, Noura. "Estimation aveugle de chaînes de Markov cachées simples et doubles : Application au décodage de codes graphiques." Thesis, Evry, Institut national des télécommunications, 2012. http://www.theses.fr/2012TELE0022.
Full textSince its birth, barcode technology has been widely investigated for automatic identification. When read, a barcode can be degraded by blur caused by bad focusing and/or camera movement. The goal of this thesis is the optimisation of the receiver for 1D and 2D barcodes, using hidden and double Markov models and blind statistical estimation approaches. The first phase of our work consists of modelling the original image and the observed one with a hidden Markov model. New algorithms for joint blur estimation and symbol detection are then proposed, which take into account the non-stationarity of the hidden Markov process. Moreover, a method to select the most relevant blur model, based on a model selection criterion, is proposed; the method is also used to estimate the blur length. Finally, a new algorithm based on the double Markov chain is proposed to deal with digital communication through a long-memory channel. Estimation of such a channel is not possible with the classical detection algorithms based on maximum likelihood, because of their prohibitive complexity. A new algorithm giving a good trade-off between complexity and performance is provided
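The model-selection step described above — choosing among candidate models via a likelihood-based criterion — can be illustrated with a generic HMM. The sketch below is a toy illustration, not the thesis's blur models: the scaled forward algorithm computes the log-likelihood of an observation sequence under two hypothetical candidate HMMs, and the better-fitting model is kept.

```python
import numpy as np

def hmm_loglik(obs, A, B, pi):
    """Log-likelihood of an observation sequence under an HMM (scaled forward algorithm)."""
    alpha = pi * B[:, obs[0]]          # forward variables at t = 0
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()               # rescale to avoid numerical underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # propagate, then weight by emission probability
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

# two hypothetical candidate models (e.g. two blur hypotheses); toy parameters
A1, B1 = np.array([[0.9, 0.1], [0.1, 0.9]]), np.array([[0.9, 0.1], [0.1, 0.9]])  # persistent states
A2, B2 = np.array([[0.5, 0.5], [0.5, 0.5]]), np.array([[0.6, 0.4], [0.4, 0.6]])  # memoryless states
pi = np.array([0.5, 0.5])
obs = [0, 0, 0, 0, 0, 1, 1, 1, 1]      # long runs favour the persistent model
best = max([(hmm_loglik(obs, A1, B1, pi), "model 1"),
            (hmm_loglik(obs, A2, B2, pi), "model 2")])
print(best[1])
```

A full criterion would also penalize model complexity; here both candidates have the same size, so the comparison reduces to the likelihoods.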
Ferrante, Enzo. "Recalage déformable à base de graphes : mise en correspondance coupe-vers-volume et méthodes contextuelles." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLC039/document.
Full textImage registration methods, which aim at aligning two or more images into one coordinate system, are among the oldest and most widely used algorithms in computer vision. Registration methods serve to establish correspondence relationships among images (captured at different times, from different sensors or from different viewpoints) which are not obvious for the human eye. A particular type of registration algorithm, known as graph-based deformable registration methods, has become popular during the last decade given its robustness, scalability, efficiency and theoretical simplicity. The range of problems to which it can be adapted is particularly broad. In this thesis, we propose several extensions to the graph-based deformable registration theory, by exploring new application scenarios and developing novel methodological contributions.Our first contribution is an extension of the graph-based deformable registration framework, dealing with the challenging slice-to-volume registration problem. Slice-to-volume registration aims at registering a 2D image within a 3D volume, i.e. we seek a mapping function which optimally maps a tomographic slice to the 3D coordinate space of a given volume. We introduce a scalable, modular and flexible formulation accommodating low-rank and high order terms, which simultaneously selects the plane and estimates the in-plane deformation through a single shot optimization approach. The proposed framework is instantiated into different variants based on different graph topology, label space definition and energy construction. 
Simulated and real data in the context of ultrasound and magnetic resonance registration (where both framework instantiations as well as different optimization strategies are considered) demonstrate the potential of our method. The other two contributions included in this thesis are related to how semantic information can be encompassed within the registration process (independently of the dimensionality of the images). Currently, most methods rely on a single metric function explaining the similarity between the source and target images. We argue that incorporating semantic information to guide the registration process will further improve the accuracy of the results, particularly in the presence of semantic labels making the registration a domain-specific problem. We consider a first scenario where we are given a classifier inferring probability maps for different anatomical structures in the input images. Our method seeks to simultaneously register and segment a set of input images, incorporating this information within the energy formulation. The main idea is to use these estimated maps of semantic labels (provided by an arbitrary classifier) as a surrogate for unlabeled data, and combine them with population deformable registration to improve both alignment and segmentation. Our last contribution also aims at incorporating semantic information into the registration process, but in a different scenario: instead of supposing that we have pre-trained arbitrary classifiers at our disposal, we are given a set of accurate ground-truth annotations for a variety of anatomical structures. We present a methodological contribution that aims at learning context-specific matching criteria as an aggregation of standard similarity measures from the aforementioned annotated data, using an adapted version of the latent structured support vector machine (LSSVM) framework
Ramdane, Saïd. "Identification automatique de types de formulaires par des méthodes stochastiques markoviennes." Le Havre, 2002. http://www.theses.fr/2002LEHA0018.
Full textForm-type identification is a key operation in an automatic reading system. No distinctive mark is assumed to identify the form. Processing starts with the extraction of rectangular blocks of text, or of rectangles enclosing drawings or images. Since the forms include handwritten fields, the position and dimensions of the rectangular blocks are variable. The merging and fragmentation phenomena resulting from segmentation induce an additional variability in the number of rectangles. This double variability of the rectangles is naturally random. A first statistical method performs recognition by computing a distance that generalizes the Mahalanobis distance. Training must take the merging/fragmentation phenomenon into account. This statistical model turns out to be a Markovian stochastic model of order 0. A second stochastic method rests on the construction of planar hidden Markov models (PHMM: Pseudo-2D Hidden Markov Model). We describe in particular a new unsupervised training of the number of states by a dynamic aggregation method. Recognition is based on the estimation of the conditional probability, computed by an extension of a doubly nested Viterbi algorithm. For both methods, we sought to automate all phases of training and recognition. The experimental results confirm the validity of the two methods
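The first method ranks form classes by a distance generalizing the Mahalanobis distance. As a minimal sketch of the plain (non-generalized) Mahalanobis classifier, on hypothetical 2-D block features:

```python
import numpy as np

def mahalanobis(x, mean, cov):
    """Mahalanobis distance of feature vector x to a class with given mean and covariance."""
    diff = x - mean
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

# two form classes summarized by block-feature statistics (toy 2-D features)
mean_a, cov_a = np.array([0.0, 0.0]), np.eye(2)
mean_b, cov_b = np.array([4.0, 4.0]), np.eye(2)

x = np.array([1.0, 0.5])          # feature vector of an incoming form
d_a = mahalanobis(x, mean_a, cov_a)
d_b = mahalanobis(x, mean_b, cov_b)
print("class A" if d_a < d_b else "class B")  # → class A
```

The class statistics would be estimated during training; the thesis's generalization additionally accounts for the variable number of rectangles caused by merging/fragmentation.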
Kadi, Imène Yamina. "Simulation et monotonie." Versailles-St Quentin en Yvelines, 2011. http://www.theses.fr/2011VERS0029.
Full textThe work of this thesis concerns the contribution of monotonicity to simulation methods. Initially, we focus on the different monotonicity notions used in stochastic modelling and try to establish the relationships between them. Three concepts have been defined in this field: stochastic monotonicity, based on stochastic comparison; realizable monotonicity; and event monotonicity, used in perfect simulation. This study allowed us to exploit stochastic monotonicity properties in monotone perfect simulation. On the other hand, we proposed monotone invertible encodings for systems whose natural representation is not monotone. This encoding accelerates monotone simulations and finds its application in the simulation of optical bursts. Another contribution concerns parallel simulation: monotonicity properties of the simulated systems are used to better parallelize the simulation process, which should produce a substantial acceleration of the simulations
Rozas, Rony. "Intégration du retour d'expérience pour une stratégie de maintenance dynamique." Thesis, Paris Est, 2014. http://www.theses.fr/2014PEST1112/document.
Full textThe optimization of maintenance strategies is a major issue for many industrial applications. It involves establishing a maintenance plan that ensures high levels of safety, security and reliability, at minimal cost and under operational constraints. The increasing number of works on the optimization of maintenance parameters, in particular on the scheduling of preventive maintenance actions, underlines the importance of this issue. A large number of studies on maintenance are based on modelling the degradation of the system studied. Probabilistic graphical models (PGMs), and especially Markovian PGMs (M-PGMs), provide a framework for modelling complex stochastic processes. The issue with this approach is that the quality of the results depends on the model; moreover, the system parameters may change over time. This change is usually the result of a change of supplier for replacement parts or a change in operating parameters. This thesis deals with the dynamic adaptation of a maintenance strategy for a system whose parameters change. The proposed methodology is based on change-detection algorithms in a stream of sequential data and on a new probabilistic inference method specific to dynamic Bayesian networks. Furthermore, the algorithms proposed in this thesis are implemented in the framework of a research project with Bombardier Transportation. The study focuses on the maintenance of the access system of a new railcar designed to operate on the rail network of Île-de-France. The overall objective is to ensure a high level of safety and reliability during train operation
Coq, Guilhelm. "Utilisation d'approches probabilistes basées sur les critères entropiques pour la recherche d'information sur supports multimédia." Poitiers, 2008. http://theses.edel.univ-poitiers.fr/theses/2008/Coq-Guilhelm/2008-Coq-Guilhelm-These.pdf.
Full textModel selection problems appear frequently in a wide array of applicative domains such as data compression and signal or image processing. One of the most used tools to solve those problems is a real quantity to be minimized, called an information criterion or penalized likelihood criterion. The principal purpose of this thesis is to justify the use of such a criterion in response to a given model selection problem, typically set in a signal processing context. The sought justification must have a strong mathematical background. To this end, we study the classical problem of determining the order of an autoregression. We also work on Gaussian regression, allowing the extraction of the principal harmonics of a noisy signal. In both settings we give a criterion whose use is justified by minimizing the cost resulting from the estimation. Higher-order Markov chains model most discrete signals, such as letter sequences or grey-scale images. We consider the determination of the order of such a chain, and then study the seemingly unrelated problem of estimating an unknown density by a histogram. For these two domains, we justify the use of a criterion through coding notions, to which we apply a simple form of the Minimum Description Length principle. Throughout these application domains, we present alternative, so-called comparative, methods of using information criteria; they are less computationally demanding than the usual methods, yet still allow a precise description of the model
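The order-selection problem for an autoregression can be illustrated with a standard penalized criterion. The sketch below uses BIC — one common information criterion, not the specific criteria studied in the thesis — to recover the order of a simulated AR(2) process by least-squares fitting:

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares fit of an AR(p) model; returns the residual sum of squares."""
    n = len(x)
    # column k holds the regressor at lag k+1, aligned with targets x[p:]
    X = np.column_stack([x[p - k - 1:n - k - 1] for k in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.sum((y - X @ coef) ** 2)), len(y)

def bic_order(x, p_max=6):
    """Select the AR order minimising the BIC information criterion."""
    best_p, best_bic = None, np.inf
    for p in range(1, p_max + 1):
        rss, n = fit_ar(x, p)
        bic = n * np.log(rss / n) + p * np.log(n)  # Gaussian log-likelihood + penalty
        if bic < best_bic:
            best_p, best_bic = p, bic
    return best_p

rng = np.random.default_rng(0)
x = np.zeros(500)
for t in range(2, 500):  # simulate an AR(2): x_t = 0.6 x_{t-1} - 0.3 x_{t-2} + noise
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.normal()
print(bic_order(x))
```

The BIC penalty `p * log(n)` grows with the sample size, which is what makes the selected order consistent; the thesis's "comparative" methods avoid fitting every candidate order exhaustively.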
Bercu, Sophie. "Modélisation stochastique du signal écrit par chaînes de Markov cachées : application à la reconnaissance automatique de l'écriture manuscrite." Rennes 1, 1994. http://www.theses.fr/1994REN10115.
Full textNivot, Christophe. "Analyse et étude des processus markoviens décisionnels." Thesis, Bordeaux, 2016. http://www.theses.fr/2016BORD0057/document.
Full textWe investigate the potential of Markov decision process theory through two applications. The first part of this work is dedicated to the numerical study of an industrial launcher integration process, in co-operation with Airbus DS. It is a particular case of inventory control problems in which a launch calendar plays a key role. The model we propose implies that standard optimization techniques cannot be used, so we investigate two simulation-based algorithms. They return non-trivial optimal policies which can be applied in actual practice. The second part of this work deals with the study of partially observable optimal stopping problems. We propose an approximation method using optimal quantization for problems with a general state space. We study the convergence of the approximated optimal value towards the real optimal value, as well as the convergence rate, and apply our method to a numerical example
Dubarry, Cyrille. "Méthodes de lissage et d'estimation dans des modèles à variables latentes par des méthodes de Monte-Carlo séquentielles." Phd thesis, Institut National des Télécommunications, 2012. http://tel.archives-ouvertes.fr/tel-00762243.
Full textAtaya, Abbas. "Renforcement des méthodes de prise de décision par des a priori pour la mesure automatique de l'activité physique des personnes." Thesis, Paris, ENST, 2013. http://www.theses.fr/2013ENST0071/document.
Full textAdvances in technology have led to the miniaturization of motion sensors, facilitating their use in small, comfortable wearable devices. Such devices are of great interest in the biomedical field, especially for applications aimed at estimating the daily physical activity of people. In this thesis, we propose signal processing algorithms allowing better interpretation of the sensor measurements and thus their mapping to different activities. Our approach is based on the observation that activities have strong temporal dependencies: we propose a recognition system that models the activity sequence by a Markov chain. Our system relies on parametric and non-parametric classification methods. The soft output of the classifiers provides confidence measures for the activities; these measures are then used as input to a Viterbi algorithm that gives the final estimate of the activity sequence. We validate our algorithms on a database containing 48 subjects, each of whom carried out activities for more than 90 minutes. Moreover, this thesis aims at providing practical answers to the challenges of developing an activity recognition system. First of all, we consider the optimal sensor placement on the body, and the number of sensors needed for a reliable estimation of activities. We also approach the problem of selecting relevant features for the classifiers. Another crucial issue concerns the estimation of the sensor's orientation on the body, which involves the problem of sensor calibration. Finally, we provide a real-time implementation of our system and collect a database under realistic conditions to validate the implemented real-time demonstrator
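The final smoothing step — feeding per-window confidence measures to a Viterbi decoder with an activity transition prior — can be sketched as follows (toy confidences and transition matrix, not the thesis's trained models):

```python
import numpy as np

def viterbi_smooth(conf, logA):
    """Smooth per-window classifier confidences with an activity transition prior (Viterbi)."""
    T, N = conf.shape
    logc = np.log(conf)
    delta = logc[0].copy()                  # best log-score per activity so far
    psi = np.zeros((T, N), dtype=int)       # backpointers
    for t in range(1, T):
        scores = delta[:, None] + logA      # scores[i, j]: end in i, then move to j
        psi[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + logc[t]
    path = np.zeros(T, dtype=int)
    path[-1] = int(delta.argmax())
    for t in range(T - 2, -1, -1):          # backtrack the best activity sequence
        path[t] = psi[t + 1, path[t + 1]]
    return path

# toy confidences over 3 activities (e.g. walk, sit, run); window 3 is a likely misclassification
conf = np.array([[0.8, 0.1, 0.1],
                 [0.7, 0.2, 0.1],
                 [0.3, 0.1, 0.6],   # isolated "run" spike
                 [0.8, 0.1, 0.1],
                 [0.7, 0.2, 0.1]])
logA = np.log(np.array([[0.90, 0.05, 0.05],
                        [0.05, 0.90, 0.05],
                        [0.05, 0.05, 0.90]]))  # activities tend to persist
print(viterbi_smooth(conf, logA))  # the isolated spike is smoothed away
```

The persistence encoded in the transition matrix is what removes isolated misclassifications that a per-window argmax would keep.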
Coquidé, Célestin. "Analyse de réseaux complexes réels via des méthodes issues de la matrice de Google." Thesis, Bourgogne Franche-Comté, 2020. http://www.theses.fr/2020UBFCD038.
Full textIn a period when people use the Internet more and more and are connected worldwide, our lives become easier. Network science, a recent scientific domain stemming from graph theory, handles such connected complex systems. A network is a mathematical object consisting of a set of nodes and a set of links connecting them. We find networks in nature, such as mycelium networks, which grow underground and are able to feed their cells with organic nutrients located at short and long range, or the circulatory system transporting blood throughout the human body. Networks also exist at a human scale, where humans are the nodes. In this thesis we are interested in what we call real complex networks, that is, networks constructed from databases. We can extract information which is normally hard to get, since such a network might contain a million nodes and a hundred times more links. Moreover, the networks we study are directed, meaning that links have a direction. One can represent a random walk through a directed network with the so-called Google matrix. The PageRank is the leading eigenvector associated with this stochastic matrix and allows us to measure the importance of nodes. We can also build a smaller Google matrix from the Google matrix and a subregion of the network. This reduced Google matrix captures every existing link between the nodes composing the subregion of interest, as well as all possible indirect connections between them obtained by spreading through the entire network. With tools developed from the Google matrix, especially the reduced Google matrix, and considering the network of Wikipedia articles, we have identified interactions between universities of the world as well as their influence. We have also extracted social trends using data related to actual Wikipedia users' behaviour.
Regarding the World Trade Network, we were able to measure the economic response of the European Union to external petroleum and gas price variations. Regarding the world network of economic activities, we identified the interdependence of production sectors related to powerhouses such as the United States of America and China. We also built a crisis contagion model, which we applied to the World Trade Network and to the Bitcoin transaction network
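The Google matrix construction and its leading eigenvector (the PageRank) can be sketched on a toy directed network. The damping factor 0.85 and the uniform handling of dangling nodes are the usual conventions, not specifics of this thesis:

```python
import numpy as np

def google_matrix(adj, alpha=0.85):
    """Google matrix G = alpha*S + (1-alpha)/N, with dangling columns made uniform."""
    N = adj.shape[0]
    S = adj.astype(float)
    col_sums = S.sum(axis=0)
    for j in range(N):
        # normalize each column to a probability vector; dangling nodes jump uniformly
        S[:, j] = S[:, j] / col_sums[j] if col_sums[j] > 0 else 1.0 / N
    return alpha * S + (1 - alpha) / N

def pagerank(G, tol=1e-10):
    """Leading eigenvector of G by power iteration (PageRank probabilities)."""
    N = G.shape[0]
    p = np.full(N, 1.0 / N)
    while True:
        p_new = G @ p
        if np.abs(p_new - p).sum() < tol:
            return p_new
        p = p_new

# adj[i, j] = 1 means a link from node j to node i (column-stochastic convention)
adj = np.array([[0, 1, 1, 0],
                [1, 0, 0, 1],
                [0, 1, 0, 0],
                [0, 0, 1, 0]])
p = pagerank(google_matrix(adj))
print(p.argmax())  # node 1 collects the most link weight → highest PageRank
```

Since G is a positive column-stochastic matrix, the power iteration converges to its unique stationary distribution, which is exactly the random-walk interpretation mentioned in the abstract.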
Stupfler, Gilles Claude. "Un modèle de Markov caché en assurance et estimation de frontière et de point terminal." Strasbourg, 2011. https://publication-theses.unistra.fr/public/theses_doctorat/2011/STUPFLER_Gilles_Claude_2011.pdf.
Full textThis thesis is divided into two parts. We first focus on introducing a new model for loss processes in insurance: a process (J, N, S) where (J, N) is a Markov-modulated Poisson process and S is a process whose components are piecewise constant, nondecreasing processes. The increments of S are assumed to be conditionally independent given (J, N). Assuming further that the distribution of the jumps of S belongs to some parametric family, it is shown that the maximum likelihood estimator (MLE) of the parameters of this model is strongly consistent. An EM algorithm is given to help compute the MLE in practice. The method is used on real insurance data and its performance is examined in finite-sample situations. In an independent second part, the extreme-value theory problem of estimating the (finite) right endpoint of a cumulative distribution function F is considered: given a sample of independent copies of a random variable X with distribution function F, an estimator of the right endpoint of F is designed, using a high-order moments method. We first consider the case when X is nonnegative; the method is then generalised to an arbitrary random variable X with finite right endpoint by introducing another estimator. The asymptotic properties of both estimators are studied, and their performance is examined on simulations. Based upon that work, an estimator of the frontier of the support of a random pair is constructed; its asymptotic properties are discussed, and the method is compared to other classical methods in this framework
Mrad, Mehdi. "Méthodes numériques d'évaluation et de couverture des options exotiques multi-sous-jacents : modèles de marché et modèles à volatilité incertaine." Paris 1, 2008. http://www.theses.fr/2008PA010010.
Full textBentayeb, Salah. "Modélisation et performances des boucles à verrouillage de phase numériques en présence de bruit." Toulouse, ENSAE, 1985. http://www.theses.fr/1985ESAE0005.
Full textJacquemart, Damien. "Contributions aux méthodes de branchement multi-niveaux pour les évènements rares, et applications au trafic aérien." Thesis, Rennes 1, 2014. http://www.theses.fr/2014REN1S186/document.
Full textThe thesis deals with the design and mathematical analysis of reliable and accurate Monte Carlo methods for estimating the (very small) probability that a Markov process reaches a critical region of the state space before a deterministic final time. The underlying idea behind the multilevel splitting methods studied here is to design a nested sequence of increasingly critical intermediate regions, in such a way that reaching an intermediate region, given that the previous one has already been reached, is not so rare. In practice, trajectories are propagated, selected and replicated as soon as the next intermediate region is reached, and it is easy to accurately estimate the transition probability between two successive intermediate regions. The bias due to the time discretization of the Markov process trajectories is corrected using perturbed intermediate regions, as proposed by Gobet and Menozzi. An adaptive version would consist in designing the intermediate regions automatically, using empirical quantiles. However, it is often difficult if not impossible to remember where (in which state) and when (at which time instant) each successful trajectory reached the empirically defined intermediate region. The contribution of the thesis consists in using a first population of pilot trajectories to define the next threshold, in using a second population of trajectories to estimate the probability of exceeding this empirically defined threshold, and in iterating these two steps (definition of the next threshold, and evaluation of the transition probability) until the critical region is reached. The convergence of this adaptive two-step algorithm is studied in the asymptotic framework of a large number of trajectories.
Ideally, the intermediate regions should be defined in terms of the spatial and temporal variables jointly (for example, as the set of states and times for which a scalar function of the state exceeds a time-dependent threshold). The alternative point of view proposed in the thesis is to keep the intermediate regions as simple as possible, defined in terms of the spatial variable only, and to make sure that trajectories that manage to exceed a threshold at an early time instant are replicated more than trajectories that exceed the same threshold at a later time instant. The resulting algorithm combines importance sampling and multilevel splitting. Its performance is evaluated in the asymptotic framework of a large number of trajectories, and in particular a central limit theorem is obtained for the relative approximation error
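The fixed-level variant of multilevel splitting can be sketched on a toy problem: estimating the probability that a Gaussian random walk exceeds a high level before a final time, with trajectories replicated each time an intermediate level is reached. The levels and model below are illustrative, not the thesis's adaptive algorithm or its air-traffic application:

```python
import numpy as np

rng = np.random.default_rng(1)

def run_to_level(x, t, level, T):
    """Propagate a standard Gaussian random walk until it crosses `level` or time T is hit."""
    while t < T and x < level:
        x += rng.normal()
        t += 1
    return x, t, x >= level

def splitting_estimate(levels, T, n=2000):
    """Estimate P(walk reaches levels[-1] before T) as a product of stage fractions."""
    particles = [(0.0, 0)] * n
    prob = 1.0
    for level in levels:
        survivors = []
        for x, t in particles:
            x2, t2, hit = run_to_level(x, t, level, T)
            if hit:
                survivors.append((x2, t2))
        if not survivors:
            return 0.0
        prob *= len(survivors) / n           # conditional transition probability
        idx = rng.integers(len(survivors), size=n)
        particles = [survivors[i] for i in idx]  # replicate survivors back to n particles
    return prob

est = splitting_estimate(levels=[2.0, 4.0, 6.0], T=10)
print(est)
```

Each stage estimates a conditional probability that is not small, so the product estimates the rare-event probability with far lower relative variance than naive Monte Carlo would achieve at the same cost.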
Cannaméla, Claire. "Apport des méthodes probabilistes dans la simulation du comportement sous irradiation du combustible à particules." Paris 7, 2007. http://www.theses.fr/2007PA077082.
Full textThis work is devoted to the evaluation of mathematical expectations in the context of structural reliability. We seek a failure probability estimate (assumed to be low), taking into account the uncertainty of the influential parameters of the system. Our goal is to reach a good compromise between the accuracy of the estimate and the associated computational cost. This approach is used to estimate the failure probability of fuel particles from an HTR-type nuclear reactor, an estimate obtained by means of costly numerical simulations. We consider different probabilistic methods to tackle the problem. First, we consider a variance-reducing Monte Carlo method: importance sampling. For the parametric case, we propose adaptive algorithms to build a series of probability densities that eventually converges to the optimal importance density, and we present several estimates of the mathematical expectation based on this series of densities. Next, we consider a multi-level method using a Markov chain Monte Carlo algorithm. Finally, we turn our attention to the related problem of estimating a (non-extreme) quantile of the physical output of a large-scale numerical code. We propose a controlled stratification method: the random input parameters are sampled in specific regions obtained from a surrogate of the response, and the quantile estimate is then computed from this sample
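The importance-sampling idea — shifting the sampling density toward the failure region and reweighting by the likelihood ratio — can be sketched on the textbook case P(X > t) for a standard Gaussian X. This is a minimal illustration, not the adaptive parametric algorithms of the thesis:

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def failure_prob_is(threshold, shift, n=100_000):
    """Importance-sampling estimate of P(X > threshold), X ~ N(0,1), proposal N(shift, 1)."""
    y = rng.normal(loc=shift, size=n)
    # likelihood ratio phi(y) / phi(y - shift) = exp(shift^2/2 - shift*y)
    w = np.exp(shift ** 2 / 2 - shift * y)
    return float(np.mean((y > threshold) * w))

est = failure_prob_is(threshold=4.0, shift=4.0)
exact = 0.5 * math.erfc(4.0 / math.sqrt(2))  # true tail probability, about 3.17e-5
print(est, exact)
```

Centering the proposal at the threshold means almost every sample lands in the failure region, so the weighted estimator attains a relative error of about one percent here, where naive Monte Carlo with the same budget would see only a handful of failures.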
Harb, Ali. "Faisabilité. Méthodes non standard pour la stabilité des réseaux de files d'attente. GI/GI/q + G." Rouen, 1998. http://www.theses.fr/1998ROUES075.
Full textMessika, Stéphane. "Méthodes probabilistes pour la vérification des systèmes distribués." Phd thesis, École normale supérieure de Cachan - ENS Cachan, 2004. http://tel.archives-ouvertes.fr/tel-00136083.
Full textBen, Mamoun Mouad. "Encadrements stochastiques et évaluation de performances des réseaux." Versailles-St Quentin en Yvelines, 2002. http://www.theses.fr/2002VERS0012.
Full textForsell, Nicklas. "Planification dans le risque et l'incertain : optimisation des stratégies de gestion spatiale des forêts." Toulouse 3, 2009. http://www.theses.fr/2009TOU30260.
Full textThis thesis concentrates on the optimization of large-scale management policies under conditions of risk and uncertainty. In paper I, we address the problem of solving large-scale spatial and temporal natural resource management problems. To model these types of problems, the framework of graph-based Markov decision processes (GMDPs) can be used. Two algorithms for the computation of high-quality management policies are presented: the first is based on approximate linear programming (ALP), the second on mean-field approximation and approximate policy iteration (MF-API). The applicability and efficiency of the algorithms were demonstrated by their ability to compute near-optimal management policies for two large-scale management problems. It was concluded that the two algorithms compute policies of similar quality; however, the MF-API algorithm should be used when both the policy and its expected value are required, while the ALP algorithm may be preferred when only the policy is required. In paper II, a number of reinforcement learning algorithms are presented that can be used to compute management policies for GMDPs when the transition function can only be simulated, because its explicit formulation is unknown. Studies of the efficiency of the algorithms on three management problems led us to conclude that some of them were able to compute near-optimal management policies. In paper III, we used the GMDP framework to optimize long-term forestry management policies under stochastic wind-damage events. The model was demonstrated by a case study of an estate consisting of 1,200 ha of forest land, divided into 623 stands. We concluded that managing the estate according to the risk of wind damage increased the expected net present value (NPV) of the whole estate only slightly, less than 2%, under different wind-risk assumptions.
Most of the stands were managed in the same manner as when the risk of wind damage was not considered. However, the analysis rests on properties of the model that need to be refined before definite conclusions can be drawn
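The GMDP algorithms above exist because exact dynamic programming is only tractable for small state spaces. As a baseline illustration, plain value iteration on a hypothetical three-state forest-management MDP (wait vs harvest; toy numbers, unrelated to the thesis's case study):

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """Value iteration for a small MDP; P[a] is the transition matrix, R[a] the reward vector."""
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)
    while True:
        Q = R + gamma * P @ V        # Q[a, s] = R[a, s] + gamma * sum_s' P[a, s, s'] * V[s']
        V_new = Q.max(axis=0)
        if np.abs(V_new - V).max() < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new

# toy MDP: states = stand age {young, mature, old}, actions = {wait, harvest}
P = np.array([
    [[0.0, 1.0, 0.0],    # wait: young -> mature
     [0.0, 0.1, 0.9],    # wait: mature mostly -> old
     [0.0, 0.0, 1.0]],   # wait: old stays old (wind-damage risk ignored in this toy)
    [[1.0, 0.0, 0.0],    # harvest: stand returns to young
     [1.0, 0.0, 0.0],
     [1.0, 0.0, 0.0]],
])
R = np.array([
    [0.0, 0.0, 0.0],     # wait yields nothing now
    [0.0, 1.0, 2.0],     # harvest revenue grows with stand age
])
V, policy = value_iteration(P, R)
print(policy)  # optimal actions per state: wait, wait, harvest
```

This table-based approach scales exponentially with the number of stands, which is precisely why the GMDP framework factorizes the problem over a graph of local interactions.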
Guidara, Rima. "Méthodes markoviennes pour la séparation aveugle de signaux et images." Toulouse 3, 2009. http://thesesups.ups-tlse.fr/705/.
Full textThis thesis presents new Markovian methods for the blind separation of instantaneous linear mixtures of one-dimensional signals and images. In the first part, we propose several improvements to an existing method for separating temporal signals. The new method simultaneously exploits the non-Gaussianity, autocorrelation and non-stationarity of the sources. Excellent performance is obtained for the separation of artificial mixtures of speech signals, and we succeed in separating real mixtures of astrophysical spectra. An extension to image separation is then proposed. The dependence between the image pixels is modelled by non-symmetrical half-plane Markov random fields. Very good performance is obtained for the separation of artificial mixtures of natural images and noiseless observations of the Planck satellite. The results obtained with a low noise level are acceptable
Hallouli, Khalid. "Reconnaissance de caractères par méthodes markoviennes et réseaux bayésiens." Phd thesis, Télécom ParisTech, 2004. http://pastel.archives-ouvertes.fr/pastel-00000740.
Full textTombuyses, Béatrice. "Modélisation markovienne en fiabilité: réduction des grands systèmes." Doctoral thesis, Universite Libre de Bruxelles, 1994. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/212700.
Full textThe first part of this thesis concerns the modelling of industrial installations and the construction of the transition matrix. The goal pursued is the development of a Markovian code allowing a realistic and easy description of the system. The system is described in terms of multi-state components: pumps, valves…
The definition of a series of standard rules allows the introduction of dependencies between components. Thanks to the standardized modelling of the system, an algorithm for the automatic construction of the transition matrix is developed. The introduction of maintenance or inspection operations is also presented.
The second part deals with techniques for reducing the size of the matrix, in order to make the treatment of large installations possible. Indeed, the number of states grows exponentially with the number of components, which usually limits the installations that can be analysed to about ten components. The classical reduction techniques are reviewed:
accessibility of the states,
separation of groups of independent components,
symmetry and exact aggregation of states (cf. Papazoglou). The notion of component symmetry must be adapted to take into account the dependencies that may exist between components.
An approximate aggregation method for computing the reliability and availability of groups of two-state components is developed.
The third part of the thesis contains an original approach to the use of the Markovian method: the development of a reduction technique based on the component influence graph. An influence graph is built from the dependencies existing between components. From this graph, a non-homogeneous Markovian system is constructed, describing in an approximate way the behaviour of the exact system. The results obtained on various examples are very good.
The fourth part of this thesis deals with the numerical problems related to the integration of the differential system of the Markovian problem. These problems result mainly from the stiff character of the system. Various classical methods are implemented for the integration of the differential system and tested on a typical reliability problem.
Finally, the CAMERA code, in which the various techniques presented above have been implemented, is described.
Doctorate in applied sciences
Zurita, Julio. "Styles cognitifs et processus d'acquisition des connaissances spatiales dans des environnements cartographiques : encodage et traitement de l'information en mémoire de travail, construction des représentations et des référentiels au cours de la localisation et l'orientation spatiale." Montpellier 3, 2002. http://www.theses.fr/2002MON30040.
Full textThe cognitive style (global vs analytical strategies) of the subjects is influenced by the performance of phonological and visuo-spatial information processing in working memory (WM). The same holds for the construction of spatial representations in route processing, at both complex and simple indication levels. As soon as certain (semantic and iconic) cues are removed from these routes, the differences in spatial data processing become homogeneous whatever the strategy adopted by the subjects. It is thus possible to conclude that the difference in performance does not result solely from the perceptive mode of acquisition characterizing the cognitive style of the subjects, but also depends on the complexity and nature of the information to be processed. In addition, processing is influenced by the mental load caused by the amount of information contained in the maps, as well as by the phenomenon of vicariance observed between the strategies