Dissertations on the topic "Méthodes basées sur les données"
Consult the top 50 dissertations for research on the topic "Méthodes basées sur les données".
Hassani, Bertrand Kian. "Quantification des risques opérationnels : méthodes efficientes de calcul de capital basées sur des données internes." Paris 1, 2011. http://www.theses.fr/2011PA010009.
Bernard, Francis. "Méthodes d'analyse des données incomplètes incorporant l'incertitude attribuable aux valeurs manquantes." Mémoire, Université de Sherbrooke, 2013. http://hdl.handle.net/11143/6571.
Leboucher, Julien. "Développement et évaluation de méthodes d'estimation des masses segmentaires basées sur des données géométriques et sur les forces externes : comparaison de modèles anthropométriques et géométriques." Valenciennes, 2007. http://ged.univ-valenciennes.fr/nuxeo/site/esupversions/e2504d99-e61b-4455-8bb3-2c47771ac853.
Use of body segment parameters close to reality is of the utmost importance for obtaining reliable kinetics in human motion analysis. In most human movement studies, the body is modelled as a set of rigid solids. This research aims at developing and testing two methods for estimating the masses of these solids, also known as segment masses. Both methods are based on the principle of static equilibrium for several solids. The first method estimates limb masses from the displacements of the total limb centre of mass and of the centre of pressure, the projection on the horizontal plane of the subject's whole-body centre of gravity. Because the ratio between these displacements equals the ratio of limb mass to total body mass, knowing the latter allows the former to be calculated. The second method estimates all segment masses simultaneously by solving series of static equilibrium equations, under the same assumption that the centre of pressure is the projection of the whole-body centre of mass, and using estimates of the segment centres of mass. The interest of the new methods lies in combining individual segment centre-of-mass estimates from a geometrical model with equipment routinely used in human motion analysis to obtain body segment mass estimates. The limb mass estimation method performs better than other methods at predicting a posteriori centre of mass displacement. Some potential causes of the second method's failure were investigated through a study of the uncertainty in centre of pressure location.
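The ratio argument in this abstract reduces to one line of arithmetic; a minimal sketch with illustrative names:

```python
def limb_mass(total_mass_kg, limb_com_shift_m, cop_shift_m):
    """Static-equilibrium ratio: when only the limb moves, the centre
    of pressure (CoP) shift relates to the limb CoM shift as
    m_limb / m_total, so m_limb = m_total * d_CoP / d_limbCoM."""
    return total_mass_kg * cop_shift_m / limb_com_shift_m

# A 0.30 m limb-CoM shift producing a 0.015 m CoP shift on a 75 kg
# subject yields a 3.75 kg limb estimate.
print(limb_mass(75.0, 0.30, 0.015))
```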
Irain, Malik. "Plateforme d'analyse de performances des méthodes de localisation des données dans le cloud basées sur l'apprentissage automatique exploitant des délais de messages." Thesis, Toulouse 3, 2019. http://www.theses.fr/2019TOU30195.
Cloud usage is a necessity today, as the data produced and used by all types of users (individuals, companies, administrative structures) have become too large to be stored otherwise. It requires signing, explicitly or not, a contract with a cloud storage provider. This contract specifies the required quality-of-service levels for various criteria, among them the location of the data. However, this criterion is not easily verifiable by a user, which is why research on data location verification has produced several studies in recent years; the proposed solutions can still be improved. The work proposed in this thesis studies solutions for location verification by a user, i.e. solutions that estimate data location using landmarks. The implemented approach can be summarized as follows: exploit communication delays and use network time models to estimate, with some distance error, the data location. To this end, the work carried out is as follows:
• a survey of the state of the art on the different methods used to provide users with location information;
• the design of a unified notation for the surveyed methods, with a proposal of two scores to assess them;
• the implementation of a network measurement collection platform, with which two datasets were collected, at national and international level, and used to evaluate the different methods presented in the survey;
• the implementation of an evaluation architecture based on the two datasets and the defined scores, which establishes the quality of the methods (success rate) and the quality of the results (accuracy) thanks to the proposed scores.
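A minimal sketch of the delay-based estimation idea (illustrative, far simpler than the methods surveyed in the thesis): each landmark's round-trip time bounds the distance to the data, and the location estimate is the point that best satisfies those bounds.

```python
import numpy as np

def locate(landmarks_xy, rtts_s, speed_m_per_s=1.0e8):
    """Crude delay-based localization sketch. Each landmark constrains
    the target to a disc of radius (RTT / 2) * propagation speed; we
    return the grid point minimising the total constraint violation."""
    pts = np.asarray(landmarks_xy, dtype=float)
    radii = np.asarray(rtts_s) / 2.0 * speed_m_per_s
    pad = radii.max()
    xs = np.linspace(pts[:, 0].min() - pad, pts[:, 0].max() + pad, 120)
    ys = np.linspace(pts[:, 1].min() - pad, pts[:, 1].max() + pad, 120)
    best, best_cost = None, float("inf")
    for x in xs:
        for y in ys:
            d = np.hypot(pts[:, 0] - x, pts[:, 1] - y)
            cost = np.maximum(d - radii, 0.0).sum()   # overshoot per disc
            if cost < best_cost:
                best, best_cost = (x, y), cost
    return best

# Three landmarks 200 km apart, RTTs in seconds (1e8 m/s is a common
# rule of thumb of roughly half the speed of light in fibre).
print(locate([(0, 0), (2e5, 0), (0, 2e5)], [2.4e-3, 2.4e-3, 3.0e-3]))
```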
Cerra, Daniele. "Contribution à la théorie algorithmique de la complexité : méthodes pour la reconnaissance de formes et la recherche d'information basées sur la compression des données." Phd thesis, Télécom ParisTech, 2010. http://pastel.archives-ouvertes.fr/pastel-00562101.
Eng, Catherine. "Développement de méthodes de fouille de données basées sur les modèles de Markov cachés du second ordre pour l'identification d'hétérogénéités dans les génomes bactériens." Thesis, Nancy 1, 2010. http://www.theses.fr/2010NAN10041/document.
Second-order Hidden Markov Models (HMM2) are stochastic processes with a high efficiency in exploring bacterial genome sequences. Different types of HMM2 (M1M2, M2M2, M2M0) combined with combinatorial methods were developed in a new approach to discriminate genomic regions without a priori knowledge of their genetic content. This approach was applied to two bacterial models in order to validate its achievements: Streptomyces coelicolor and Streptococcus thermophilus. These bacterial species exhibit distinct genomic traits (base composition, global genome size) in relation to their ecological niche: soil for S. coelicolor and dairy products for S. thermophilus. In S. coelicolor, a first HMM2 architecture allowed the detection of short discrete DNA heterogeneities (5-16 nucleotides in size), mostly localized in intergenic regions. The application of the method to a biologically known gene set, the SigR regulon (involved in the oxidative stress response), proved its efficiency in identifying bacterial promoters. S. coelicolor shows a complex regulatory network (up to 12% of the genes may be involved in gene regulation) with more than 60 sigma factors involved in the initiation of transcription. A classification method coupled with a searching algorithm (i.e. R'MES) was developed to automatically extract box1-spacer-box2 composite DNA motifs, a structure corresponding to the typical bacterial promoter -35/-10 boxes. Among the 814 DNA motifs described for the whole S. coelicolor genome, those of sigma factors (B, WhiG) could be retrieved from the raw data. We showed that this method can be generalized by applying it successfully, in a preliminary attempt, to the genome of Bacillus subtilis.
Boudoin, Pierre. "L'interaction 3D adaptative : une approche basée sur les méthodes de traitement de données multi-capteurs." Phd thesis, Université d'Evry-Val d'Essonne, 2010. http://tel.archives-ouvertes.fr/tel-00553369.
Ta, Minh Thuy. "Techniques d'optimisation non convexe basée sur la programmation DC et DCA et méthodes évolutives pour la classification non supervisée." Thesis, Université de Lorraine, 2014. http://www.theses.fr/2014LORR0099/document.
This thesis focuses on four problems in data mining and machine learning: clustering data streams, clustering massive data sets, weighted hard and fuzzy clustering, and clustering without prior knowledge of the number of clusters. Our methods are based on deterministic optimization approaches, namely DC (Difference of Convex functions) programming and DCA (Difference of Convex functions Algorithm), for solving some classes of clustering problems cited before, as well as on elitist evolutionary approaches. We adapt the clustering algorithm DCA-MSSC to deal with data streams using two window models: sub-windows and sliding windows. For the problem of clustering massive data sets, we propose a two-phase DCA algorithm: in the first phase, the massive data set is divided into several subsets, on which DCA-MSSC performs clustering; in the second phase, a DCA-Weight algorithm performs a weighted clustering on the centers obtained in the first phase. For weighted clustering, we also propose two approaches, weighted hard clustering and weighted fuzzy clustering, and test our approach on an image segmentation application. The final issue addressed in this thesis is clustering without prior knowledge of the number of clusters. We propose an elitist evolutionary approach in which several evolutionary algorithms (EAs) are applied at the same time to find both the optimal combination of initial cluster seeds and the optimal number of clusters. The various tests performed on several large data sets are very promising and demonstrate the effectiveness of the proposed approaches.
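The two-phase scheme for massive data sets can be sketched compactly, with k-means standing in for DCA-MSSC and DCA-Weight (illustrative only):

```python
import numpy as np
from sklearn.cluster import KMeans

def two_phase_clustering(X, n_chunks=10, k=5):
    """Phase 1: cluster each chunk of the data separately.
    Phase 2: weighted clustering of the chunk centres, each centre
    weighted by the number of points it represents (k-means stands
    in here for the DCA-based algorithms of the thesis)."""
    centres, weights = [], []
    for chunk in np.array_split(X, n_chunks):          # phase 1
        km = KMeans(n_clusters=k, n_init=10).fit(chunk)
        centres.append(km.cluster_centers_)
        weights.append(np.bincount(km.labels_, minlength=k))
    final = KMeans(n_clusters=k, n_init=10)            # phase 2
    final.fit(np.vstack(centres), sample_weight=np.concatenate(weights))
    return final.cluster_centers_
```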
Heritier-Pingeon, Christine. "Une aide à la conception de systèmes de production basée sur la simulation et l'analyse de données." Lyon, INSA, 1991. http://tel.archives-ouvertes.fr/docs/00/84/01/51/PDF/1991_Heritier-Pingeon_Christine.pdf.
New forms of competition are pushing manufacturing systems towards more and more flexibility. In the case of highly automated systems, decisions taken in the design phase have a great influence on the possibilities of the future system, on its ease of adaptation to change, and thus on its degree of flexibility. This work is a study of methods and tools for decision support in the design of manufacturing systems. The reader is first introduced to the scope, then to the tools and methods employed. The workshop model used to support the approach is then presented and the construction of a simulation plan considered. These considerations are made concrete by defining a module for the automated generation of simulation plans associated with the chosen workshop model. Data analysis is then used as a knowledge acquisition method: a method of analysis is proposed and tested. This work was developed to explore the possibilities of data analysis in this field and to evaluate them on the basis of numerous experiments.
Costache, Mihai. "Support vector machines et méthodes bayésiennes pour l'apprentissage sémantique fondé sur des catégories : recherche dans les bases de données d'imagerie satellitaire." Paris, ENST, 2008. http://www.theses.fr/2008ENST0026.
Nowadays large volumes of multimedia data are available, originating from different domains of human activity such as photography, television channels, remote sensing applications, etc. For all these data there is a clear need for tools and methods allowing an optimal data organisation, so that the content can be accessed quickly and efficiently. The number of operational EO satellites increases every year and generates an explosion of the acquired data volume. For instance, on average between 10 and 100 gigabytes of image data are recorded daily by the optical and Synthetic Aperture Radar (SAR) sensors on board EO satellites. ESA's environmental satellite, Envisat, deployed in 2002, collects around 18 terabytes of multisensor data per year. This leads to a total of about 10 terabytes of annually recorded data, a huge volume to be processed and interpreted. In classical remote sensing applications, the generated data volumes are treated manually by experts specialised in each domain of application. However, this type of analysis is costly and time-consuming. Moreover, it allows the processing of only a small fraction of the available data, as user-based image interpretation proceeds at a far lower pace than the one at which recorded images are sent to the ground stations. To cope with these two major aspects of remote sensing, there is a clear need for highly efficient search tools for EO image archives and for search mechanisms able to identify and recognise structures within EO images; moreover, these systems should be fast and work with high precision. Such a system should automatically classify the available digital image collection, based on prior training supervised by an expert. In this way, the image database is better organised and images of interest can be identified more easily than by manual expert image interpretation alone. The task is to infer knowledge from EO image archives by means of human-machine interaction, which enables the transfer of human expertise to the machine through knowledge inference algorithms that interpret the human decisions and translate them into conceptual levels. In this way, the EO image information search and extraction process is automated, allowing fast responses adapted to human queries.
Carles, Olivier. "Système de génération automatique de bases de données pour la simulation de situations de conduite fondée sur l'interaction de ses différents acteurs." Toulouse 3, 2001. http://www.theses.fr/2001TOU30160.
Mahmoudysepehr, Mehdi. "Modélisation du comportement du tunnelier et impact sur son environnement." Thesis, Centrale Lille Institut, 2020. http://www.theses.fr/2020CLIL0028.
This PhD thesis research work consists in understanding the behavior of the TBM (tunnel boring machine) according to the environment encountered, in order to propose safe, durable and high-quality solutions for the digging of the tunnel. The main objective of this doctoral work is to better understand the behavior of the TBM according to its environment. Thus, we explore how the TBM reacts to the different types of ground and how it acts on the various elements of the tunnel structure (voussoirs). This makes it possible to propose an intelligent and optimal dimensioning of the voussoirs and adapted piloting instructions.
Codaccioni, Marc. "Évaluation de l’exposition fœtale aux substances chimiques grâce à la modélisation pharmacocinétique basée sur la physiologie (PBPK) et son application aux données d’imprégnation des populations." Thesis, Paris, Institut agronomique, vétérinaire et forestier de France, 2020. http://www.theses.fr/2020IAVF0019.
Numerous biomonitoring studies have shown the exposure of pregnant women to synthetic substances. In parallel, several epidemiological studies have highlighted associations between maternal blood concentrations measured during pregnancy, or cord blood concentrations measured at birth, and adverse effects in the offspring at birth or later in life. However, this type of measurement does not guarantee being representative of in utero exposures throughout pregnancy. Furthermore, it is not possible to measure fetal concentrations longitudinally, for obvious ethical reasons. Pregnancy physiologically-based pharmacokinetic (pPBPK) models allow the simulation of xenobiotic internal exposures in different maternal and fetal organs during gestation. Therefore, they offer an opportunity to better estimate the relationship between the dose and the risk of a toxic effect by considering tissue dosimetry. Although pPBPK models often incorporate physiological changes associated with pregnancy, some processes are still poorly known, such as placental transfer (PT). The aim of the thesis is to improve the integration of PT in pPBPK modelling in order to predict fetal internal exposures from biomonitoring data. First, a review of the published pPBPK models was conducted, with a focus on the various model structures used to describe PT. It identified 12 structures among 50 original models, corresponding to 4 types of kinetic profiles according to the number of transfer constants. In vivo animal data were identified as the main source supporting their parameterization, although they cannot be directly extrapolated to humans and imply the killing of numerous animals. On this basis, we developed a pPBPK model integrating four transfer models calibrated using non-animal methods, so as to assess their performance in predicting fetal dosimetry on a set of ten substances. Our results show that performance varied among models and substances, preventing the identification of a reference predictive model. Monte-Carlo simulations showed that one of the transfer models differed from the others in terms of fetal exposure variation across trimesters. Finally, a global sensitivity analysis showed that the transfer constants, and to a lesser extent the metabolic clearance and fraction unbound, strongly influence simulated fetal exposure. The last part of the thesis consisted in applying the developed pPBPK model to estimate the internal fetal concentrations of two PCB and two PBDE substances from maternal plasma concentrations observed in the French ELFE cohort. To that end, we selected a specific PT model for each compound based on the prediction of the fetal-to-maternal concentration ratio at term. The ranking of chemicals based on the simulated exposure indicators varied between mother and fetus at term, as well as between the first and the other two trimesters in fetal plasma. In conclusion, this work highlights the potential of pPBPK modelling in prenatal exposure assessment. It demonstrates the ability of a model to simulate adequate internal exposure indicators, from mechanistic and temporal points of view, notably from biomonitoring data. Furthermore, in light of strong ethical and regulatory constraints, this work indicates the role of alternative methods in the parameterization of key processes of the internal fetal dose, such as the transplacental passage.
This work could be used for the assessment of the prenatal exposome as well as in the developmental toxicity risk assessment of a substance.
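The role of the placental transfer constants can be illustrated with a deliberately minimal two-compartment model; this is an assumption-laden sketch for intuition, not one of the four calibrated transfer models of the thesis:

```python
from scipy.integrate import solve_ivp

def maternal_fetal(t, y, k_mf, k_fm, k_elim, intake):
    """Maternal (m) and fetal (f) amounts with bidirectional
    first-order placental transfer constants k_mf and k_fm,
    maternal elimination k_elim and a constant maternal intake."""
    m, f = y
    dm = intake - (k_elim + k_mf) * m + k_fm * f
    df = k_mf * m - k_fm * f
    return [dm, df]

# Arbitrary illustrative parameters (1/h) over 240 h
sol = solve_ivp(maternal_fetal, (0, 240), [0.0, 0.0],
                args=(0.05, 0.02, 0.1, 1.0))
m_end, f_end = sol.y[:, -1]
print(f_end / m_end)   # crude fetal-to-maternal amount ratio near steady state
```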
Ars, Sébastien. "Caractérisation des émissions de méthane à l'échelle locale à l'aide d'une méthode d'inversion statistique basée sur un modèle gaussien paramétré avec les données d'un gaz traceur." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLV030/document.
The increase of atmospheric methane concentrations since the beginning of the industrial era is directly linked to anthropogenic activities. This increase is partly responsible for the enhancement of the greenhouse effect, leading to a rise of Earth's surface temperatures and a degradation of air quality. Considerable uncertainties remain regarding estimates of methane emissions from many sources at local scale; a better characterization of these sources would help the implementation of effective adaptation and mitigation policies to reduce these emissions. To do so, we have developed a new method to quantify methane emissions from local sites based on the combination of mobile atmospheric measurements, a Gaussian model and a statistical inversion. These atmospheric measurements are carried out within the framework of the tracer method, which consists in emitting a gas co-located with the methane source at a known flow rate. An estimate of methane emissions can be obtained by measuring the tracer and methane concentrations through the emission plume coming from the site. This method has some limitations, especially when several sources and/or extended sources are found on the studied site: in these conditions, the collocation of the tracer and methane sources is difficult. The Gaussian model makes it possible to take this imperfect collocation into account, and gives a separate estimate for each source of a site where the classical tracer release method only gives an estimate of total emissions; the statistical inversion takes into account the uncertainties associated with the model and the measurements. The method is based on the use of the measured tracer gas concentrations to choose the stability class of the Gaussian model that best represents the atmospheric conditions during the measurements. These tracer data are also used to parameterize the error associated with the measurements and the model in the statistical inversion. We first tested this new method with controlled emissions of tracer and methane, positioning the sources in different configurations in order to better understand the contributions of this method compared to the traditional tracer method. These tests demonstrated that the statistical inversion parameterized by the tracer gas data gives better estimates of methane emissions when the tracer and methane sources are not perfectly collocated or when there are several sources of methane. Second, I applied this method to two sites known for their methane emissions, a farm and a gas distribution facility. These measurements enabled us to test the applicability and robustness of the method under more complex methane source distributions, and gave better estimates of the total methane emissions of these sites, taking into account the location of the tracer relative to the methane sources. Separate estimates of every source within a site are highly dependent on the meteorological conditions during the measurements; the analysis of the correlations on the posterior uncertainties between the different sources gives a diagnostic of the separability of the sources. Finally, I focused on methane emissions associated with the waste sector, carrying out several measurement campaigns in landfills and wastewater treatment plants and also using data collected on this type of site during other projects. I selected the most suitable method to estimate the methane emissions of each site, and the estimates obtained show the variability of methane emissions in the waste sector.
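The classical tracer release estimate that this work builds on reduces to a ratio of plume-integrated mixing ratios; a minimal sketch (illustrative function names, acetylene assumed as tracer):

```python
def tracer_ratio_flux(q_tracer_kg_h, ch4_ppb, tracer_ppb,
                      mw_ch4=16.04, mw_tracer=26.04):
    """Classical tracer-release estimate (the baseline the thesis
    improves on): methane flux = tracer flux x ratio of the plume-
    integrated mixing ratios, converted by molar masses. Inputs are
    background-corrected mixing ratios sampled along a plume transect;
    defaults assume acetylene (C2H2) as the tracer gas."""
    ratio = sum(ch4_ppb) / sum(tracer_ppb)
    return q_tracer_kg_h * ratio * (mw_ch4 / mw_tracer)

# Example: transect where methane enhancements are ~3x the tracer's
print(tracer_ratio_flux(1.0, [10, 30, 12], [4, 9, 5]))
```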
Bernard, Anne. "Développement de méthodes statistiques nécessaires à l'analyse de données génomiques : application à l'influence du polymorphisme génétique sur les caractéristiques cutanées individuelles et l'expression du vieillissement cutané." Phd thesis, Conservatoire national des arts et metiers - CNAM, 2013. http://tel.archives-ouvertes.fr/tel-00925074.
Georgescu, Vera. "Classification de données multivariées multitypes basée sur des modèles de mélange : application à l'étude d'assemblages d'espèces en écologie." Phd thesis, Université d'Avignon, 2010. http://tel.archives-ouvertes.fr/tel-00624382.
Bas, Patrick. "Méthodes de tatouage d'images basées sur le contenu." Grenoble INPG, 2000. http://www.theses.fr/2000INPG0089.
Ghoumari, Asmaa. "Métaheuristiques adaptatives d'optimisation continue basées sur des méthodes d'apprentissage." Thesis, Paris Est, 2018. http://www.theses.fr/2018PESC1114/document.
Problems of continuous optimization are numerous, in economics, in signal processing, in neural networks, and so on. One of the best-known and most widely used solutions is the evolutionary algorithm, a metaheuristic based on evolutionary theories that borrows stochastic mechanisms and has shown good performance in solving problems of continuous optimization. The use of this family of algorithms is very popular, despite the many difficulties that can be encountered in their design: these algorithms have several parameters to adjust and many operators to set according to the problem to solve. The literature describes a plethora of operators, and it becomes complicated for the user to know which one to select in order to get the best possible result. In this context, the main objective of this thesis is to propose methods to solve the problems raised without deteriorating the performance of these algorithms. We thus propose two algorithms:
- a method based on the maximum a posteriori principle that uses diversity probabilities to choose the operators to apply, and which regularly reconsiders this choice;
- a method based on a dynamic graph of operators representing the transition probabilities between operators, relying on a model of the objective function built by a neural network to regularly update these probabilities.
These two methods are detailed and analyzed on a continuous optimization benchmark.
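One common way to realise the adaptive operator choice described above is probability matching, sketched below; this is an illustrative baseline, not the maximum a posteriori method of the thesis:

```python
import random

def probability_matching(op_rewards, p_min=0.05):
    """Select an operator with probability proportional to its
    observed reward, floored at p_min so that no operator is ever
    abandoned (the floor keeps the choice regularly reconsidered)."""
    n, total = len(op_rewards), sum(op_rewards.values())
    probs = {op: p_min + (1 - n * p_min) * ((r / total) if total > 0 else 1.0 / n)
             for op, r in op_rewards.items()}
    ops, weights = zip(*probs.items())
    return random.choices(ops, weights=weights)[0]

# Hypothetical operator names and reward estimates
rewards = {"gaussian_mutation": 3.2, "polynomial_mutation": 1.1, "de_rand_1": 0.4}
print(probability_matching(rewards))
```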
Bui, Huyen Chi. "Méthodes d'accès basées sur le codage réseau couche physique." Thesis, Toulouse, ISAE, 2012. http://www.theses.fr/2012ESAE0031.
In the domain of satellite networks, the emergence of low-cost interactive terminals motivates the need to develop and implement multiple-access protocols able to support different user profiles. In particular, the European Space Agency (ESA) and the German Aerospace Center (DLR) have recently proposed random access protocols such as Contention Resolution Diversity Slotted ALOHA (CRDSA) and Irregular Repetition Slotted ALOHA (IRSA). These methods are based on physical-layer network coding and successive interference cancellation in order to resolve collisions on a Slotted ALOHA-type return channel. This thesis aims to improve existing random access methods. We introduce Multi-Slot Coded Aloha (MuSCA) as a new generalization of CRDSA. Instead of transmitting copies of the same packet, the transmitter sends several parts of a codeword of an error-correcting code; each part is preceded by a header allowing the receiver to locate the other parts of the codeword. At the receiver side, all parts transmitted by the same user, including those interfered with by other signals, are involved in the decoding. The decoded signal is then subtracted from the total signal. Thus, the overall interference is reduced and the remaining signals are more likely to be decoded. Several methods of performance analysis based on theoretical tools (capacity computation, density evolution) and on simulations are proposed. The results show a significant gain in throughput compared to existing access methods. This gain can be increased further by varying the rate of the code. Following these concepts, we also propose an application of physical-layer network coding based on superposition modulation for deterministic access on a satellite return channel. We observe a gain in throughput compared to more conventional strategies such as time division multiplexing.
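The successive interference cancellation mechanism that CRDSA-type protocols rely on is easy to illustrate with a toy simulation (illustrative parameters, not the MuSCA decoder):

```python
import random

def crdsa_round(n_users, n_slots, copies=2, max_iter=50):
    """Each user sends `copies` replicas in distinct random slots; any
    slot left with a single replica is decoded and all replicas of
    that user are cancelled (successive interference cancellation)."""
    slots = [set() for _ in range(n_slots)]
    placement = {}
    for u in range(n_users):
        placement[u] = random.sample(range(n_slots), copies)
        for s in placement[u]:
            slots[s].add(u)
    decoded = set()
    for _ in range(max_iter):
        clean = [s for s in range(n_slots) if len(slots[s]) == 1]
        if not clean:
            break
        for s in clean:
            if len(slots[s]) == 1:            # may have been cancelled already
                u = slots[s].pop()
                decoded.add(u)
                for s2 in placement[u]:       # cancel every replica
                    slots[s2].discard(u)
    return len(decoded) / n_users             # fraction of users resolved

print(crdsa_round(n_users=40, n_slots=100))
```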
Abat, Cédric. "Développement de nouveaux outils informatiques de surveillance en temps réel des phénomènes anormaux basés sur les données de microbiologie clinique du laboratoire de la Timone." Thesis, Aix-Marseille, 2015. http://www.theses.fr/2015AIXM5029/document.
Although considered under control in the second half of the 20th century thanks to the discovery of antimicrobials, infectious diseases remain a serious threat to humanity. Regardless of the state of knowledge we possess on these diseases, all remain unpredictable. To fight this phenomenon, many monitoring strategies have been developed, leading to the implementation of various epidemiological surveillance computer programs to detect and identify, as early as possible, abnormal events including epidemic phenomena. The initial objective of our work was to implement, within the Institut Hospitalo-Universitaire Méditerranée Infection and based on the Microsoft Excel software, two new automated computer programs for the weekly epidemiological surveillance of abnormal epidemic events, using clinical microbiology data from the Timone teaching hospital of the Assistance Publique-Hôpitaux de Marseille (AP-HM). Once these were completed, we worked to develop a comprehensive monitoring structure incorporating the investigation and validation of alarms emitted by the surveillance systems, the transmission of alerts to the Regional Health Agency (ARS) of Provence-Alpes-Côte d'Azur (PACA), the public dissemination of confirmed abnormal events through scientific articles, and the implementation of feedback and weekly epidemiological bulletins to inform local actors of infectious disease epidemiological surveillance.
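The weekly detection step of such surveillance systems can be illustrated by a simple historical-threshold rule; this is only a sketch of the general idea, the actual Excel-based tools described above being more elaborate:

```python
import statistics

def weekly_alarm(history, current, z=2.0):
    """Raise an alarm when this week's isolate count exceeds the
    historical mean by z standard deviations (illustrative rule)."""
    mu = statistics.mean(history)
    sd = statistics.stdev(history)
    return current > mu + z * sd

# Eight historical weekly counts, then an unusually high week
print(weekly_alarm([3, 5, 4, 6, 2, 5, 4, 3], current=12))  # True
```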
Larab, Ounissa. "Intégration de schémas de bases de données : méthode de raffinement des relations de correspondance basée sur les logiques de description." Lyon, INSA, 1997. http://www.theses.fr/1997ISAL0123.
The work we present in this thesis is part of a project dealing with the elaboration of a design method for federated multidatabase systems. It consists in proposing a semi-automatic method to integrate heterogeneous database schemas (relational, object-oriented, network…). The success of schema integration in multidatabase systems relies heavily on the determination of complete and refined correspondence relationships between them. The candidate schemas to be integrated must therefore be semantically rich and precise, i.e., each of their data elements must be sufficiently defined to be distinguished from, or identified with, the others. To reach this goal, we use terminological logics (description logics) as a common data model to make the schemas to be integrated uniform. We suppose that the translation of the schemas into terminological logics is already done. Terminologies are then the entry point of our correspondence refinement process and integration method. To refine correspondences between terminologies, we start with a semantic enrichment phase in which we extend the term descriptions with semantic properties: additional knowledge related to the local context of the terms or to the global context of the federation. The conjunction of the terminological reasoning of the BACK system (implementing terminological logics) and the semantic properties allows us to refine correspondences between terminologies, to identify the data elements representing the same semantics, and then to solve their schematic differences before integrating them.
Cruz, Rodriguez Lidice. "Méthodes de dynamique quantique ultrarapide basées sur la propagation de trajectoires." Thesis, Toulouse 3, 2018. http://www.theses.fr/2018TOU30254/document.
In this thesis, different trajectory-based methods for the study of quantum mechanical phenomena are developed. The first approach is based on a global expansion of the hydrodynamic fields in Chebyshev polynomials. The scheme is used for the study of one-dimensional vibrational dynamics of bound wave packets in harmonic and anharmonic potentials. Furthermore, a different methodology is developed which, starting from a parametrization previously proposed for the density, allows the construction of effective interaction potentials between the pseudo-particles representing the density. Within this approach several model problems are studied, and important quantum mechanical effects such as zero-point energy, tunneling, barrier scattering and over-barrier reflection are found to be correctly described by the ensemble of interacting trajectories. The same approximation is used to study laser-driven atom ionization. A third approach considered in this work consists in the derivation of an approximate many-body quantum potential for cryogenic Ar and Kr matrices with an embedded Na impurity. To this end, a suitable ansatz for the ground state wave function of the solid is proposed. This allows the construction of an approximate quantum potential, which is employed in molecular dynamics simulations to obtain the absorption spectra of the Na impurity isolated in the rare gas matrix.
Verney, Philippe. "Interprétation géologique de données sismiques par une méthode supervisée basée sur la vision cognitive." Phd thesis, École Nationale Supérieure des Mines de Paris, 2009. http://pastel.archives-ouvertes.fr/pastel-00005861.
Do, Van Huyen. "Les méthodes d'interpolation pour données sur zones." Thesis, Toulouse 1, 2015. http://www.theses.fr/2015TOU10019/document.
The combination of several socio-economic databases originating from different administrative sources, collected on different partitions of a geographic zone of interest into administrative units, induces the so-called areal interpolation problem: that of allocating the data from a set of source spatial units to a set of target spatial units. At the European level, for example, the EU directive 'INSPIRE' (INfrastructure for Spatial InfoRmation) encourages the member states to provide socio-economic data on a common grid to facilitate economic studies across states. In the literature, there are three main types of such techniques: proportional weighting schemes, smoothing techniques, and regression-based interpolation. We propose a theoretical evaluation of these statistical techniques in the case of count-related data. We extend some of these methods to new cases: for example, we extend the ordinary dasymetric weighting method to the case of an intensive target variable Y and an extensive auxiliary quantitative variable X, and we introduce a scaled version of the Poisson regression method which satisfies the pycnophylactic property. We present an empirical study on an American database as well as an R package implementing these methods.
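The ordinary dasymetric weighting scheme mentioned above has a compact form for an extensive variable Y: each source zone's value is allocated to a target zone in proportion to an auxiliary variable X measured on the source-target intersection. A minimal sketch (illustrative names):

```python
def dasymetric(source_values, aux_intersection, aux_source_total):
    """Dasymetric weighting for an extensive variable Y: the target
    zone receives, from each source zone, the share of Y given by the
    auxiliary variable X on the source-target intersection."""
    return sum(y * x_int / x_tot
               for y, x_int, x_tot in
               zip(source_values, aux_intersection, aux_source_total))

# Target zone overlapping two source zones (counts Y, auxiliary X):
# it receives 30% of zone 1's count and 10% of zone 2's, i.e. 500.
print(dasymetric([1000, 2000], [3.0, 1.0], [10.0, 10.0]))
```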
Paindavoine, Marie. "Méthodes de calculs sur les données chiffrées." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSE1009/document.
Nowadays, encryption and services issued from "big data" are at odds. Encryption is about protecting users' privacy, while big data is about analyzing users' data. Being increasingly concerned about security, users tend to encrypt their sensitive data that may be accessed by other parties, including service providers. This hinders the execution of services requiring computation on users' data, forcing users to choose between these services and their private life. We address this challenge in this thesis along two directions. In the first part, we study fully homomorphic encryption, which makes it possible to perform arbitrary computation on encrypted data. However, this kind of encryption is still inefficient, due in part to the frequent execution throughout evaluation of a costly procedure, the bootstrapping. Efficiency is thus inversely proportional to the number of bootstrappings needed to evaluate functions on encrypted data. In this thesis, we prove that finding this minimum is NP-complete, and we design a new method that efficiently finds a good approximation of it. In the second part, we design schemes providing precise functionalities. The first is verifiable deduplication on encrypted data, which allows a server to make sure it keeps only one copy of each uploaded file, even if the files are encrypted, resulting in an optimization of storage resources. The second is intrusion detection over encrypted traffic: current encryption techniques blind intrusion detection services, putting the final user at risk. Our results make it possible to reconcile users' right to privacy with their need to keep their network clear of intrusions.
Silveira, Filho Geraldo. "Contributions aux méthodes directes d'estimation et de commande basées sur la vision." Phd thesis, École Nationale Supérieure des Mines de Paris, 2008. http://pastel.archives-ouvertes.fr/pastel-00005340.
Moalla, Koubaa Ikram. "Caractérisation des écritures médiévales par des méthodes statistiques basées sur la cooccurrences." Lyon, INSA, 2009. http://theses.insa-lyon.fr/publication/2009ISAL0128/these.pdf.
The purpose of this work is to elaborate methodologies to describe and compare ancient handwritten writings. The developed image features are global and do not require any segmentation. We propose new robust features based on second-order statistics, introducing the generalized co-occurrence matrix concept, which measures the joint probability of any information extracted from the images. This new statistical measure is an extension of the grey-level co-occurrence matrix used until now to characterize textures. We propose spatial co-occurrence matrices relative to the orientations and local curvatures of the forms, and parametric matrices which measure the evolution of an image under successive transformations. Because the number of descriptors obtained is very high, we design methods using eigen co-occurrence matrices to reduce it. In the application part, we propose clustering methods for medieval writings to test our propositions; the number of groups and their contents depend on the parameters used and the methods applied. We also developed a content-based image retrieval system to search for similar writings. Within the framework of the ANR-MCD Graphem project, we elaborate methods to analyse and observe the evolution of the writings of the Middle Ages.
Le, Cornec Kergann. "Apprentissage Few Shot et méthode d'élagage pour la détection d'émotions sur bases de données restreintes." Thesis, Université Clermont Auvergne (2017-2020), 2020. http://www.theses.fr/2020CLFAC034.
Emotion detection plays a major part in human interactions, a good understanding of the speaker's emotional state leading to a better understanding of their speech. It is de facto the same in human-machine interactions. In the area of emotion detection using computers, deep learning has emerged as the state of the art. However, classical deep learning techniques perform poorly when training sets are small. This thesis explores two possible ways of tackling this issue: pruning and few-shot learning. Many pruning methods exist but focus on maximising pruning without losing too much accuracy. We propose a new pruning method, improving the choice of the weights to remove. This method is based on the rivalry of two networks, the original network and a network we name rival. The idea is to share weights between both models in order to maximise accuracy. During training, weights impacting the accuracy negatively are removed, thus optimising the architecture while improving accuracy. This technique is tested on different networks as well as different databases and achieves state-of-the-art results, improving accuracy while pruning a significant percentage of weights. The second part of this thesis explores matching networks (both siamese and triplet) as an answer to learning on small datasets. Sounds and images were merged to learn their main features in order to detect emotions. We show that, while restricting ourselves to 200 training instances for each class, the triplet network reaches the state of the art (trained on hundreds of thousands of instances) on some databases. We also show that, in the area of emotion detection, triplet networks provide a better vectorial embedding of the emotions than siamese networks, and thus deliver better results. A new loss function based on the triplet loss is also introduced, facilitating the training of triplet and siamese networks. To allow a better comparison of our model, different methods are used to provide elements of validation, especially on the vectorial embedding. In the long term, both methods can be combined to propose lighter, optimised networks: as the number of parameters is lowered by pruning, the triplet network should learn more easily and could achieve better performance.
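For reference, the classical triplet loss underlying the matching networks discussed above can be written as follows (the thesis introduces a modified version; this is the standard formulation):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Pull the anchor towards a same-class (positive) embedding and
    push it away from a different-class (negative) one."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)

a = np.array([0.1, 0.9])
p = np.array([0.2, 0.8])
n = np.array([0.9, 0.1])
print(triplet_loss(a, p, n))   # 0.0: the negative is already far enough
```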
Marteau, Hubert. "Une méthode d'analyse de données textuelles pour les sciences sociales basée sur l'évolution des textes." Tours, 2005. http://www.theses.fr/2005TOUR4028.
This PhD thesis aims at bringing sociologists a data-processing tool which allows them to analyse semi-structured open interviews. The proposed tool performs in two steps: an indexation of the interviews followed by a classification. Usually, indexing methods rely on a general statistical analysis. Such methods are suited to texts having content and structure (literary texts, scientific texts, ...), which have more vocabulary and structure than interviews (limited to about 1000 words for such texts). On the basis of the assumption that sociological membership strongly shapes the form of the speech, we propose various methods to evaluate the structure and evolution of the texts. The methods attempt to find new representations of the texts (image, signal) and to extract values from these new representations. The selected classification is a tree-based classification (neighbour joining, NJ): it has low complexity and respects distances, making it a good solution for providing help with classification.
Ahmed, Mohamed Salem. "Contribution à la statistique spatiale et l'analyse de données fonctionnelles." Thesis, Lille 3, 2017. http://www.theses.fr/2017LIL30047/document.
This thesis is about statistical inference for spatial and/or functional data. Indeed, we are interested in estimation of unknown parameters of some models from random or non-random (stratified) samples composed of independent or spatially dependent variables. The specificity of the proposed methods lies in the fact that they take into consideration the considered sample nature (stratified or spatial sample). We begin by studying data valued in a space of infinite dimension or so-called "functional data". First, we study a functional binary choice model explored in a case-control or choice-based sample design context. The specificity of this study is that the proposed method takes into account the sampling scheme. We describe a conditional likelihood function under the sampling distribution and a reduction of dimension strategy to define a feasible conditional maximum likelihood estimator of the model. Asymptotic properties of the proposed estimates as well as their application to simulated and real data are given. Secondly, we explore a functional linear autoregressive spatial model whose particularity is the functional nature of the explanatory variable and the structure of the spatial dependence. The estimation procedure consists of reducing the infinite dimension of the functional variable and maximizing a quasi-likelihood function. We establish the consistency and asymptotic normality of the estimator. The usefulness of the methodology is illustrated via simulations and an application to some real data. In the second part of the thesis, we address some estimation and prediction problems of real random spatial variables. We start by generalizing the k-nearest neighbors method, namely k-NN, to predict a spatial process at non-observed locations using some covariates. The specificity of the proposed k-NN predictor lies in the fact that it is flexible and allows a number of heterogeneity in the covariate. We establish the almost complete convergence with rates of the spatial predictor whose performance is ensured by an application over simulated and environmental data. In addition, we generalize the partially linear probit model of independent data to the spatial case. We use a linear process for disturbances allowing various spatial dependencies and propose a semiparametric estimation approach based on weighted likelihood and generalized method of moments methods. We establish the consistency and asymptotic distribution of the proposed estimators and investigate the finite sample performance of the estimators on simulated data. We end by an application of spatial binary choice models to identify UADT (upper aerodigestive tract) cancer risk factors in the north region of France, which displays the highest rates of such cancer incidence and mortality of the country.
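As a point of comparison for the generalised predictor described above, a baseline spatial k-NN predictor (without covariates) can be sketched in a few lines; names are illustrative:

```python
import numpy as np

def spatial_knn(coords, values, target, k=5):
    """Inverse-distance-weighted mean of the k nearest observations."""
    d = np.hypot(coords[:, 0] - target[0], coords[:, 1] - target[1])
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-12)
    return np.sum(w * values[idx]) / np.sum(w)

coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])
values = np.array([10.0, 12.0, 11.0, 20.0])
print(spatial_knn(coords, values, target=(0.4, 0.4), k=3))
```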
Gayraud, Nathalie. "Méthodes adaptatives d'apprentissage pour des interfaces cerveau-ordinateur basées sur les potentiels évoqués." Thesis, Université Côte d'Azur (ComUE), 2018. http://www.theses.fr/2018AZUR4231/document.
Non-invasive Brain Computer Interfaces (BCIs) allow a user to control a machine using only their brain activity. The BCI system acquires electroencephalographic (EEG) signals, characterized by a low signal-to-noise ratio and an important variability both across sessions and across users. Typically, the BCI system is calibrated before each use, in a process during which the user has to perform a predefined task. This thesis studies the sources of this variability, with the aim of exploring, designing, and implementing zero-calibration methods. We review the variability of the event-related potentials (ERP), focusing mostly on a late component known as the P300. This allows us to quantify the sources of EEG signal variability. Our solution to tackle this variability is to focus on adaptive machine learning methods, in particular three transfer learning methods: Riemannian geometry, optimal transport, and ensemble learning. We propose a model of the EEG that takes variability into account. The parameters resulting from our analyses allow us to calibrate this model in a set of simulations, which we use to evaluate the performance of the aforementioned transfer learning methods. These methods are then combined and applied to experimental data. We first propose a classification method based on optimal transport; then, we introduce a separability marker which we use to combine Riemannian geometry, optimal transport and ensemble learning. Our results demonstrate that the combination of several transfer learning methods produces a classifier that efficiently handles multiple sources of EEG signal variability.
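To make the Riemannian-geometry ingredient concrete, here is a minimal sketch (not the thesis code) of the affine-invariant distance between EEG covariance matrices and the minimum-distance-to-mean classifier commonly built on it:

```python
import numpy as np
from scipy.linalg import eigh

def riemann_dist(A, B):
    """Affine-invariant distance between SPD covariance matrices:
    sqrt of the sum of squared logs of the generalised eigenvalues."""
    w = eigh(B, A, eigvals_only=True)
    return np.sqrt(np.sum(np.log(w) ** 2))

def mdm_classify(cov, class_means):
    """Minimum distance to mean: pick the class whose mean covariance
    is closest to the trial covariance in the Riemannian sense."""
    return min(class_means, key=lambda c: riemann_dist(cov, class_means[c]))

# sqrt(log(1)^2 + log(2)^2 + log(4)^2) for two diagonal SPD matrices
print(riemann_dist(np.eye(3), np.diag([1.0, 2.0, 4.0])))
```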
Gueguen, Juliette. "Evaluation des médecines complémentaires : quels compléments aux essais contrôlés randomisés et aux méta-analyses ?" Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS072/document.
Complementary medicines are numerous and varied, and their use is widespread and increasing. The quality and quantity of evaluation data depend on the type of complementary medicine, but there are few consensual conclusions about their effectiveness, even when the literature is abundant. We start with an inventory of the adequacy, for complementary medicines, of the conventional methods used for drug evaluation, namely randomized controlled trials (RCTs) and meta-analyses. Through three practical applications, we then consider the contribution of other methods, less recognized to date in the field of evidence-based medicine but potentially contributive in shedding light on other perspectives. In particular, we discuss the advantages of mixed methods, qualitative studies and the exploitation of large health administrative databases. We conduct a mixed-methods review of the assessment of hypnosis for labor and childbirth, a qualitative study on the experience of qi gong by patients hospitalized for severe anorexia nervosa, and a study of the potential of the French national health insurance database (SNIIRAM) for evaluating complementary medicines. The first two axes lead us to question the choice of outcomes and measurement tools used in RCTs and to value and legitimate the patient's perspective. More broadly, they invite us to question the hierarchical vision of qualitative and quantitative research that traditionally attributes supremacy to quantitative studies, and to replace it with a synergistic vision of qualitative and quantitative approaches. The third axis enables us to identify the current limits to the use of the SNIIRAM for the evaluation of complementary medicines, both technically and in terms of representativeness; we propose concrete measures to make its exploitation possible and relevant in this field. Finally, in the general discussion, we take into account the fact that the evaluation of complementary medicines is not part of a marketing authorization process: contrary to drug evaluation, it does not always imply decision making. We emphasize the importance of considering the aim (knowledge or decision) when developing a research strategy, and we propose two different strategies based on the literature and the results of our three examples. Concerning the research strategy aimed at decision-making, we show the importance of defining the intervention, identifying the relevant outcomes, and optimizing the intervention first, before carrying out pragmatic clinical trials to evaluate its effectiveness. We discuss the regulatory challenges that complementary medicine evaluation confronts us with, and stress the need to assess the safety of these practices by developing appropriate monitoring systems.
Cuevas, Vicenttin Victor. "Evaluation des requêtes hybrides basées sur la coordination des services." Phd thesis, Université de Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00630601.
Aboa, Yapo Jean-Pascal. "Méthodes de segmentation sur un tableau de variables aléatoires." Paris 9, 2002. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=2002PA090042.
Ben, Othman Amroussi Leila. "Conception et validation d’une méthode de complétion des valeurs manquantes fondée sur leurs modèles d’apparition." Caen, 2011. http://www.theses.fr/2011CAEN2067.
Knowledge discovery from incomplete databases is a thriving research area. In this thesis, the main focus is put on the proposal of a missing values completion method. We approach this issue by defining the appearance patterns of the missing values: we propose a new typology according to the given data and characterize these missing values in a non-redundant manner by means of the basis of proper implications. An algorithm computing this basis of rules, relying heavily on results from hypergraph theory, is also introduced in this thesis. We then exploit the information provided by the characterization stage to propose a new contextual completion method, which completes the missing values with respect to both their type and their appearance context. The non-random missing values are completed with special values intrinsically containing the explanation defined by the characterization schemes. Finally, we investigate the evaluation techniques for missing values completion methods and introduce a new technique based on the stability of a clustering when applied to reference data and to completed data.
Tang, Ahanda Barnabé. "Extension des méthodes d'analyse factorielle sur des données symboliques." Paris 9, 1998. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=1998PA090047.
Ban, Tian. "Méthodes et architectures basées sur la redondance modulaire pour circuits combinatoires tolérants aux fautes." Phd thesis, Télécom ParisTech, 2012. http://pastel.archives-ouvertes.fr/pastel-00933194.
Messine, Frédéric. "Méthodes d'optimisation globale basées sur l'analyse d'intervalle pour la résolution de problèmes avec contraintes." Toulouse, INPT, 1997. http://www.theses.fr/1997INPT082H.
Sossa, Hasna El. "Quelques méthodes d'éléments finis mixtes raffinées basées sur l'utilisation des champs de Raviart-Thomas." Valenciennes, 2001. https://ged.uphf.fr/nuxeo/site/esupversions/6f5acb08-fa86-418a-bee2-65cc41f30556.
In this work, we study mesh refinement for mixed finite element methods, for two types of problems: the Laplace problem and the Stokes problem. For these two types of problems in non-regular domains, the methods analysed until now are those concerning "traditional" mixed formulations, such as the velocity-pressure formulation for the Stokes problem. Here we analyse, for the Laplace equation, the dual mixed formulation in (p := grad u, u), and for the Stokes system, the dual mixed formulation in ((σ := grad u, p), u). For the Laplace problem, we approximate p on each triangle K of the triangulation by a Raviart-Thomas vector field of degree 0 (resp. of degree 1) and u by a constant on each triangle K (resp. by a polynomial of degree 1). To recapture convergence of order 1, we must refine the meshes according to Raugel's method. We then treat the case of finite elements of quadrilateral type and propose appropriate regular families of quadrangulations in order to obtain the optimal order of convergence. We next investigate the Stokes system. We approximate on each triangle K each of the two rows of the tensor σ by a Raviart-Thomas vector field of degree 0 (resp. 1), the pressure p by a constant (resp. by a polynomial of degree 1) and u by a constant vector field (resp. by a vector field whose components are polynomials of degree 1). Using an appropriate refined mesh of Raugel's type, we obtain an error estimate of order h (resp. of order h²), similar to those of the regular case. Finally, we treat finite elements of quadrilateral type, using refined families of quadrangulations analogous to those proposed for the Laplace problem, to obtain the optimal order of convergence.
Camelin, Nathalie. "Stratégies robustes de compréhension de la parole basées sur des méthodes de classification automatique." Avignon, 2007. http://www.theses.fr/2007AVIG0149.
The work presented in this PhD thesis deals with automatic Spoken Language Understanding (SLU) in multiple-speaker applications that accept spontaneous speech. The study consists in integrating automatic classification methods into the speech decoding and understanding processes. My work consists in adapting methods which have already shown good performance on text to the particularities of the outputs of an Automatic Speech Recognition system. The main difficulty in processing this type of data is the uncertainty in the input parameters for the classifiers. Among all existing automatic classification methods, we chose three. The first is based on Semantic Classification Trees; the two others, considered among the best performing in the machine learning community, are large-margin methods based on boosting and on support vector machines. A sequence labelling method, Conditional Random Fields (CRF), is also studied and used. Two applicative frameworks are investigated:
- PlanResto, a human-computer dialogue application for tourism: it enables users to ask for information about restaurants in Paris in natural language, and the real-time speech understanding process consists in building a request for a database. Within this framework, the consensual agreement of the different classifiers, considered as semantic experts, is used as a confidence measure;
- SCOrange, a spoken telephone survey corpus: the purpose is to collect messages from mobile users expressing their opinion about the customer service. The off-line speech understanding process consists in evaluating the proportions of opinions about a topic and a polarity. The classifiers enable the extraction of users' opinions in a strategy that can reliably evaluate the distribution of opinions and their temporal evolution.
Truc, Loïc. "Développement et application d'une méthode de reconstitution paléoclimatique quantitative basée sur des données polliniques fossiles en Afrique australe." Thesis, Montpellier 2, 2013. http://www.theses.fr/2013MON20200/document.
Located at the interface between tropical and temperate climate systems, southern Africa is a particularly sensitive region in terms of long-term climate change. However, few reliable paleoclimatic records exist for the region, largely as a result of the arid climate, which precludes the preservation of wetland sequences, and virtually no quantitative reconstructions are available. The aim of this thesis is to develop a quantitative palaeoclimate reconstruction method based on the relation between modern plant distributions and climate in southern Africa. We develop botanical-climatological transfer functions derived from probability density functions (pdfs), allowing quantitative estimates of palaeoclimatic variables to be calculated from fossil pollen assemblages. In addition, a species-selection method (SSM) based on Bayesian statistics is outlined, which provides a parsimonious choice of the most likely plant species from what are otherwise taxonomically broad pollen types. This method addresses limitations imposed by the low taxonomic resolution of pollen identification, which is particularly problematic in areas of high biodiversity such as many regions of southern Africa. This methodology has been applied to a pollen record from Wonderkrater (South Africa). Results indicate that temperatures during both the warm and cold seasons were 6±2°C colder during the Last Glacial Maximum and Younger Dryas, and that rainy season precipitation during the Last Glacial Maximum was ~50% of that during the mid-Holocene. Our results also imply that changes in precipitation at Wonderkrater generally track changes in Mozambique Channel sea-surface temperatures, with a steady increase following the Younger Dryas to a period of maximum water availability at Wonderkrater ~3-7 ka. These findings indicate that the northern and southern tropics experienced similar climatic trends during the last 20 kyr, and highlight the role of variations in sea-surface temperatures over the more popularly perceived role of a shifting Intertropical Convergence Zone in determining long-term environmental trends. The method has also been applied to a pollen record from Pakhuis Pass, in the Fynbos Biome (South Africa). Results show the limitations of quantitative methods, with an unrealistically low amplitude being reconstructed between the Last Glacial Maximum and the Holocene (~2°C). However, the reconstructed temperature trends, if not amplitudes, are similar to trends observed in Antarctic ice core records. Further, in reconstructing past humidity, we show that over the last 18 kyr cooler conditions appear to be generally wetter at the site. These results are consistent with the Cockcroft (1987) model, derived from an equatorward shift of the westerlies resulting from expansions of the circumpolar vortex. This study shows the potential of using modern plant distributions to estimate past climate parameters in southern Africa, and the species selection method proves to be a useful tool in regions with high biodiversity. This work provides a novel perspective in the region, where no quantitative paleoclimatic reconstructions have been available. However, results from Pakhuis Pass highlight some of the limitations of this methodology, which will be the subject of future work in this promising field of inquiry.
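The core of a pdf-based transfer function can be stated compactly: each taxon in a fossil assemblage contributes a density over the climate variable, and the reconstruction maximises their joint density. A minimal sketch (illustrative names; the actual implementation, including species selection and uncertainty treatment, is more involved):

```python
import numpy as np

def pdf_reconstruction(taxon_pdfs, climate_grid):
    """Reconstruct a climate value as the maximiser of the joint
    (log) density of all taxa present in the assemblage.
    `taxon_pdfs` is a list of arrays, each a density evaluated on
    `climate_grid`."""
    log_joint = np.sum([np.log(p + 1e-300) for p in taxon_pdfs], axis=0)
    return climate_grid[np.argmax(log_joint)]

grid = np.linspace(0, 30, 301)                  # temperature grid, °C
pdf1 = np.exp(-0.5 * ((grid - 18) / 3) ** 2)    # taxon optimum 18 °C
pdf2 = np.exp(-0.5 * ((grid - 22) / 4) ** 2)    # taxon optimum 22 °C
print(pdf_reconstruction([pdf1, pdf2], grid))   # ~19.4 °C
```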
Coq, Guilhelm. "Utilisation d'approches probabilistes basées sur les critères entropiques pour la recherche d'information sur supports multimédia." Poitiers, 2008. http://theses.edel.univ-poitiers.fr/theses/2008/Coq-Guilhelm/2008-Coq-Guilhelm-These.pdf.
Model selection problems appear frequently in a wide array of application domains such as data compression and signal or image processing. One of the most widely used tools to solve these problems is a real-valued quantity to be minimized, called an information criterion or penalized likelihood criterion. The principal purpose of this thesis is to justify, on solid mathematical grounds, the use of such a criterion in response to a given model selection problem, typically set in a signal processing context. To this end, we study the classical problem of determining the order of an autoregression. We also work on Gaussian regression, which allows the principal harmonics of a noisy signal to be extracted. In both settings we give a criterion whose use is justified by the minimization of the cost resulting from the estimation. Multiple Markov chains model most discrete signals, such as letter sequences or grey-scale images; we consider the determination of the order of such a chain. Continuing in this vein, we study the seemingly distant problem of estimating an unknown density by a histogram. For these two problems, we justify the use of a criterion through coding notions, to which we apply a simple form of the Minimum Description Length principle. Across these application domains, we also present alternative ways of using information criteria. These so-called comparative methods are less costly to apply than the usual methods, yet still allow a precise description of the model.
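A penalized likelihood criterion of this kind is easy to state concretely. The sketch below (an illustration under a standard Gaussian-noise assumption, not code from the thesis) selects the order of an autoregression by minimizing BIC, one classical information criterion of the family discussed here:

```python
import numpy as np

def ar_bic(x, max_order=10):
    """Select the order of an autoregression by minimizing a penalized
    likelihood criterion (here BIC: n*log(sigma^2) + k*log(n))."""
    n = len(x)
    best_k, best_score = 0, np.inf
    for k in range(1, max_order + 1):
        # Lagged design matrix: column j holds x_{t-j-1} for t = k..n-1.
        X = np.column_stack([x[k - j - 1:n - j - 1] for j in range(k)])
        y = x[k:]
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        sigma2 = np.mean((y - X @ coef) ** 2)   # residual variance
        score = len(y) * np.log(sigma2) + k * np.log(len(y))
        if score < best_score:
            best_k, best_score = k, score
    return best_k

rng = np.random.default_rng(0)
x = np.zeros(500)
for t in range(2, 500):              # simulate an AR(2) process
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.standard_normal()
print(ar_bic(x))                     # typically 2, up to sampling noise
```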
Giraldi, Loïc. "Contributions aux méthodes de calcul basées sur l'approximation de tenseurs et applications en mécanique numérique." PhD thesis, École Centrale de Nantes - ECN, 2012. http://tel.archives-ouvertes.fr/tel-00861986.
Asse, Abdallah. "Aide au diagnostic industriel par des méthodes basées sur la théorie des sous-ensembles flous." Valenciennes, 1985. https://ged.uphf.fr/nuxeo/site/esupversions/c72e776b-0420-445e-bc8f-063e67804dad.
Legendre, Sylvie. "Méthodes d'inspection non destructive par ultrasons de réservoirs d'hydrogène basées sur la transformée en ondelettes." Thèse, Université du Québec à Trois-Rivières, 2000. http://depot-e.uqtr.ca/6648/1/000671506.pdf.
Nguyen, Van Quang. "Méthodes d'éclatement basées sur les distances de Bregman pour les inclusions monotones composites et l'optimisation." Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066183/document.
The goal of this thesis is to design splitting methods based on Bregman distances for solving composite monotone inclusions in reflexive real Banach spaces. These results allow us to extend many techniques that were so far limited to Hilbert spaces. Furthermore, even when restricted to Euclidean spaces, they provide new splitting methods that may be numerically more advantageous than the classical methods based on the Euclidean distance. Numerical applications in image processing are proposed.
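To illustrate why replacing the Euclidean distance can pay off numerically, here is a minimal sketch (not taken from the thesis) of a gradient step performed with the Bregman distance induced by negative entropy; on the probability simplex this yields a closed-form multiplicative update, whereas the Euclidean distance would require a projection:

```python
import numpy as np

def entropy_mirror_step(x, grad, step):
    """One step with the Bregman distance D_phi induced by negative
    entropy (Kullback-Leibler) on the simplex. The update
    x+ = argmin_u <step*grad, u> + D_phi(u, x) has the closed form
    x+ = x * exp(-step*grad) / sum(x * exp(-step*grad))."""
    y = x * np.exp(-step * grad)
    return y / y.sum()

# Minimize f(x) = <c, x> over the simplex; the minimizer puts all
# mass on the smallest coordinate of c (index 1 here).
c = np.array([0.9, 0.2, 0.5])
x = np.full(3, 1.0 / 3.0)
for _ in range(200):
    x = entropy_mirror_step(x, c, step=0.5)
print(x.round(3))   # mass concentrates on index 1
```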
Sy, Kombossé. "Étude et développement de méthodes de caractérisation de défauts basées sur les reconstructions ultrasonores TFM." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS040/document.
In non-destructive testing, with a view to improving defect images and also simplifying their interpretation by non-specialist operators, new ultrasonic imaging methods such as TFM (Total Focusing Method) imaging have emerged in recent years as an alternative to conventional imaging methods. They offer realistic images of defects and, from a single acquisition, yield a large number of images, each of which can carry different and complementary information on the characteristics of the same defect. When properly selected, these images are easier to analyze, present less risk of misinterpretation, and open the way to faster defect characterization by less specialized operators. However, for industrial use, it remains necessary to strengthen the robustness and ease of implementation of these imaging techniques. The work carried out during this thesis produced new tools that improve the characterization of defects by TFM imaging in terms of position, orientation, and sizing.
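For readers unfamiliar with TFM, its core is a delay-and-sum over all transmitter-receiver pairs of a full matrix capture (FMC): every pixel is focused synthetically by summing each signal at the transmitter-pixel-receiver time of flight. The following sketch is an illustrative basic implementation, not the tools developed in the thesis, and all parameter names are hypothetical:

```python
import numpy as np

def tfm_image(fmc, elements, grid, c, fs):
    """Basic Total Focusing Method.
    fmc:      (n, n, t) full-matrix-capture signals fmc[tx, rx, sample]
    elements: (n, 2) array coordinates of the probe elements (m)
    grid:     (m, 2) coordinates of the image pixels (m)
    c:        wave speed in the part (m/s), fs: sampling frequency (Hz)."""
    n = len(elements)
    image = np.zeros(len(grid))
    # Distances from each element to each pixel: shape (n, m).
    dist = np.linalg.norm(elements[:, None, :] - grid[None, :, :], axis=2)
    for tx in range(n):
        for rx in range(n):
            tof = (dist[tx] + dist[rx]) / c          # time of flight (s)
            idx = np.clip((tof * fs).astype(int), 0, fmc.shape[2] - 1)
            image += fmc[tx, rx, idx]                # delay-and-sum
    return np.abs(image)
```

In practice the envelope of the analytic signal would be summed rather than the raw traces, but the focusing principle is the same.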
Horstmann, Tobias. "Méthodes numériques hybrides basées sur une approche Boltzmann sur réseau en vue de l'application aux maillages non-uniformes." Thesis, Lyon, 2018. http://www.theses.fr/2018LYSEC027/document.
Despite the inherent efficiency and low-dissipation behaviour of the standard lattice Boltzmann method (LBM), which relies on a two-step stream-and-collide algorithm, a major drawback of this approach is its restriction to uniform Cartesian grids. The adaptation of the discretization step to varying fluid-dynamic scales is usually achieved by multi-scale lattice Boltzmann schemes, in which the computational domain is decomposed into multiple uniform subdomains with different spatial resolutions. For the sake of connectivity, the resolution factor of adjacent subdomains has to be a multiple of two, introducing an abrupt change of the space-time discretization step at the interface that is prone to trigger instabilities and to generate spurious noise sources that contaminate the expected physical pressure signal. In the present PhD thesis, we first elucidate the subject of mesh refinement in the standard lattice Boltzmann method and point out challenges and potential sources of error. Subsequently, we propose a novel hybrid lattice Boltzmann method (HLBM) that combines the stream-and-collide algorithm with an Eulerian flux-balance algorithm obtained from a finite-volume discretization of the discrete-velocity Boltzmann equations. The interest of a hybrid lattice Boltzmann method lies in pairing efficiency and low numerical dissipation with increased geometrical flexibility: the HLBM allows for non-uniform grids. On 2D periodic test cases, it is shown that such an approach constitutes a valuable alternative to multi-scale lattice Boltzmann schemes by allowing local H-type mesh refinement. The HLBM properly resolves aerodynamics and aeroacoustics in the interface regions. A further part of this work examines the coupling of the stream-and-collide algorithm with a finite-volume formulation of the isothermal Navier-Stokes equations. Such an approach has the advantages that the number of equations in the finite-volume solver is reduced and that stability is increased owing to a more favorable CFL condition. A major difference from the pairing of two kinetic schemes is that the coupling takes place in moment space. Here, a novel technique is presented to inject the macroscopic solution of the Navier-Stokes solver into the stream-and-collide algorithm using a central-moment collision. First results on 2D test cases show that such an algorithm is stable and feasible. Numerical results are compared with those of the previous HLBM.
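The stream-and-collide algorithm referred to above is compact enough to sketch. The code below is a minimal D2Q9 BGK illustration on a periodic uniform grid (not the thesis implementation) showing the two steps of one LBM time step: relaxation towards the local equilibrium, then streaming of the distributions along the lattice velocities.

```python
import numpy as np

# D2Q9 lattice: discrete velocities and their weights.
C = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
W = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, u):
    """Second-order truncated Maxwell-Boltzmann equilibrium."""
    cu = np.einsum('qd,xyd->qxy', C, u)          # c_q . u
    uu = np.einsum('xyd,xyd->xy', u, u)          # |u|^2
    return rho * W[:, None, None] * (1 + 3*cu + 4.5*cu**2 - 1.5*uu)

def stream_and_collide(f, tau):
    """One time step on a periodic uniform grid: BGK collision towards
    equilibrium, then streaming as a periodic shift of each population."""
    rho = f.sum(axis=0)                                   # density
    u = np.einsum('qd,qxy->xyd', C, f) / rho[..., None]   # velocity
    f += -(f - equilibrium(rho, u)) / tau                 # collide (BGK)
    for q in range(9):                                    # stream
        f[q] = np.roll(f[q], shift=tuple(C[q]), axis=(0, 1))
    return f

# Initialize a 64x64 periodic domain at rest and advance a few steps.
f = equilibrium(np.ones((64, 64)), np.zeros((64, 64, 2)))
for _ in range(10):
    f = stream_and_collide(f, tau=0.8)
```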
Charles, Christophe. "SearchXQ : une méthode d'aide à la navigation fondée sur Ω-means, algorithme de classification non-supervisée. Application sur un corpus juridique français." Paris, ENMP, 2004. http://www.theses.fr/2004ENMP1281.
Fauquette, Séverine. "Le climat du pliocène : nouvelle méthode de quantification basée sur les données polliniques et application à la Méditerranée occidentale." Aix-Marseille 3, 1998. http://www.theses.fr/1998AIX30048.