Dissertations / Theses on the topic 'Reconstruction des données'
Bougleux, Sébastien. "Reconstruction, Détection et Régularisation de Données Discrètes." PhD thesis, Université de Caen, 2007. http://tel.archives-ouvertes.fr/tel-00203445.
Giraudot, Simon. "Reconstruction robuste de formes à partir de données imparfaites." Thesis, Nice, 2015. http://www.theses.fr/2015NICE4024/document.
Over the last two decades, many reliable algorithms for surface reconstruction from point clouds have been developed. However, they often require additional attributes such as normals or visibility, and robustness to defect-laden data is often achieved through strong assumptions and remains a scientific challenge. In this thesis we focus on defect-laden, unoriented point clouds and contribute two new reconstruction methods designed for two specific classes of output surfaces. The first method is noise-adaptive and specialized to smooth, closed shapes. It takes as input a point cloud with variable noise and outliers, and comprises three main steps. First, we compute a novel noise-adaptive distance function to the inferred shape, which relies on the assumption that this shape is a smooth submanifold of known dimension. Second, we estimate the sign and confidence of the function at a set of seed points, through minimizing a quadratic energy expressed on the edges of a uniform random graph. Third, we compute a signed implicit function through a random walker approach with soft constraints chosen as the most confident seed points. The second method generates piecewise-planar surfaces, possibly non-manifold, represented by low-complexity triangle surface meshes. Through multiscale region growing of Hausdorff-error-bounded convex planar primitives, we infer both shape and connectivity of the input and generate a simplicial complex that efficiently captures large flat regions as well as small features and boundaries. Imposing convexity of primitives is shown to be crucial to both the robustness and efficacy of our approach.
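The noise-adaptive distance function described above can be caricatured with a much simpler robust distance-to-shape estimate. The sketch below (a hypothetical Python illustration, not the thesis code; the function name and the choice k=12 are ours) averages the distance to the k nearest input points, which stays small near the sampled shape while damping the influence of isolated outliers.

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_distance(points, queries, k=12):
    """Average distance from each query to its k nearest input points."""
    tree = cKDTree(points)
    dists, _ = tree.query(queries, k=k)
    return dists.mean(axis=1)

# Toy usage: a noisy circle plus outliers; the distance is ~0 on the
# circle and grows smoothly towards its centre.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 500)
circle = np.c_[np.cos(theta), np.sin(theta)] + 0.02 * rng.normal(size=(500, 2))
cloud = np.vstack([circle, rng.uniform(-2, 2, size=(30, 2))])
print(knn_distance(cloud, np.array([[1.0, 0.0], [0.0, 0.0]])))
```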
Buslig, Leticia. "Méthodes stochastiques de modélisation de données : application à la reconstruction de données non régulières." Thesis, Aix-Marseille, 2014. http://www.theses.fr/2014AIXM4734/document.
Brishoual, Morgan. "Reconstruction de données : Application à la dosimétrie des radiotéléphones." PhD thesis, INSA de Rennes, 2001. http://tel.archives-ouvertes.fr/tel-00001750.
Studies have shown that a large part of the absorbed power is concentrated in the tissues close to the mobile phone antenna. These tissues, which include the ear region, are the most exposed to electromagnetic fields. At the skin-handset interface, the electric field values are therefore high, and they decrease rapidly as one moves away from the antenna and towards the centre of the head. Since handset certification is concerned with the maximum absorbed power, characterizing the region close to the antenna is essential.
Because of the electric-field probe used, the measurement point lies a few millimetres from the probe tip; this offset, together with the fragility of the probe, prevents measurements close to the walls of the phantom used to model either a user's head or a rat. An extrapolation scheme must therefore be defined. Moreover, to speed up the acquisition process, the measurements are taken on a coarse sampling grid, and an interpolation scheme must also be defined to obtain these measurements on a finer grid.
This thesis presents wavelets, with the aim of using them in the extrapolation and interpolation schemes. A wavelet-based method, usually employed in image processing, was developed for the reconstruction of one-dimensional signals and extended to three-dimensional data. The method was also extended to interpolating and non-interpolating basis functions. It turned out that better reconstruction accuracy is obtained when the basis function is an interpolating triangle-type basis function. In each case, the method was compared with other techniques for reconstructing sampled data, such as polynomials, splines, radial basis functions, neural networks and kriging.
For the reconstruction of data arising from handset dosimetry, the volumes to reconstruct are large and the techniques named above have a non-negligible computation time and/or memory cost. A new method to interpolate and extrapolate the measurements within a volume was therefore developed. This new reconstruction method, named the 2D_1D_2D technique, which can be used in many other applications, is very fast and has a low memory footprint. It serves as a reference in the CENELEC (Comité Européen de Normalisation ELECtrotechnique) ENV50361 standard. Part of the work in this thesis gave rise to a Matlab software package.
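As a side note, the interpolating triangle basis mentioned above has a particularly simple refinement rule; the sketch below (our own minimal illustration, not the thesis software) doubles the sampling density of a 1D signal by midpoint insertion, which is exactly subdivision with the linear B-spline (triangle) scaling function.

```python
import numpy as np

def refine_linear(samples, levels=1):
    """Dyadic refinement of 1D samples with the triangle scaling function."""
    s = np.asarray(samples, dtype=float)
    for _ in range(levels):
        mid = 0.5 * (s[:-1] + s[1:])          # new samples at midpoints
        out = np.empty(2 * len(s) - 1)
        out[0::2], out[1::2] = s, mid
        s = out
    return s

coarse = np.sin(np.linspace(0, np.pi, 9))      # coarse acquisition grid
fine = refine_linear(coarse, levels=2)         # 4x denser grid
print(len(coarse), "->", len(fine))            # 9 -> 33
```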
Bernardes, Vieira Marcelo. "Reconstruction de surfaces à partir de données tridimensionnelles éparses." Cergy-Pontoise, 2002. http://biblioweb.u-cergy.fr/theses/02CERG0145.pdf.
This work approaches the problem of inferring the spatial organization of sparse data for surface reconstruction. We propose a variant of the voting method developed by Gideon Guy and extended by Mi-Suen Lee. Tensors representing orientations and spatial influence fields are the main mathematical instruments. These methods have been associated with perceptual grouping problems. However, we observe that their accumulation processes infer sparse data organization. From this point of view, we propose a new strategy for orientation inference focused on surfaces. In contrast with the original ideas, we argue that a dedicated method may enhance this inference. The mathematical instruments are adapted to estimate normal vectors: the orientation tensor represents surfaces and influence fields encode elliptical trajectories. We also propose a new process for the initial orientation inference which effectively evaluates the sparse data organization. The presentation and critique of Guy's and Lee's works and the methodological development of this thesis are conducted through epistemological studies. Objects of different shapes are used in a qualitative evaluation of the method. Quantitative comparisons were prepared with error estimation from several reconstructions. Results show that the proposed method is more robust to noise and variable data density. A method to segment points structured on surfaces is also proposed. Comparative evaluations show a better performance of the proposed method in this application.
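For intuition only, the orientation-inference step can be approximated by plain local PCA: accumulate the covariance of neighbour offsets and take the eigenvector of smallest eigenvalue as the normal. This is a simplified stand-in, not the Guy/Lee tensor-voting scheme or the thesis variant; names and neighbourhood size are our assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def pca_normals(points, k=16):
    """Normal per point from the smallest eigenvector of the local covariance."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    normals = np.empty_like(points)
    for i, nbrs in enumerate(idx):
        q = points[nbrs] - points[nbrs].mean(axis=0)
        w, v = np.linalg.eigh(q.T @ q)     # eigenvectors of the local "tensor"
        normals[i] = v[:, 0]               # weakest direction = normal
    return normals

rng = np.random.default_rng(1)
xy = rng.uniform(-1, 1, (400, 2))
plane = np.c_[xy, 0.01 * rng.normal(size=400)]   # noisy z = 0 plane
print(pca_normals(plane)[:3])                    # normals close to (0, 0, ±1)
```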
Coutand, Frédérique. "Reconstruction d'images en tomographie scintigraphique cardiaque par fusion de données." PhD thesis, Université Paris Sud - Paris XI, 1996. http://pastel.archives-ouvertes.fr/pastel-00730942.
Saguin-Sprynski, Nathalie. "Reconstruction de courbes et surfaces à partir de données tangentielles." Grenoble 1, 2007. http://www.theses.fr/2007GRE10108.
Micro-sensors developed at the LETI (such as micro-accelerometers or micro-magnetometers) are able to give some information about their orientation. So if we put an array of sensors on an object, they will give data about the local tangency of the object. This work consists in reconstructing the shape of the object from this sensor information. The shape can be a curve lying in a plane, or a space curve, and then a surface. We then propose the motion capture of a shape in deformation, i.e., we equip a curve or a surface with sensors, make movements and deformations with it, and reconstruct it at the same time via the data from the sensors. There are many applications (medical, aeronautics, multimedia, hobbyist do-it-yourself), and hardware prototypes are tested alongside to validate these algorithms.
Blondel-Couprie, Elise. "Reconstruction et prévision déterministe de houle à partir de données mesurées." PhD thesis, Ecole centrale de nantes - ECN, 2009. http://tel.archives-ouvertes.fr/tel-00449343.
Blondel-Couprie, Élise. "Reconstruction et prévision déterministe de houle à partir de données mesurées." Nantes, 2009. http://www.theses.fr/2009NANT2074.
Wave prediction is a crucial task for offshore operations, for obvious safety reasons regarding personnel and technical equipment or structures. The prediction tools available until now are based on a stochastic description of the sea state and are not able to predict deterministically the evolution of wave fields. Only averaged statistical data representative of the sea state can be obtained from the known spectral quantities. To face the growing need for accurate short-term predictions, a deterministic prediction model has been developed in order to improve the efficiency of sea operations which require a precise knowledge of the sea surface over a specific region of interest. After a theoretical study to determine the available time-space predictable domain, depending on the current sea state and on the measurement conditions, we created two data assimilation processes to combine the measured observations with a physics-based model: a second-order model for fields of low to moderate steepness, or a high-order model for steeper seas, namely the High-Order Spectral (HOS) numerical method. The extended second-order model and the third-order model using the HOS have been validated for the prediction of 2D synthetic and basin wave fields: the averaged prediction errors we obtain are more than two times smaller than the errors returned by a linear approach. The improvement is all the more significant as the steepness and the order of the prediction model increase.
Chen, Guangshuo. "Human Habits Investigation : from Mobility Reconstruction to Mobile Traffic Prediction." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLX026/document.
The understanding of human behaviors is a central question in multi-disciplinary research and has contributed to a wide range of applications. The ability to foresee human activities has essential implications in many aspects of cellular networks. In particular, the high availability of mobility prediction can enable various application scenarios such as location-based recommendation, home automation, and location-related data dissemination; a better understanding of mobile data traffic demand can help to improve the design of solutions for network load balancing, aiming at improving the quality of Internet-based mobile services. Although a large and growing body of literature has investigated the topic of predicting human mobility, there has been little work on anticipating mobile data traffic in cellular networks, especially from a spatiotemporal view of individuals. For understanding human mobility, mobile phone datasets, consisting of Charging Data Records (CDRs), are a practical choice of human footprints because of the large-scale user populations and the vast diversity of individual movement patterns. The accuracy of mobility information granted by CDRs depends on the network infrastructure and the frequency of user communication events. As cellular network deployment is highly irregular and interaction frequencies are typically low, CDR data is often characterized by spatial and temporal sparsity, which, in turn, can bias mobility analyses based on such data and cause the loss of whereabouts in individual trajectories. In this thesis, we present novel solutions for the reconstruction of individual trajectories and the prediction of individual mobile data traffic. Our contributions address the problems of (1) overcoming the incompleteness of mobility information in mobile phone datasets and (2) predicting future mobile data traffic demand to support network management applications. First, we focus on the flaws of mobility information in mobile phone datasets. We report on an in-depth analysis of their effect on the measurement of individual mobility features and the completeness of individual trajectories. In particular, (1) we provide a confirmation of previous findings regarding the biases in mobility measurements caused by the temporal sparsity of CDRs; (2) we evaluate the geographical shift caused by the mapping of user locations to cell towers and reveal the bias caused by the spatial sparsity of CDRs; (3) we provide an empirical estimation of the data completeness of individual CDR-based trajectories; (4) we propose novel CDR completion solutions to reconstruct incomplete trajectories. Our solutions leverage the repetitive nature of human movement patterns and state-of-the-art data inference techniques, and outperform previous approaches, as shown by data-driven simulations. Second, we address the prediction of mobile data traffic demands generated by individual mobile network subscribers. Building on trajectories completed by our solutions and data consumption histories extracted from a large-scale mobile phone dataset, (1) we investigate the limits of predictability by measuring the maximum predictability that any algorithm has the potential to achieve, and (2) we propose practical mobile data traffic prediction approaches that utilize the findings of the theoretical predictability analysis.
Our theoretical analysis shows that it is possible to anticipate individual demand with a typical accuracy of 75% despite the heterogeneity of users, and with an improved accuracy of 80% using joint prediction with mobility information. Our practical approaches, based on machine learning techniques, achieve a typical accuracy of 65% and gain a further 1%-5% when individual whereabouts are considered. In summary, the contributions mentioned above provide a step further towards supporting the use of mobile phone datasets and the management of network operators and their subscribers.
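The "limits of predictability" analysis mentioned in (1) is classically done by estimating the entropy of a user's symbol sequence and solving Fano's inequality for the maximum predictability. The sketch below is our hedged illustration of that computation, using a first-order entropy estimate instead of the stronger entropy-rate estimators applied to real traffic histories.

```python
import numpy as np
from scipy.optimize import brentq

def fano_pi_max(entropy, n_symbols):
    """Solve S = H(pi) + (1 - pi) * log2(N - 1) for pi in [1/N, 1)."""
    def f(pi):
        h = -pi * np.log2(pi) - (1 - pi) * np.log2(1 - pi)
        return h + (1 - pi) * np.log2(n_symbols - 1) - entropy
    return brentq(f, 1.0 / n_symbols, 1 - 1e-9)

# First-order entropy of a toy demand sequence (3 traffic levels).
seq = np.array([0, 0, 1, 0, 2, 0, 0, 1, 0, 0])
p = np.bincount(seq) / len(seq)
S = -(p[p > 0] * np.log2(p[p > 0])).sum()
print(fano_pi_max(S, n_symbols=len(p)))   # upper bound on prediction accuracy
```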
Agier, Marie. "De l'analyse de données d'expression à la reconstruction de réseau de gènes." PhD thesis, Université Blaise Pascal - Clermont-Ferrand II, 2006. http://tel.archives-ouvertes.fr/tel-00717382.
Labatut, Patrick. "Partition de complexes guidés par les données pour la reconstruction de surface." PhD thesis, Université Paris-Diderot - Paris VII, 2009. http://tel.archives-ouvertes.fr/tel-00844020.
Bauchet, Jean-Philippe. "Structures de données cinétiques pour la modélisation géométrique d’environnements urbains." Thesis, Université Côte d'Azur (ComUE), 2019. http://www.theses.fr/2019AZUR4091.
The geometric modeling of urban objects from physical measurements, and their representation in an accurate, compact and efficient way, is an enduring problem in computer vision and computer graphics. In the literature, the geometric data structures at the interface between physical measurements and output models typically suffer from scalability issues, and fail to partition 2D and 3D bounding domains of complex scenes. In this thesis, we propose a new family of geometric data structures that rely on kinetic frameworks. More precisely, we compute partitions of bounding domains by detecting geometric shapes such as line-segments and planes, and extending these shapes until they collide with each other. This process results in lightweight partitions containing a low number of polygonal cells. We propose two geometric modeling pipelines, one for the vectorization of regions of interest in images, another for the reconstruction of concise polygonal meshes from point clouds. Both approaches exploit kinetic data structures to decompose efficiently either a 2D image domain or a 3D bounding domain into cells. Then, we extract objects from the partitions by optimizing a binary labelling of cells. Conducted on a wide range of data in terms of contents, complexity, sizes and acquisition characteristics, our experiments demonstrate the scalability and the versatility of our methods. We show the applicative potential of our method by applying our kinetic formulation to the problem of urban modeling from remote sensing data.
Soulez, Ferréol. "Une approche problèmes inverses pour la reconstruction de données multi-dimensionnelles par méthodes d'optimisation." PhD thesis, Université Jean Monnet - Saint-Etienne, 2008. http://tel.archives-ouvertes.fr/tel-00379735.
The "inverse problems" approach consists in seeking the causes from the effects, that is, estimating the parameters describing a system from its observation. To do so, one uses a physical model describing the cause-and-effect relationships between the parameters and the observations. The term "inverse" thus refers to the inversion of this forward model. However, while the same causes generally produce the same effects, the same effect may have different causes, and it is often necessary to introduce priors to restrict the ambiguities of the inversion. In this work, the problem is solved by estimating, with optimization methods, the parameters that minimize a cost function combining a term derived from the data-formation model and a prior term.
We use this approach to address the blind deconvolution of heterogeneous multidimensional data, that is, data whose different dimensions have different meanings and units. To this end, we established a general framework with a separable prior term, which we successfully adapted to different applications: the deconvolution of multi-spectral data in astronomy, of colour images in Bayer imaging, and the blind deconvolution of biomedical video sequences (coronarography, conventional and confocal microscopy).
The same approach was used in digital holography for particle image velocimetry (DH-PIV). A hologram of spherical micro-particles is composed of diffraction patterns carrying information about the 3D positions and the radii of these particles. Using a physical model of hologram formation, the "inverse problems" approach allowed us to bypass the problems linked to hologram restitution (border effects, twin images, ...) and to estimate the 3D positions and the radii of the particles with an accuracy improved by at least a factor of 5 with respect to classical methods based on restitution. Moreover, with this method we could detect particles outside the sensor field of view, thus enlarging the volume of interest by a factor of 16.
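As a toy instance of the cost-function formulation translated above, one can recover a signal x from y = h * x + noise by minimizing a data-fidelity term plus a quadratic smoothness prior. With a circular-convolution model and a quadratic prior the minimizer is even closed-form in the Fourier domain, as below; the thesis relies on iterative optimization precisely because its priors and models are not this simple. The sketch and its parameter choices are ours.

```python
import numpy as np

def deconvolve(y, h, lam=0.1):
    """Closed-form minimizer of ||h*x - y||^2 + lam*||grad x||^2 (circular)."""
    H, Y = np.fft.rfft(h), np.fft.rfft(y)
    D = (2 * np.pi * np.fft.rfftfreq(len(y))) ** 2   # quadratic smoothness prior
    X = np.conj(H) * Y / (np.abs(H) ** 2 + lam * D + 1e-12)
    return np.fft.irfft(X, len(y))

x = np.zeros(128); x[40:60] = 1.0                    # ground-truth signal
g = np.exp(-0.5 * (np.arange(-8, 9) / 3.0) ** 2); g /= g.sum()
h = np.zeros(128); h[np.arange(-8, 9)] = g           # centred circular kernel
rng = np.random.default_rng(2)
y = np.fft.irfft(np.fft.rfft(h) * np.fft.rfft(x)) + 0.01 * rng.normal(size=128)
print(np.abs(deconvolve(y, h) - x).mean())           # small reconstruction error
```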
Pericard, Pierre. "Algorithmes pour la reconstruction de séquences de marqueurs conservés dans des données de métagénomique." Thesis, Lille 1, 2017. http://www.theses.fr/2017LIL10084/document.
Recent advances in DNA sequencing now allow studying the genetic material of microbial communities extracted from natural environmental samples. This new research field, called metagenomics, is driving innovation in many areas such as human health, agriculture, and ecology. To analyse such samples, new bioinformatics methods are still needed to ascertain the taxonomic composition of the studied community, because accurate organism identification is a necessary step in understanding even the simplest ecosystems. However, current sequencing technologies generate short and noisy DNA fragments which only partially cover complete gene sequences, giving rise to a major challenge for high-resolution taxonomic analysis. We developed MATAM, a new bioinformatics method dedicated to the fast reconstruction of low-error complete sequences of conserved phylogenetic markers, starting from raw sequencing data. This method is a multi-step process that builds and analyses a read overlap graph. We applied MATAM to the reconstruction of the small subunit ribosomal RNA in simulated, synthetic and genuine metagenomes. We obtained high quality results, improving the state of the art.
Grenet, Pierre. "Système de reconstruction cinématique corps entier : fusion et exploitation de données issues de MEMS." Poitiers, 2011. http://theses.univ-poitiers.fr/26762/2011-Grenet-Pierre-These.pdf.
The democratization of MEMS has enabled the development of attitude estimation units: groups of sensors that measure their orientation relative to the Earth's frame. Several types of sensors are used in attitude estimation: accelerometers, magnetometers and gyroscopes. Only the combination accelerometers - magnetometers - gyroscopes allows a satisfying estimation of the orientation kinematics without any a priori knowledge of the movement, that is to say, the estimation of the orientation in the presence of an unknown acceleration. The aim of the thesis is, within the context of whole-body motion capture, to add a priori knowledge in order to reduce the number of gyroscopes used, or even to eliminate them completely. This a priori knowledge is the fact that the sensors are attached to a skeleton, so that their relative motions can only be combinations of rotations. To test the efficiency of this method, we first apply it to a simple pendulum with one degree of freedom, then a pendulum with three degrees of freedom, then to groups of segments (shoulder - arm - forearm, for example) and finally to a whole-body system. This thesis develops the theory and the results obtained for different resolution methodologies.
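A minimal sketch of the gyroscope-free setting exploited here: with an accelerometer and a magnetometer alone, a TRIAD-style construction yields the orientation, but only under the a priori assumption that the sensor is not accelerating — which is exactly why extra knowledge (here, the skeleton constraint) is needed in the general case. The code is an illustrative assumption of ours, not the thesis implementation.

```python
import numpy as np

def triad(acc, mag):
    """Rotation from sensor frame to a gravity/north-aligned frame (rows)."""
    down = -acc / np.linalg.norm(acc)                 # gravity direction
    east = np.cross(down, mag)
    east /= np.linalg.norm(east)
    north = np.cross(east, down)
    return np.vstack([north, east, down])

acc = np.array([0.0, 0.0, 9.81])                      # sensor at rest, z up
mag = np.array([0.2, 0.0, -0.4])                      # field with a north component
print(triad(acc, mag))                                # orthonormal attitude matrix
```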
Prades, Gilles. "Lissage et segmentation de données 3D structurées pour la reconstruction de surfaces de style." Valenciennes, 1999. https://ged.uphf.fr/nuxeo/site/esupversions/1ba7c2ab-239e-4d40-96df-a6ad23bc2dba.
Laurent, François. "Détection de la fatigue mentale à partir de données électrophysiologiques." PhD thesis, Université Pierre et Marie Curie - Paris VI, 2010. http://tel.archives-ouvertes.fr/tel-00634776.
Yureidini, Ahmed. "Reconstruction robuste des vaisseaux sanguins pour les simulations médicales interactives à partir de données patients." PhD thesis, Université des Sciences et Technologie de Lille - Lille I, 2014. http://tel.archives-ouvertes.fr/tel-01010973.
Gallon, Jonathan. "Étude et optimisation d'un algorithme de reconstruction d'horizons sismiques 3D et extension aux données nD." Pau, 2011. http://www.theses.fr/2011PAUU3009.
In the oil industry, 3D seismic data interpretation is an essential step in building a 3D geological model of an exploration area. This model is constructed with horizons (surfaces representing the main rock interfaces) and faults (surfaces representing rock fractures). In order to facilitate interpretation, many automatic tools have been developed to build these surfaces, among which the 3D horizon propagation algorithm developed by Keskes et al.: this algorithm is considered today as the most powerful in terms of surface quality. However, with the continuously increasing size of seismic data, one can observe a slowdown in computation time. This thesis concerns the optimization of this seismic horizon reconstruction algorithm. To reduce the cost of the random accesses required by the propagator, which are very time consuming, we propose an intelligent bricked cache system fully adapted to the algorithm. We then modify the original version of the algorithm with a new propagation method designed to minimize memory usage: recursive propagation. We then extend the propagation algorithm to nD data (multi-offset acquisitions, multi-azimuth, ...) increasingly used for seismic interpretation. Two extensions are proposed: a "free" propagation and a "constrained" propagation. As the new dimensions increase the seismic data size, we adapt the optimal cache system to each propagation extension in order to improve the performance of both propagations. Finally, we present and comment on results of nD propagation obtained on real data.
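The bricked-cache idea can be sketched as follows: the volume is cut into small bricks and an LRU cache keeps the most recently touched bricks in memory, so that the spatially coherent (but random-looking) accesses of a propagator mostly hit cached bricks. Brick size, capacity and the LRU policy below are illustrative assumptions of ours, not the thesis implementation.

```python
from collections import OrderedDict
import numpy as np

class BrickCache:
    """LRU cache of fixed-size bricks over a large 3D volume."""
    def __init__(self, volume, brick=32, capacity=256):
        self.vol, self.brick, self.capacity = volume, brick, capacity
        self.cache = OrderedDict()                  # (bi, bj, bk) -> brick array

    def _load(self, key):
        bi, bj, bk, b = *key, self.brick
        return self.vol[bi*b:(bi+1)*b, bj*b:(bj+1)*b, bk*b:(bk+1)*b].copy()

    def __getitem__(self, idx):
        i, j, k = idx
        key = (i // self.brick, j // self.brick, k // self.brick)
        if key not in self.cache:
            if len(self.cache) >= self.capacity:
                self.cache.popitem(last=False)      # evict least recently used
            self.cache[key] = self._load(key)
        self.cache.move_to_end(key)
        b = self.brick
        return self.cache[key][i % b, j % b, k % b]

vol = np.random.default_rng(3).normal(size=(128, 128, 128)).astype(np.float32)
cached = BrickCache(vol)
print(cached[5, 70, 100], vol[5, 70, 100])          # identical values
```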
Ould, Dellahy Maloum Isselmou. "Reconstruction de surfaces gauches à partir de données non structurées : paramétrisation par des transformations conformes." Lyon 1, 1995. http://www.theses.fr/1995LYO10233.
Demantke, Jérôme. "Reconstruction de modèles 3D photoréalistes de façades à partir de données image et laser terrestre." Thesis, Paris Est, 2014. http://www.theses.fr/2014PEST1015/document.
One wishes to detect and model building façades from data acquired by the IGN mobile scanning vehicle, the Stereopolis. The question is to find a geometric representation of façades appropriate to the data (lidar/laser signal and optical images). The method should be automatic and enable the modeling of a large number of façades to help the production of digital city models. Technical obstacles come from the mobile acquisition in uncontrolled urban environments (vehicle georeferencing, variable lidar point density, ...); they come from the lidar signal, produced by a relatively new technology for which processing practices are not yet consensual: does one operate in sensor geometry or not? Finally, the amount of data raises the problem of scaling up. To analyze the geometry of lidar 3D point clouds, we proposed attributes describing, for each point, the shape of the local surroundings (linear-1D, planar-2D or volumetric-3D). The main façade planes are automatically extracted from lidar data through a streamed detection algorithm for vertical rectangles. We developed two models that are initialized by these rectangles: an irregular grid in which each cell, parallel to the main plane, can move forward or backward; and a deformable grid which is "pushed by the laser beams toward the laser points". Finally, we showed how the deformable grid can be made consistent with the optical images by aligning the geometric discontinuities of the grid with the radiometric discontinuities of the images.
Chandramouli, Pranav. "Turbulent complex flows reconstruction via data assimilation in large eddy models." Thesis, Rennes 1, 2018. http://www.theses.fr/2018REN1S035/document.
Data assimilation as a tool for fluid mechanics has grown exponentially over the last few decades. The ability to combine accurate but partial measurements with a complete dynamical model is invaluable and has numerous applications in fields ranging from aerodynamics and geophysics to internal ventilation. However, its utility remains limited due to the restrictive requirements for performing data assimilation in the form of computing power, memory, and prior information. This thesis attempts to redress various limitations of the assimilation procedure in order to facilitate its wider use in fluid mechanics. A major roadblock for data assimilation is the computational cost, which is restrictive for all but the simplest of flows. Following along the lines of Joseph Smagorinsky, turbulence modelling through large-eddy simulation is incorporated into the assimilation procedure to significantly reduce the computing power and time required. The requirement for prior volumetric information is tackled using a novel reconstruction methodology developed and assessed in this thesis. The snapshot optimisation algorithm reconstructs 3D fields from 2D cross-planar observations by exploiting directional homogeneity. The method and its variants work well with synthetic and experimental data-sets, providing accurate reconstructions. The reconstruction methodology also provides the means to estimate the background covariance matrix, which is essential for an efficient assimilation algorithm. All the ingredients are combined to successfully perform variational data assimilation of a turbulent wake flow around a cylinder at a transitional Reynolds number. The assimilation algorithm is validated with synthetic volumetric observations and assessed on 2D cross-planar observations emulating experimental data.
Mora, Benjamin. "Nouveaux algorithmes interactifs pour la visualisation de données volumiques." Toulouse 3, 2001. http://www.theses.fr/2001TOU30192.
Serres, Barthélemy. "Acquisition, visualisation et reconstruction 3D de données anatomiques issues de dissection : application aux fibres blanches cérébrales." PhD thesis, Université François Rabelais - Tours, 2013. http://tel.archives-ouvertes.fr/tel-00881437.
Full textSerres, Barthélémy. "Acquisition, visualisation et reconstruction 3D de données anatomiques issues de dissection : application aux fibres blanches cérébrales." Thesis, Tours, 2013. http://www.theses.fr/2013TOUR4002/document.
In this thesis, we present a system to keep track of a destructive process such as a medical specimen dissection, from data acquisition to interactive and immersive visualization, in order to build ground-truth models. Acquisition is a two-step process, first involving a 3D laser scanner to get a 3D surface, and then a high-resolution camera for capturing the texture. This acquisition process is repeated at each step of the dissection, depending on the expected accuracy and the specific objects to be studied. Thanks to fiducial markers, surfaces are registered to each other. Experts can then explore the data using interaction hardware in an immersive 3D visualization. An interactive labeling tool is provided to the anatomist in order to identify regions of interest on each acquired surface. 3D objects can then be reconstructed from the selected surfaces. We aim to produce ground truths which, for instance, can be used to validate data acquired with MRI. The system is applied to the specific case of white fiber reconstruction in the human brain.
Gomari-Buanani, Naufal. "Identification harmonique et reconstruction de fonctions dans les classes de Hardy à partir de données partielles." Lyon 1, 2000. http://www.theses.fr/2000LYO10119.
Sourimant, Gaël. "Reconstruction de scènes urbaines à l'aide de fusion de données de type GPS, SIG et Vidéo." Rennes 1, 2007. http://www.theses.fr/2007REN1S127.
This thesis presents a new scheme for 3D building reconstruction, using GPS, GIS and video datasets. The goal is to refine simple, geo-referenced 3D models of buildings extracted from a GIS database (Geographic Information System). This refinement is performed thanks to a registration between these models and the video. The GPS provides rough information about the camera location. First, the registration between the video and the 3D models using robust virtual visual servoing is presented. The aim is to find, for each image of the video, the geo-referenced pose of the camera (position and orientation), such that the rendered 3D models project exactly onto the building images in the video. Next, textures of visible buildings are extracted from the video images. A new algorithm for façade texture fusion, based on a statistical analysis of texel colors, is presented. It removes from the final textures all objects occluding the viewed building façades. Finally, a preliminary study on the extraction of geometric details of façades is presented. Knowing the pose of the camera for each image of the video, a disparity computation using either graph-cuts or optical flow is performed in texture space. The micro-structures of the viewed façades can then be recovered using these disparity maps.
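The occlusion-removal idea can be sketched with a per-texel median across the registered views: façade pixels agree across frames while transient occluders appear in only a few, so the median discards them. The thesis uses a richer statistical analysis of texel colors; the median below is a simplified stand-in of ours.

```python
import numpy as np

def fuse_textures(views):
    """views: (n_views, H, W, 3) registered facade textures -> fused (H, W, 3)."""
    return np.median(np.asarray(views), axis=0)

rng = np.random.default_rng(4)
facade = np.full((64, 64, 3), 0.7)                 # uniform "wall"
views = np.repeat(facade[None], 7, axis=0)
for v in views[:2]:                                # occluder visible in 2 of 7 views
    v[20:40, 10:25] = rng.uniform(size=(20, 15, 3))
print(np.allclose(fuse_textures(views), 0.7))      # True: occluder removed
```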
Belaoucha, Brahim. "Utilisation de l’IRM de diffusion pour la reconstruction de réseaux d’activations cérébrales à partir de données MEG/EEG." Thesis, Université Côte d'Azur (ComUE), 2017. http://www.theses.fr/2017AZUR4027/document.
Understanding how brain regions interact to perform a given task is very challenging. Electroencephalography (EEG) and magnetoencephalography (MEG) are two non-invasive functional imaging modalities used to record brain activity with high temporal resolution. Since estimating brain activity from these measurements is an ill-posed problem, we must set a prior on the sources to obtain a unique solution. It has been shown in previous studies that the structural homogeneity of brain regions reflects their functional homogeneity. One of the main goals of this work is to use this structural information to define priors that constrain the MEG/EEG source reconstruction problem more anatomically. This structural information is obtained using diffusion magnetic resonance imaging (dMRI), which is, as of today, the only non-invasive structural imaging modality that provides insight into the structural organization of white matter, which justifies its use to constrain the EEG/MEG inverse problem. In our work, dMRI information is used to reconstruct brain activation in two ways: (1) in a spatial method which uses brain parcels to constrain the sources' activity; these parcels are obtained by our whole-brain parcellation algorithm, which computes cortical regions with the most structural homogeneity with respect to a similarity measure; (2) in a spatio-temporal method that makes use of the anatomical connections computed from dMRI to constrain the sources' dynamics. These different methods are validated using synthetic and real data.
Franceschini, Lucas. "Modeling Strategies for Aerodynamic Flow Reconstruction from partial measurements." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLX092.
First, we are interested in recovering mean-flow quantities from partial or sparse information, ranging from point-wise velocity probes to wall pressure and friction. This is achieved by considering the Reynolds-Averaged Navier-Stokes (RANS) equations, completed with a model, here the Spalart-Allmaras model. This kind of modeling has been conceived for a few benchmark flow configurations and may lack generality, leading to erroneous predictions, especially when recirculation is present. We propose to modify this model with a tuning parameter such that its solution best matches the aforementioned mean-flow data. The configuration considered is a backward-facing step at Re = 28275, with actual data stemming from a DNS. Then, we turn our attention to linear mean-flow analysis and its use to predict the nonlinear unsteady fluctuation. In particular, we design a reduced-order model, composed of the mean-flow equation coupled with the resolvent modes, predicting the fluctuation for each existing frequency. The energies of those modes are used as tuning parameters for the data-assimilation procedure, which typically takes as input (very) few point-wise time-resolved measurements. This technique is applied to transitional flows such as the one around a square-section cylinder, a benchmark case for oscillator flows, and a backward-facing step, a typical noise-amplifier flow. We then consider a turbulent case corresponding to the flow around a square-section cylinder at Re = 22000, having both oscillator (periodic vortex-shedding) and noise-amplifier characteristics (represented by the Kelvin-Helmholtz structures). Classical mean-flow stability analysis is used to recover the vortex-shedding mode, and a resolvent technique, based on the equations linearized around the periodic component, is used to recover the dependency of the Kelvin-Helmholtz modes on the vortex-shedding.
Alevizos, Panagiotis. "Arrangements de rayons : application à la reconstruction de formes planes." Paris 11, 1988. http://www.theses.fr/1988PA112231.
This thesis presents research in computational geometry and in robotics concerning arrangements of rays in the plane. A ray is a semi-infinite curve which represents, for example, a light ray intended to measure a point on the boundary of an object (the ray terminus). First we show that we can define a total order on the set P of the measured points (the rays' termini) if and only if these points belong to a unique and simply connected object. Then we develop algorithms which compute in optimal time either the total order on the set P (in the case where the points of P belong to a unique object), or the k partial sub-orders (in the case where the points of P belong to k objects). The division of the points of P into k ordered subsets allows linking the adjacent points of each subset by edges and thus constructing a polygonal approximation of the boundary of each object. Moreover, we can construct in optimal time any one of the cells of the rays' arrangement. This allows, in particular, computing the k smallest regions that are guaranteed to contain the k objects. We also show how the order relation can be used to determine the exact shape of a polygonal object by means of a minimum number of probes. We thus generalize the technique of probing. Besides the results on rays' arrangements, we present a combinatorial result on the maximum number of tetrahedra in 3-dimensional triangulations.
Filliard, Clement. "Etude de la calibration et de la reconstruction des cartes du ciel avec les données Planck-HFI." Thesis, Paris 11, 2012. http://www.theses.fr/2012PA112126.
Jaillet, Fabrice. "Contribution à la reconstruction et à l'animation d'objets déformables définis à partir de données structurées en sections." Lyon 1, 1999. http://www.theses.fr/1999LYO10058.
Full textAvanthey, Loïca. "Acquisition et reconstruction de données 3D denses sous-marines en eau peu profonde par des robots d'exploration." Thesis, Paris, ENST, 2016. http://www.theses.fr/2016ENST0055/document.
Our planet is mostly covered by seas and oceans, yet our knowledge of the seabed is far more restricted than that of the land surface. In this thesis, we seek to develop a system dedicated to precise thematic mapping, to obtain on demand a dense point cloud of an underwater area by means of three-dimensional reconstruction. The complex nature of this type of system leads us to favor a multidisciplinary approach. We examine in particular the issues raised by studying small shallow-water areas at the scale of individual objects. The first problems concern the effective in situ acquisition of stereo pairs with logistics adapted to the size of the observed areas: for this, we propose an agile, affordable microsystem which is sufficiently automated to provide reproducible and comparable data. The second set of problems relates to the reliable extraction of three-dimensional information from the acquired data: we outline the algorithms we have developed to take into account the particular characteristics of the aquatic environment (such as its dynamics or its light absorption). We then discuss in detail the issues encountered in the underwater environment concerning dense matching, calibration, in situ acquisition, data registration and redundancy.
Michot, Julien. "Recherche linéaire et fusion de données par ajustement de faisceaux : application à la localisation par vision." Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2010. http://tel.archives-ouvertes.fr/tel-00626489.
Boubertakh, Rédha. "Synthèse de Fourier régularisée : Cas des données incomplètes et application à l'IRM cardiaque rapide." Paris 11, 2002. http://www.theses.fr/2002PA112269.
We address the problem of image reconstruction in magnetic resonance imaging (MRI) from fast acquisitions that sample a small number of data along non-Cartesian trajectories. These acquisitions are of great interest, specifically in dynamic cardiovascular imaging. Because of the intrinsic properties of MRI, a trade-off must be made between the reduction of the acquisition time, the preservation of the spatial resolution and the signal-to-noise ratio. To improve this trade-off, we propose a model of the acquisition process valid for any sampling trajectory in the image Fourier plane. This model takes into account the exact locations of the data and does not carry out any direct estimation of the missing data. The reconstruction is then a Fourier synthesis problem and is carried out by iterative minimization of a convex criterion composed of a data-fidelity term and a regularization term. An important part of this work consisted in optimizing the computation of the fidelity term in order to speed up the reconstruction. The regularization term incorporates prior information summarizing some characteristics of cardiovascular images, which compensates for the missing data. We chose to introduce spatial smoothness constraints to decrease the noise while preserving the discontinuities between different areas. The developed method was evaluated on numerical simulations and on real spiral acquisitions of a phantom and of the heart, and compared to a reference reconstruction. The obtained results demonstrate the validity of the approach for removing the aliasing artefacts due to data undersampling, improving the signal-to-noise ratio and preserving the spatial resolution of the images. These results show that the reduction of the acquisition time is possible without a notable degradation of the quality of the reconstructed images.
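A toy version of this regularized Fourier synthesis, simplified to a Cartesian undersampling mask (the thesis handles arbitrary non-Cartesian trajectories and a carefully optimized fidelity term): minimize the misfit on the observed Fourier samples plus a quadratic smoothness prior, by gradient descent. All parameter choices below are our assumptions.

```python
import numpy as np

def reconstruct(y, mask, lam=0.05, steps=200, lr=0.5):
    x = np.zeros(mask.shape)
    for _ in range(steps):
        resid = mask * (np.fft.fft2(x, norm="ortho") - y)      # data misfit
        grad = np.real(np.fft.ifft2(resid, norm="ortho"))
        gx = np.roll(x, -1, 0) - x                             # forward differences
        gy = np.roll(x, -1, 1) - x
        grad += lam * ((np.roll(gx, 1, 0) - gx) + (np.roll(gy, 1, 1) - gy))
        x -= lr * grad
    return x

rng = np.random.default_rng(5)
truth = np.zeros((64, 64)); truth[24:40, 24:40] = 1.0
mask = rng.random((64, 64)) < 0.3                              # keep 30% of k-space
y = mask * np.fft.fft2(truth, norm="ortho")
print(np.abs(reconstruct(y, mask) - truth).mean())
```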
Daul, Christian. "Segmentation, recalage et reconstruction 3D de données.Traitement d'images médicales et industrielles." Habilitation à diriger des recherches, Institut National Polytechnique de Lorraine - INPL, 2008. http://tel.archives-ouvertes.fr/tel-00326078.
Robinson, Cordelia. "Image data assimilation with fluid dynamics models : application to 3D flow reconstruction." Thesis, Rennes 1, 2015. http://www.theses.fr/2015REN1S128/document.
On the one hand, flow dynamics are usually described by the Navier-Stokes equations, and the literature provides a wide range of techniques to solve such equations. On the other hand, we can nowadays measure different characteristics of a flow (velocity, pressure, temperature, etc.) with non-intrusive Particle Image Velocimetry techniques. In this thesis, we focus on data assimilation techniques, which combine a dynamics model with measurements to determine a better approximation of the system. This thesis focuses on the classic variational assimilation technique (4DVar), which ensures a high accuracy of the solution by construction. We carry out a first application of the 4DVar technique to reconstruct the characteristics (height and velocity field) of a unidirectional wave at its free surface. The fluid evolution is simulated by the shallow-water equations and solved numerically. We use a simple experimental setup involving a depth sensor (Kinect sensor) to extract the free-surface height. We compare the results of the 4DVar reconstruction with different versions of the hybrid data assimilation technique 4DEnVar. Finally, we apply the 4DVar technique to reconstruct the downstream part of a three-dimensional cylinder wake at Reynolds number 300. The turbulent flow is simulated by the high-performance multi-threaded DNS code Incompact3d. This dynamics model is first combined with synthetic three-dimensional observations, then with real orthogonal-plane stereo PIV observations, to reconstruct the full three-dimensional flow.
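A minimal 4DVar sketch on a toy linear model x_{t+1} = A x_t: find the initial state x0 that minimizes the misfit to the observations over a time window, with the gradient computed by the adjoint (backward) recursion. In the thesis the role of A is played by a Navier-Stokes solver such as Incompact3d; everything below is an illustrative assumption of ours.

```python
import numpy as np

A = np.array([[0.99, 0.1], [-0.1, 0.99]])            # toy linear dynamics
T = 20
x_true = np.empty((T, 2)); x_true[0] = [1.0, 0.0]
for t in range(T - 1):
    x_true[t + 1] = A @ x_true[t]
rng = np.random.default_rng(6)
obs = x_true + 0.05 * rng.normal(size=(T, 2))        # noisy observations

def cost_and_grad(x0):
    xs = [x0]
    for _ in range(T - 1):
        xs.append(A @ xs[-1])
    resid = np.array(xs) - obs
    adj = np.zeros(2)
    for t in reversed(range(T)):                      # adjoint (backward) sweep
        adj = A.T @ adj + resid[t]
    return 0.5 * (resid ** 2).sum(), adj

x0 = np.zeros(2)
for _ in range(200):                                  # steepest descent on x0
    _, g = cost_and_grad(x0)
    x0 -= 0.01 * g
print(x0, "vs true", x_true[0])
```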
Trillon, Adrien. "Reconstruction de défauts à partir de données issues de capteurs à courants de Foucault avec modèle direct différentiel." Phd thesis, Ecole centrale de nantes - ECN, 2010. http://tel.archives-ouvertes.fr/tel-00700739.
Djiguiba, Antimbé Ousmane. "Segmentation et lissage de données mesurées pour la reconstruction de surfaces en CAO : apport des techniques de subdivision." Valenciennes, 2002. https://ged.uphf.fr/nuxeo/site/esupversions/0fd3487f-e76d-44a0-ac21-d0a9461e62fe.
The goal of surface reconstruction from measured data obtained by digitizing a real part in CAD is to create accurate, geometrically high-quality continuous models. This requires segmentation and smoothing processes whose success depends on the processing of measurement errors and on the extraction of the data's underlying features: curvature extrema and curvature changes. The thesis deals with data pre-processing and feature extraction as a continuous geometric approximation of the data using subdivision techniques, which generate continuous geometric models from polygonal meshes. The idea is to assume that the discrete data are the result of subdivision at infinity from an initial polygon (curves) or an initial polyhedron (surfaces). An iterative inversion process using subdivision equations and properties allows recovering this polygon or polyhedron. This solution is defined in a general framework using multiresolution techniques in CAD.
Carminati, Federico. "Conception, réalisation et exploitation du traitement de données de l’expérience ALICE pour la simulation, la reconstruction et l’analyse." Nantes, 2013. http://archive.bu.univ-nantes.fr/pollux/show.action?id=0ed58585-b62e-40b5-8849-710d1e15c6c2.
The ALICE (A Large Ion Collider Experiment) experiment at the CERN (Conseil Européen pour la Recherche Nucléaire) LHC (Large Hadron Collider) facility uses an integrated software framework for the design of the experimental apparatus, the evaluation of its performance and the processing of the experimental data. Federico Carminati designed this framework. It includes the event generators and the algorithms for particle transport describing the details of particle-matter interactions (designed and implemented by Federico Carminati), the reconstruction of the particle trajectories and the final physics analysis.
Rekik, Wafa. "Fusion de données temporelles, ou 2D+t, et spatiales, ou 3D, pour la reconstruction de scènes 3D+t et traitement d'images sphériques : applications à la biologie cellulaire." Paris 6, 2007. http://www.theses.fr/2007PA066655.
Full textPestel, Valentin. "Détection de neutrinos auprès du réacteur BR2 : analyse des premières données de l'expérience SoLid." Thesis, Normandie, 2019. http://www.theses.fr/2019NORMC238.
SoLid (Search for Oscillations with a Lithium-6 detector) is a very short baseline (<10 m) reactor anti-neutrino experiment which takes data at the BR2 nuclear reactor (SCK-CEN, Belgium). Its first goal is to probe the "reactor" anomaly and to test the hypothesis of the existence of "light" sterile neutrino(s) (Δm² ~ 1 eV²). Secondarily, the measurement of the neutrino energy spectrum induced by the fission of 235U will provide new constraints. The 1.6-ton detector, segmented into 12800 detection cells of 5x5x5 cm3, is based on a new technology using two scintillators (PVT and ZnS) read out by MPPCs. After contextualizing the experiment and describing the detector, this manuscript describes the procedure used to characterize the detector during its construction. Then, data reconstruction and event identification are presented. The following part describes the calibration procedures, focusing on the evaluation of the neutron detection efficiency. Finally, a first analysis validates the understanding of the backgrounds and extracts the anti-neutrino signal.
Verdie, Yannick. "Modélisation de scènes urbaines à partir de données aériennes." Thesis, Nice, 2013. http://www.theses.fr/2013NICE4078.
Analysis and 3D reconstruction of urban scenes from physical measurements is a fundamental problem in computer vision and geometry processing. Within the last decades, an important demand has arisen for automatic methods generating urban scene representations. This thesis investigates the design of pipelines for solving the complex problem of reconstructing 3D urban elements from either aerial Lidar data or Multi-View Stereo (MVS) meshes. Our approaches generate accurate and compact mesh representations enriched with urban-related semantic labeling. In urban scene reconstruction, two important steps are necessary: an identification of the different elements of the scene, and a representation of these elements with 3D meshes. Chapter 2 presents two classification methods which yield a segmentation of the scene into semantic classes of interest. The benefit is twofold. First, this brings awareness of the scene for better understanding. Second, different reconstruction strategies are adopted for each type of urban element. Our idea of inserting both semantic and structural information within urban scenes is discussed and validated through experiments. In Chapter 3, a top-down approach to detect 'Vegetation' elements from Lidar data is proposed, using Marked Point Processes and a novel optimization method. In Chapter 4, bottom-up approaches are presented, reconstructing 'Building' elements from Lidar data and from MVS meshes. Experiments on complex urban structures illustrate the robustness and scalability of our systems.
Boulch, Alexandre. "Reconstruction automatique de maquettes numériques 3D." Thesis, Paris Est, 2014. http://www.theses.fr/2014PEST1099/document.
The interest in digital models in the building industry is growing rapidly. These models centralize all the information concerning the building and facilitate communication between the players of construction: cost evaluation, physical simulations, virtual presentations, building lifecycle management, site supervision, etc. Although building models now tend to be used for large projects of new construction, there are no such models for existing buildings. In particular, old buildings do not have digital 3D models and information, whereas they would benefit the most from them, e.g., to plan cost-effective renovation that achieves good thermal performance. Such 3D models are reconstructed from the real building. Lately a number of automatic reconstruction methods have been developed, either from laser or photogrammetric data. Lasers are precise and produce dense point clouds. Their prices have greatly decreased in the past few years, making them affordable for industry. Photogrammetry, often less precise and failing in uniform regions (e.g. bare walls), is a lot cheaper than lasers. However most approaches only reconstruct a surface from point clouds, not a semantically rich building model. A building information model is the alliance of a geometry and a semantics for the scene elements. The main objective of this thesis is to define a framework for digital model production regarding both geometry and semantics, using point clouds as input. The reconstruction process is divided into four parts, gradually enriching information, from the points to the final digital mockup. First, we define a normal estimator for unstructured point clouds based on a robust Hough transform. It estimates accurate normals, even near sharp edges and corners, and deals with the anisotropy inherent to laser scans. Then, primitives such as planes are extracted from the point cloud. To avoid over-segmentation issues, we develop a general and robust statistical criterion for shape merging. It only requires a distance function from points to shapes. A piecewise-planar surface is then reconstructed. Plane hypotheses for visible and hidden parts of the scene are inserted in a 3D plane arrangement. Cells of the arrangement are labelled full or empty using a new regularization on corner count and edge length. A linear formulation allows us to efficiently solve this labelling problem with a continuous relaxation. Finally, we propose an approach based on constrained attribute grammars for 3D model semantization. This method is entirely bottom-up. We prevent the possible combinatorial explosion by introducing maximal operators and an order on variable instantiation.
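The robust-Hough flavour of the normal estimator can be sketched as follows: among a point's neighbours, random triplets each vote for the plane they span, and the dominant direction bin wins, which resists neighbourhoods straddling a sharp edge (where plain PCA would average the two sides). Bin resolution and triplet counts below are illustrative assumptions, not the thesis parameters.

```python
import numpy as np
from scipy.spatial import cKDTree

def hough_normal(points, i, k=30, triplets=100, bins=10, rng=None):
    rng = rng or np.random.default_rng()
    _, idx = cKDTree(points).query(points[i], k=k)
    votes, sums = {}, {}
    for _ in range(triplets):
        a, b, c = points[rng.choice(idx, 3, replace=False)]
        n = np.cross(b - a, c - a)
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue
        n = n / norm
        if n[2] < 0:
            n = -n                                    # fold antipodal directions
        key = tuple(np.floor(n * bins).astype(int))   # coarse direction bin
        votes[key] = votes.get(key, 0) + 1
        sums[key] = sums.get(key, 0) + n
    best = max(votes, key=votes.get)
    m = sums[best] / votes[best]                      # mean normal of winning bin
    return m / np.linalg.norm(m)

rng = np.random.default_rng(7)
xy = rng.uniform(-1, 1, (500, 2))
cloud = np.c_[xy, 0.005 * rng.normal(size=500)]       # noisy z = 0 plane
print(hough_normal(cloud, 0, rng=rng))                # close to (0, 0, 1)
```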
Pires, Sandrine. "Application des méthodes multi-échelles aux effets de lentille gravitationnelles faibles : reconstruction et analyse des cartes de matière noire." Paris 11, 2008. http://www.theses.fr/2008PA112256.
Modern cosmology refers to the physical study of the Universe and is based on a cosmological model that deals with the structure of the Universe, its origins and its evolution. Over the last decades, observations have provided evidence for both dark matter and dark energy. The Universe has been found to be dominated by these two components, whose composition remains a mystery. Weak gravitational lensing provides a direct way to probe dark matter and can be used to map the dark matter distribution. Furthermore, the weak lensing effect is believed to be the most promising tool to understand the nature of dark matter and dark energy, and thus to constrain the cosmological model. New weak lensing surveys, more and more accurate, are already planned that will cover a large fraction of the sky, but a significant effort must be made to improve the current analyses. In this thesis, in order to improve the weak lensing data processing, we suggest using new methods of analysis: multiscale methods, which make it possible to transform a signal in a way that facilitates its analysis. We are interested in the reconstruction and analysis of the dark matter mass map. First, we develop a new method to deal with missing data. Second, we suggest a new filtering method that improves the dark matter mass map reconstruction. Last, we introduce a new statistic to set tighter constraints on the cosmological model.
Mouche, Fabrice. "Etude structurale de macromolécules biologiques par cryomicroscopie électronique, reconstruction tridimensionnelle et recalage de données de cristallographie aux rayons X." PhD thesis, Université Pierre et Marie Curie - Paris VI, 2001. http://tel.archives-ouvertes.fr/tel-00006224.
Veron, Didier. "Utilisation des FADC pour la reconstruction et l'analyse des données de bruit de fond dans l'expérience neutrino de Chooz." Lyon 1, 1997. http://www.theses.fr/1997LYO10074.
Martin, Matthieu. "Reconstruction 3D de données échographiques du cerveau du prématuré et segmentation des ventricules cérébraux et thalami par apprentissage supervisé." Thesis, Lyon, 2019. http://www.theses.fr/2019LYSEI118.
About 15 million children are born prematurely each year worldwide. These patients are likely to suffer from brain abnormalities that can cause neurodevelopmental disorders: cerebral palsy, deafness, blindness, intellectual development delay, etc. Studies have shown that the volume of brain structures is a good indicator which enables these risks to be assessed and predicted, in order to guide patients through appropriate care pathways during childhood. This thesis aims to show that 3D ultrasound could be an alternative to MRI that would enable the volume of brain structures to be quantified in all premature infants. This work focuses more particularly on the segmentation of the lateral ventricles and thalami. Its main contributions are: the development of an algorithm to create 3D ultrasound data from 2D transfontanellar ultrasound of the premature brain, the high-quality segmentation of the lateral ventricles and thalami in clinical time, and the learning by a convolutional neural network (CNN) of the anatomical position of the lateral ventricles. In addition, we have created several annotated databases in partnership with the CH of Avignon. Our reconstruction algorithm was used to reconstruct 25 high-quality ultrasound volumes. It was validated in vivo, where an accuracy of 0.69 ± 0.14 mm was obtained on the corpus callosum. The best segmentation results were obtained with the V-net, a 3D CNN, which segmented the lateral ventricles and the thalami with respective Dice scores of 0.828 ± 0.044 and 0.891 ± 0.016 in a few seconds. Learning the anatomical position of the lateral ventricles was achieved by integrating a CPPN (Compositional Pattern Producing Network) into the CNNs. It significantly improved the accuracy of CNNs with few layers: for example, in the case of the 7-layer V-net, the Dice increased from 0.524 ± 0.076 to 0.724 ± 0.107. This thesis shows that it is possible to automatically segment brain structures of the premature infant in 3D ultrasound data accurately and in a clinical time. This proves that high-quality 3D ultrasound could be used in clinical routine to quantify the volume of brain structures, and paves the way for studies evaluating its benefit to patients.
El, hayek Nadim. "Contribution à la reconstruction de surfaces complexes à partir d'un grand flot de données non organisées pour la métrologie 3D." Thesis, Paris, ENSAM, 2014. http://www.theses.fr/2014ENAM0055/document.
Complex surfaces exhibit real challenges with regard to their design specification, their manufacturing, their measurement and the evaluation of their manufacturing defects. They are classified according to their geometric/shape complexity as well as to their required tolerance, and the manufacturing and measurement processes are selected accordingly. In order to extract significant information from the measured data, a data processing scheme is essential. Here, processing involves surface reconstruction with the aim of reconstituting the geometry and topology underlying the points and extracting the necessary metrological information (form and/or dimensional errors). For the category of aspherical surfaces, where a mathematical model is available, the processing of the data, which are not necessarily organized, is done by fitting/associating the aspherical model to the data. The precision sought in optics is typically nanometric. In this context, we propose the L-BFGS optimization algorithm, used for the first time in metrological applications, which solves unconstrained non-linear optimization problems precisely, automatically and quickly. The L-BFGS method remains efficient and performs well even in the presence of very large amounts of data. In the category of general freeform surfaces, and particularly turbine blades, the manufacturing, measurement and data processing are at a different scale and require sub-micrometric precision. Freeform surfaces are generally not defined by a mathematical formula but are rather represented using parametric models such as B-Splines and NURBS. We present a detailed state-of-the-art review of existing reconstruction algorithms in this field and then propose a new approach based on active contour deformation of B-Splines. The algorithm is independent of problems related to initialization and initial parameterization; consequently, it is a new algorithm with promising results. We then conduct a thorough study and a series of tests to show the advantages and limitations of our approach on examples of closed curves in the plane. We conclude the study with perspectives regarding improvements of the method and its extension to surfaces in 3D.
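The aspherical association step can be sketched with SciPy's off-the-shelf L-BFGS-B: fit the standard even-asphere sag z(r) = r²/(R(1+√(1-(1+k)r²/R²))) + a₄r⁴ to measured points by least squares. The thesis uses its own L-BFGS implementation, full 3D poses and nanometric data; this sketch, with made-up parameters, only shows the shape of the optimization problem.

```python
import numpy as np
from scipy.optimize import minimize

def sag(r, R, k, a4):
    """Even asphere sag: conic term plus one polynomial term."""
    return r**2 / (R * (1 + np.sqrt(1 - (1 + k) * r**2 / R**2))) + a4 * r**4

r = np.linspace(0, 5, 200)                          # radial coordinates (mm)
rng = np.random.default_rng(8)
z_meas = sag(r, 50.0, -0.8, 1e-6) + 1e-7 * rng.normal(size=r.size)

def cost(p):
    return np.sum((sag(r, *p) - z_meas) ** 2)       # least-squares criterion

res = minimize(cost, x0=[45.0, -0.5, 0.0], method="L-BFGS-B",
               bounds=[(10, 100), (-2, 0), (-1e-3, 1e-3)])
print(res.x)                                        # close to (50, -0.8, 1e-6)
```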