Dissertations / Theses on the topic 'Techniques de traitement d’image'
Consult the top 50 dissertations / theses for your research on the topic 'Techniques de traitement d’image.'
Bigot-Marchand, Stéphanie. "Outils de traitement d’images adaptés au traitement d’images omnidirectionnelles." Amiens, 2008. http://www.theses.fr/2008AMIE0128.
In this thesis, we develop processing tools adapted to omnidirectional images thanks to the "sphere of equivalence". Indeed, applying classical image processing methods (that is to say, methods designed for planar images) to omnidirectional images produces errors, because they do not take into account the specific geometry of these images. The approach we propose provides efficient methods regardless of whether we are at the center or the periphery of the image. In the first part, we recall what omnidirectional vision and a catadioptric sensor are. We then justify the existence of the "sphere of equivalence". In the second part, we present several mathematical tools (spherical harmonics, spherical convolution, etc.) useful for the development of our spherical methods. We then construct edge-detection and smoothing operators for spherical images. We have tested these different methods in order to determine their advantages for omnidirectional image low-level processing in comparison with "classical methods". These tests highlight the advantage of the "spherical methods", which provide uniform processing across the image.
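The equivalence-sphere idea lends itself to a short illustration. The sketch below lifts a catadioptric image point onto the unit sphere using the unified central projection model, a standard formulation that may differ from the exact one used in the thesis; the intrinsic parameters fx, fy, cx, cy and the mirror parameter xi are hypothetical placeholders.

```python
import numpy as np

def lift_to_sphere(u, v, fx, fy, cx, cy, xi):
    """Lift a catadioptric image point (u, v) onto the unit sphere
    (the 'sphere of equivalence') via the unified projection model."""
    # Normalized image coordinates.
    x = (u - cx) / fx
    y = (v - cy) / fy
    r2 = x * x + y * y
    # Inverse projection: scale factor placing the point on the unit sphere.
    eta = (xi + np.sqrt(1.0 + (1.0 - xi * xi) * r2)) / (r2 + 1.0)
    return np.array([eta * x, eta * y, eta - xi])

# Example with hypothetical calibration values.
p = lift_to_sphere(400.0, 300.0, fx=320.0, fy=320.0, cx=512.0, cy=384.0, xi=0.9)
print(p, np.linalg.norm(p))  # the norm is 1: the point lies on the sphere
```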
Moghrani, Madjid. "Segmentation coopérative et adaptative d’images multicomposantes : application aux images CASI." Rennes 1, 2007. http://www.theses.fr/2007REN1S156.
This thesis focuses on cooperative approaches to image segmentation. Two adaptive systems were implemented; the first is parallel and the second sequential. The parallel system is based on competing segmentation-by-classification methods. The sequential system runs these methods according to a predefined schedule. The extraction of features for segmentation is performed according to the region's nature (uniform or textured). Both systems are composed of three main modules. The first module detects the nature of the image regions (uniform or textured) in order to adapt subsequent processing. The second module is dedicated to the segmentation of the detected regions according to their nature. The segmentation results are assessed and validated at different levels of the segmentation process. The third module merges the intermediate results obtained on the two types of region. Both systems are tested and compared on synthetic and real mono- and multi-component images derived from aerial remote sensing.
Emam, Mohammed. "Prédiction des facteurs de risque conduisant à l’emphysème chez l’homme par utilisation de techniques diagnostiques." Thesis, Paris 11, 2012. http://www.theses.fr/2012PA112081/document.
Chronic Obstructive Pulmonary Disease (COPD) refers to a group of lung diseases that block airflow and make it increasingly difficult to breathe. Emphysema and chronic bronchitis are the two main conditions that make up COPD, but COPD can also refer to damage caused by chronic asthmatic bronchitis. Pulmonary emphysema is defined as a lung disease characterized by "abnormal enlargement of the air spaces distal to the terminal, non-respiratory bronchiole, accompanied by destructive changes of the alveolar walls". These lung parenchymal changes are pathognomonic for emphysema. Chronic bronchitis is a form of bronchitis characterized by excess production of sputum, leading to a chronic cough and obstruction of air flow. In all cases, damage to the airways eventually interferes with the exchange of oxygen and carbon dioxide in the lungs. Conventional techniques for diagnosing emphysema are based on indirect features, such as clinical examination, pulmonary function tests (PFT) and subjective visual evaluation of CT scans. These tests are of limited value in assessing mild to moderate emphysema. The present work discusses the possibility of applying a nonlinear analysis approach to the air density distribution within the lung airway tree at any level of branching. Computed tomography (CT) source images of the lung are subjected to two phases of processing in order to produce a fractal coefficient of the air density distribution. In the first phase, raw pixel values from the source images, corresponding to all possible air densities, are processed by a software tool developed to construct a product image. This is done through Cascading Elimination of Unwanted Elements (CEUE), a preprocessing analysis step applied to the source image: it identifies values of air density within the airway tree while eliminating all non-air-density values. Then, during the second phase, a process of Resolution Diminution Iterations (RDI) takes place iteratively. Every resolution reduction produces a new resultant histogram, composed of a number of peaks, each corresponding to a cluster of air densities. A curve is plotted for each resolution reduction versus the number of peaks counted at that particular resolution. This permits the calculation of the fractal dimension from the regression slope of the log-log power-law plot.
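The resolution-reduction idea resembles classical box counting, where a fractal dimension is read off the slope of a log-log regression. Here is a minimal numpy sketch of box counting on a binary mask; it is illustrative only, as the thesis's CEUE/RDI pipeline is more elaborate.

```python
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Estimate a fractal dimension of a binary 2D mask: count occupied
    boxes at several scales, then regress log(count) on log(1/size)."""
    counts = []
    for s in sizes:
        h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

rng = np.random.default_rng(0)
mask = rng.random((256, 256)) > 0.7  # toy 'air density' mask
print(box_counting_dimension(mask))
```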
Mairesse, Fabrice. "Contrôle dimensionnel de panneaux de particules de grandes dimensions par traitement d’images." Dijon, 2007. http://www.theses.fr/2007DIJOS074.
This thesis deals with the dimensional control, under industrial conditions, of large manufactured particleboards. Two principal problems were addressed: mosaicking due to the acquisition conditions, and the measurement of distorted circular forms. For the first problem, a solution based on interest-point detection via the Hessian matrix, combined with local characterization by the Census transform, proved to be an efficient method. The second problem, due to the crumbly nature of the material and obstructions by the coating, is the measurement of drillings with imperfect edges. In order to compensate for distortions, a multi-scale approach based on active contours was developed. In a single pass, it gives a set of approximations of the initial outline around a global scale factor. More regularized outlines are then generated, more or less close to the original form, following a second scale parameter. The characteristics of the resulting circular forms are measured with a new estimator based on the Radon transform: the circle's tangents allow the center and the radius to be found. The method relies on an accumulation principle from discrete geometry and discretized parametric fitting, giving subpixel precision. This new approach is more accurate than classical estimators in the framework of distorted circles.
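For comparison with such Radon-based estimators, the classical algebraic (Kåsa) least-squares circle fit is a common baseline. The following sketch shows that baseline, not the thesis's estimator.

```python
import numpy as np

def fit_circle_kasa(x, y):
    """Algebraic least-squares circle fit: solve for center (a, b) and
    radius r from x^2 + y^2 = 2ax + 2by + (r^2 - a^2 - b^2)."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    sol, *_ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)
    a, b, c = sol
    return a, b, np.sqrt(c + a * a + b * b)

theta = np.linspace(0, 2 * np.pi, 50)
rng = np.random.default_rng(1)
x = 10 + 4 * np.cos(theta) + rng.normal(0, 0.05, 50)
y = -3 + 4 * np.sin(theta) + rng.normal(0, 0.05, 50)
print(fit_circle_kasa(x, y))  # approx center (10, -3), radius 4
```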
Rodrigues, José Marconi. "Transfert sécurisé d’images par combinaison de techniques de compression, cryptage et marquage." Montpellier 2, 2006. http://www.theses.fr/2006MON20085.
Hachicha, Walid. "Traitement, codage et évaluation de la qualité d’images stéréoscopiques." Thesis, Paris 13, 2014. http://www.theses.fr/2014PA132037.
Recent developments in 3D stereoscopic technology have opened new horizons in many application fields, such as 3DTV, 3D cinema, video games and videoconferencing, and at the same time raised a number of challenges related to the processing and coding of 3D data. Today, stereoscopic imaging technology is becoming widely used in many fields, but some problems remain, related to the physical limitations of image acquisition systems, e.g. transmission and storage requirements. The objective of this thesis is the development of methods for improving the main steps of the stereoscopic imaging pipeline: enhancement, coding and quality assessment. The first part of this work addresses quality issues, including contrast enhancement and quality assessment of stereoscopic images. Three algorithms have been proposed. The first algorithm deals with contrast enhancement, aiming at promoting local contrast guided by a computed object-importance map of the visual scene. The second and third algorithms aim at predicting the distortion severity of stereo images. In the second, we have proposed a full-reference metric that requires the reference image and is based on 2D and 3D findings such as amplitude non-linearity, contrast sensitivity, frequency and directional selectivity, and a binocular just-noticeable-difference model. In the third algorithm, we have proposed a no-reference metric that needs only the stereo pair to predict its quality. The latter is based on natural scene statistics to identify the distortion affecting the stereo image. The 3D statistical features combine features extracted from the natural stereo pair with those from the estimated disparity map. To this end, a joint wavelet transform inspired by the vector lifting concept is first employed; the features are then extracted from the obtained subbands. The second part of this dissertation addresses stereoscopic image compression. We started by investigating a one-dimensional directional discrete cosine transform to encode the disparity-compensated residual image. Afterwards, based on the wavelet transform, we investigated two techniques for optimizing the computation of the residual image. Finally, we present efficient bit-allocation methods for stereo image coding. Generally, the bit-allocation problem is solved empirically by searching for the optimal rates leading to the minimum distortion value. Thanks to recently published work on approximations of the entropy and distortion functions, we propose accurate and fast bit-allocation schemes appropriate for both open-loop and closed-loop stereo coding structures.
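The bit-allocation step admits a classical closed form under a high-rate model, where subband distortion is modeled as D_i(R_i) = sigma_i^2 * 2^(-2 R_i). The numpy sketch below shows this textbook solution, not the schemes proposed in the thesis.

```python
import numpy as np

def allocate_bits(variances, total_rate):
    """High-rate optimal allocation: R_i = R + 0.5*log2(sigma_i^2 / GM),
    where GM is the geometric mean of the subband variances."""
    variances = np.asarray(variances, dtype=float)
    gm = np.exp(np.mean(np.log(variances)))          # geometric mean
    return total_rate + 0.5 * np.log2(variances / gm)

rates = allocate_bits([16.0, 4.0, 1.0, 0.25], total_rate=2.0)
print(rates, rates.mean())  # the rates average to the 2 bit/sample budget
```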
Takam, tchendjou Ghislain. "Contrôle des performances et conciliation d’erreurs dans les décodeurs d’image." Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAT107/document.
This thesis deals with the development and implementation of error detection and correction algorithms for images, in order to control the quality of the images produced at the output of digital decoders. To achieve the objectives of this work, we first study the state of the art of existing approaches. Examination of classically used approaches justified the study of a set of objective methods for evaluating the visual quality of images, based on machine learning. These algorithms take as input a set of characteristics or metrics extracted from the images. Depending on the characteristics extracted and on whether a reference image is available, two kinds of objective evaluation methods have been developed: the first based on full-reference metrics and the second based on no-reference metrics, both handling non-specific distortions. In addition to these objective evaluation methods, a method for evaluating and improving image quality based on the detection and correction of defective pixels has been implemented. The results obtained have contributed to refining visual image-quality assessment methods, as well as to the construction of objective algorithms for detecting and correcting defective pixels, in comparison with the various methods currently used. An FPGA implementation has been carried out to integrate the best-performing models from the simulation phase.
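The learning-based evaluation pipeline can be sketched with scikit-learn: features extracted from images feed a regressor trained against subjective scores. The features and data below are synthetic placeholders, not the metrics used in the thesis.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))  # 6 toy quality features per image
y = X @ rng.normal(size=6) + rng.normal(scale=0.1, size=200)  # toy scores

# Train a no-reference quality predictor on 150 images, test on 50.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X[:150], y[:150])
print(np.corrcoef(model.predict(X[150:]), y[150:])[0, 1])  # correlation
```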
Baldacci, Fabien. "Graphe de surface orientée : un modèle opérationnel de segmentation d'image 3D." Thesis, Bordeaux 1, 2009. http://www.theses.fr/2009BOR13940/document.
In this work we focus on 3D image segmentation. The aim is to define a framework which, given a segmentation problem, allows an algorithm solving this problem to be designed efficiently. Since this framework must not be specific to a particular kind of segmentation problem, it has to allow an efficient implementation of most segmentation techniques and criteria, in order to combine them into new algorithms. It has to rely on a structuring model representing both the topology and the geometry of the partition of an image, so that the required information can be extracted efficiently. In this document, different segmentation techniques are presented in order to define the set of primitives required for their implementation. Existing models are presented with their advantages and drawbacks, and the new structuring model is then defined. Its full implementation is given, including details of its memory consumption and the time complexity of each primitive in the previously defined set of requirements. Some examples of use on real image analysis problems are described, along with possible extensions of the model and its implementation on parallel architectures.
Petit, Cécile. "Analyse d’images macroscopiques appliquée à l’injection directe Diesel." Saint-Etienne, 2006. http://www.theses.fr/2006STET4005.
Due to emission standards, car manufacturers have to improve combustion. This can be achieved by studying Diesel direct injection, particularly fuel atomization, as it determines mixture quality. The Diesel macroscopic spray is investigated using image processing. An image reference point is first calculated: the virtual spray origin (VSO), deduced from the elongated spray plumes' primary inertia axes and from a Voronoï diagram. These plumes are analyzed by calculating their penetration, angle and barycenter. Afterwards, the line deduced from the spray plume boundary, passing through the virtual injection center, is evaluated. This axis is the reference for the internal symmetry, expressed in terms of correlation and of absolute, Euclidean, infinite and logarithmic distances, the last being based on the Logarithmic Image Processing model. This last distance makes it possible to compare sprays acquired under different conditions (light source, ambient medium); it is the internal symmetry of the liquid continuous core. Then the line deduced from the plume grey levels, forced to pass through the VSO, with the distance to the VSO as an additional weight, is calculated. This axis is the basis of the external symmetry, established in terms of correlation and of absolute, Euclidean, infinite and Hausdorff distances. Finally, a spray image can be evaluated using a single parameter such as the Asplünd distance, circularities, or the barycenter. The study of the penetration and angle populations then shows their correlation, their part-to-part and plume-to-plume variation, and their non-Gaussian distributions. Injectors are then compared using the image-processing parameters. Finally, the study of data tendencies shows how promising the image-processing parameters are.
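The plume inertia axes mentioned here follow from standard image moments. Below is a minimal sketch computing a region's barycenter and principal-axis orientation from second-order central moments; it is generic moment analysis, not the thesis's full pipeline.

```python
import numpy as np

def principal_axis(img):
    """Barycenter and principal inertia axis angle of a grey-level region."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m = img.sum()
    xc, yc = (x * img).sum() / m, (y * img).sum() / m   # barycenter
    mu20 = ((x - xc) ** 2 * img).sum() / m
    mu02 = ((y - yc) ** 2 * img).sum() / m
    mu11 = ((x - xc) * (y - yc) * img).sum() / m
    angle = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)     # axis orientation
    return (xc, yc), angle

img = np.zeros((64, 64))
img[20:44, 10:54] = 1.0  # elongated synthetic 'plume'
print(principal_axis(img))
```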
Kennel, Pol. "Caractérisation de texture par analyse en ondelettes complexes pour la segmentation d’image : applications en télédétection et en écologie forestière." Thesis, Montpellier 2, 2013. http://www.theses.fr/2013MON20215/document.
The analysis of digital images, albeit widely researched, continues to present a real challenge today. In the case of several applications which aim to produce an appropriate description and semantic recognition of image content, particular attention must be given to image analysis. In response to such requirements, image content analysis is carried out automatically with the help of computational methods drawn from the domains of mathematics, statistics and physics. The use of image segmentation methods is a relevant and recognized way to represent the objects observed in images. Coupled with classification, segmentation allows a semantic segregation of these objects. However, existing methods cannot be considered generic, and despite having been inspired by various domains (military, medical, satellite, etc.), they are continuously subject to reevaluation, adaptation or improvement. For example, satellite images stand out in the image domain in terms of the specificity of their mode of acquisition, their format, and the object of observation (the Earth, in this case). The aim of the present thesis is to explore, by exploiting the notion of texture, methods of digital image characterization and supervised segmentation. Land, observed from space at different scales and resolutions, can be perceived as textured. Land-use maps can be obtained through the segmentation of satellite images, in particular through the use of textural information. We propose to develop competitive segmentation algorithms to characterize texture, using multi-scale representations of images obtained by wavelet decomposition and supervised classifiers such as Support Vector Machines. Given this context, the present thesis is principally articulated around various research projects which require the study of images at different scales and resolutions, and which vary in nature (e.g. multi-spectral, optical, LiDAR). Certain aspects of the methodology developed are applied to the different case studies undertaken.
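A compact illustration of the texture pipeline described here: wavelet subband energies as features, fed to an SVM classifier (PyWavelets and scikit-learn). This is a toy two-class setup, not the thesis's full multi-scale design.

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def wavelet_energy_features(patch, wavelet="db2", levels=3):
    """Mean absolute energy of each detail subband of a 2D patch."""
    coeffs = pywt.wavedec2(patch, wavelet, level=levels)
    return np.array([np.abs(d).mean() for level in coeffs[1:] for d in level])

rng = np.random.default_rng(0)
smooth = [rng.normal(size=(32, 32)) * 0.1 for _ in range(40)]  # low-energy texture
rough = [rng.normal(size=(32, 32)) for _ in range(40)]         # high-energy texture
X = np.array([wavelet_energy_features(p) for p in smooth + rough])
y = np.array([0] * 40 + [1] * 40)
clf = SVC(kernel="rbf").fit(X[::2], y[::2])   # train on half the patches
print(clf.score(X[1::2], y[1::2]))            # test on the other half
```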
Valet, Lionel. "Un système flou de fusion coopérative : application au traitement d’images naturelles." Chambéry, 2001. http://www.theses.fr/2001CHAMS022.
Madec, Morgan. "Conception, simulation et réalisation d’un processeur optoélectronique pour la reconstruction d’images médicales." Université Louis Pasteur (Strasbourg) (1971-2008), 2006. https://publication-theses.unistra.fr/public/theses_doctorat/2006/MADEC_Morgan_2006.pdf.
Optical processing can be used to speed up some algorithms for image reconstruction from the tomodensitometric data provided by volume exploration systems, which may be of high interest in meeting the needs of future assisted-therapy systems. Two systems are described in this document, corresponding to the two main steps of the above-mentioned algorithms: a filtering processor and a backprojection processor. They are first considered from a hardware point of view. Whatever function it computes, an optical processor is made up of light sources, displays and cameras. Present state-of-the-art devices highlight a weakness in display performance; special attention has therefore been paid to ferroelectric liquid crystal spatial light modulators (modelling, simulations, and characterizations of commercial solutions). The potential of optical architectures is compared with electronic solutions in terms of computation power and processed image quality. This study has been carried out for both systems, first in simulation with a reliable model of the architecture, and then with an experimental prototype. The optical filtering processor does not give accurate results: the signal-to-noise ratio of the reconstructed image is about 20 dB in simulation (the model used does not take most geometrical distortions into account), and experimental measurements show strong limitations, especially regarding the problem of image formation under coherent lighting (speckle). On the other hand, the results obtained with the optical backprojection processor are most encouraging. The model, more complete and accurate than that of the filtering processor, together with the simulations, shows that the processed image quality can be virtually equivalent to that obtained by digital means (signal-to-noise ratio over 50 dB), with a two-order-of-magnitude speed-up. Results obtained with the experimental prototype are in accordance with the simulations and confirm the potential of the architecture. As an extension, a hybrid processor involving the backprojection processor for the computation of more complex reconstruction algorithms, e.g. ASSR for helical CT scans, is proposed in the last part of the document.
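The two steps targeted by these processors, filtering and backprojection, are the core of filtered backprojection. A digital reference run with scikit-image gives the conventional baseline the optical processors are compared against; this is not the optical architecture itself.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

image = rescale(shepp_logan_phantom(), 0.5)          # 200x200 phantom
angles = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(image, theta=angles)                # simulated projections
reco = iradon(sinogram, theta=angles, filter_name="ramp")  # filter + backproject
err = np.sqrt(np.mean((reco - image) ** 2))
print(f"RMS reconstruction error: {err:.4f}")
```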
Begaint, Jean. "Towards novel inter-prediction methods for image and video compression." Thesis, Rennes 1, 2018. http://www.theses.fr/2018REN1S038/document.
Due to the wide availability of video cameras and new social media practices, as well as the emergence of cloud services, images and videos today constitute a significant share of the total data transmitted over the internet. Video streaming applications account for more than 70% of the world's internet bandwidth, while billions of images are already stored in the cloud and millions are uploaded every day. The ever-growing streaming and storage requirements of these media call for constant improvements to image and video coding tools. This thesis explores novel approaches for improving current inter-prediction methods. Such methods leverage redundancies between similar frames, and were originally developed in the context of video compression. In a first approach, novel global and local inter-prediction tools are combined to improve the efficiency of image-set compression schemes based on video codecs. By coupling a global geometric and photometric compensation with a locally linear prediction, significant improvements can be obtained. A second approach introduces a region-based inter-prediction scheme, which improves coding performance over existing solutions by estimating and compensating geometric and photometric distortions at a semi-local level. This approach is then adapted and validated in the context of video compression. Bit-rate improvements are obtained, especially for sequences displaying complex real-world motion such as zooms and rotations. The last part of the thesis focuses on deep learning approaches to inter-prediction. Deep neural networks have shown striking results on a large number of computer vision tasks in recent years. Deep-learning-based methods proposed for frame interpolation are studied here in the context of video compression. Coding performance improvements over traditional motion estimation and compensation methods highlight the potential of these deep architectures.
Hennequin, Christophe. "Etude et réalisation d’un calculateur temps réel embarqué pour la détection de petits objets dans des séquences d’images multi-échelles." Dijon, 2008. http://www.theses.fr/2008DIJOS015.
This doctoral thesis is part of a research project of the French-German Research Institute of Saint-Louis (ISL) set up to equip artillery projectiles with an on-board image acquisition and processing system. The study focused on real-time target detection in aerial image sequences, considering the imposed restrictions of low-quality images, reduced target size and variable acquisition altitude. In view of the unsatisfactory efficiency of the reference algorithms at rapidly detecting small objects in our image sequences, an advanced detection algorithm combining statistical methods with morphological filtering has been developed. After analysing the detector's behaviour in detail and validating its performance, an algorithm/architecture adequacy approach is used to implement real-time processing compatible with embedded systems. Finally, the design of a specific, highly parallel architecture allowed a prototype computer to be built.
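Small-object detection against a slowly varying background is often illustrated with a morphological white top-hat followed by thresholding. The sketch below is in that spirit, a generic baseline rather than the combined statistical/morphological detector developed in the thesis.

```python
import numpy as np
from skimage.morphology import white_tophat, disk

rng = np.random.default_rng(0)
background = np.outer(np.linspace(50, 100, 128), np.linspace(1, 1.5, 128))
img = background + rng.normal(0, 2, (128, 128))
img[60:63, 80:83] += 40                         # small bright target

residue = white_tophat(img, footprint=disk(4))  # keeps structures smaller than the SE
detection = residue > residue.mean() + 5 * residue.std()
print(np.argwhere(detection).mean(axis=0))      # approximate target location
```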
Bossu, Jérémie. "Segmentation d’images pour la localisation d’adventices : application à la réalisation d’un système de vision pour une pulvérisation spécifique en temps réel." Dijon, 2007. http://www.theses.fr/2007DIJOS079.
Abergel, Rémy. "Quelques modèles mathématiques et algorithmes rapides pour le traitement d’images." Thesis, Sorbonne Paris Cité, 2016. http://www.theses.fr/2016USPCB051/document.
In this thesis, we focus on several mathematical models dedicated to low-level digital image processing tasks. Mathematics can be used to design innovative models and to provide rigorous studies of the properties of the produced images. However, these models sometimes involve intensive algorithms with high computational complexity. We take special care in developing fast algorithms from the considered mathematical models. First, we give a concise description of some fundamental results of convex analysis based on Legendre-Fenchel duality. These mathematical tools are particularly efficient for minimizing convex and nonsmooth energies, such as those involving the total variation functional, which is used in many image processing applications. Then, we focus on a Fourier-based discretization scheme of the total variation, called the Shannon total variation, which provides subpixel control of the image regularity. In particular, we show that, contrary to the classically used discretization schemes of the total variation based on finite differences, the use of the Shannon total variation yields images that can be easily interpolated. We also show that this model provides improvements in terms of isotropy and grid invariance, and propose a new restoration model which transforms an image into a very similar one that can be easily interpolated. Next, we propose an adaptation of the TV-ICE (Total Variation Iterated Conditional Expectations) model, proposed by Louchet and Moisan in 2014, to address the restoration of images corrupted by Poisson noise. We derive an explicit form of the recursion operator involved in this scheme, and show linear convergence of the algorithm, as well as the absence of staircasing in the produced images. We also show that this variant involves the numerical evaluation of a generalized incomplete gamma function, which must be carefully handled due to the numerical errors inherent in finite-precision floating-point calculus. We then propose a fast algorithm dedicated to the evaluation of this generalized incomplete gamma function, and show that the accuracy achieved by the proposed procedure is near optimal for a large range of parameters. Lastly, we focus on the astre (A contrario Smooth TRajectory Extraction) algorithm, proposed by Primet and Moisan in 2011 to perform trajectory detection from a noisy point-set sequence. We propose a variant of this algorithm, called cutastre, which manages to break the quadratic complexity of astre with respect to the number of frames of the sequence, while showing similar (and even slightly better) detection performance and preserving some interesting theoretical properties of the original astre algorithm.
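As a point of reference for the total-variation material, the classical finite-difference TV denoising that the Shannon total variation is compared against can be run in a few lines with scikit-image. This is the standard ROF-type model, not the Shannon TV or TV-ICE schemes of the thesis.

```python
import numpy as np
from skimage import data, img_as_float
from skimage.restoration import denoise_tv_chambolle

image = img_as_float(data.camera())
noisy = image + np.random.default_rng(0).normal(0, 0.1, image.shape)
denoised = denoise_tv_chambolle(noisy, weight=0.1)   # TV regularization
print(float(np.abs(denoised - image).mean()))        # mean absolute error
```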
Coudray, Nicolas. "Techniques de segmentation d’images et stratégie de pilotage pour l’analyse automatique d’échantillons en microscopie électronique : Application à la cristallisation 2d." Mulhouse, 2008. https://www.learning-center.uha.fr/opac/resource/techniques-de-segmentation-dimages-et-strategie-de-pilotage-pour-lanalyse-automatique-dechantillons-/BUS4111251.
New segmentation techniques are developed in this thesis to control a transmission electron microscope and to characterize 2D crystals of membrane proteins. A strategy based on image analysis has been developed to drive the micrograph acquisition process. It is organized in three steps, during which the microscope is progressively directed to the regions of interest (ROIs). Adapted tools have been developed to select these ROIs at low and medium magnification. At high magnification, the crystallinity of the selected regions is analyzed. Images of membranes have poor contrast and are very noisy. Our main segmentation algorithm proposes a multi-resolution gradient analysis combined with a scale-adapted threshold. The edge information thresholded at different scales is gathered to build the Reconstructed Gradient-Like image. The watershed algorithm is then applied to partition this image into meaningful regions. A new tool is also introduced to threshold gradient images, based on a piecewise linear regression of the descending slope of the unimodal histogram. This method is robust to statistical variations of the histograms. The automatic analysis strategy has been validated with an in situ implementation of a prototype. Tests underline the potential of this work and of automatic image analysis for the best possible characterization and classification of the membranes.
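The gradient-then-watershed chain described here can be prototyped with scikit-image: a smoothed gradient magnitude partitioned by a marker-based watershed. This is a generic version; the thesis builds its gradient from a multi-resolution, scale-adapted analysis.

```python
import numpy as np
from skimage import data, filters, segmentation, feature

image = data.coins()
gradient = filters.sobel(filters.gaussian(image, sigma=2))  # smoothed gradient
# Markers from local minima of the gradient (one label per basin seed).
coords = feature.peak_local_max(-gradient, min_distance=20)
markers = np.zeros(image.shape, dtype=int)
markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
labels = segmentation.watershed(gradient, markers)
print(labels.max(), "regions")
```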
Pons, Bernad Gemma. "Aide à l’interprétation d’images radar à ouverture synthétique : analyse conjointe des propriétés géométriques et radiométriques des images SAR." Aix-Marseille 3, 2008. http://www.theses.fr/2008AIX30013.
This thesis forms part of the research effort currently being undertaken on segmentation and classification to ease the interpretation of radar images. It contributes to this research by proposing a semi-automatic scene analysis approach to assist the interpretation of images acquired by a synthetic aperture radar (SAR). It focuses mainly on the application of segmentation methods to classification and object recognition problems. Its aim is to propose fast and simple methods, easily comprehensible to users who are not experts in image processing. The proposed approach is a two-stage algorithm. First, a SAR image partition is obtained in an unsupervised manner using a statistical active grid based on the minimization of stochastic complexity. Then, discriminative features (statistical, geometric and texture parameters) are calculated for each extracted region in order to classify them in a semi-supervised manner; a hierarchical approach is adopted. In practice, the proposed algorithm provides an initial land-use classification as well as confidence measures for each region. This initial classification can be used as an aid to image interpretation or as a source of information for further processing.
Bonneau, Stephane. "Chemins minimaux en analyse d’images : nouvelles contributions et applications à l’imagerie biologique." Paris 9, 2006. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=2006PA090062.
First introduced in image analysis to globally minimize the geodesic active contour functional, minimal path techniques are robust tools for extracting open and closed contours from images. Minimal paths are computed by solving the Eikonal equation on a discrete grid with an efficient algorithm called Fast Marching. In this thesis, we present novel approaches based on minimal paths. The interest of these techniques is illustrated by the analysis of biological images. This thesis consists of three parts. In the first part, we review the relevant literature on boundary-based deformable models and minimal path techniques. In the second part, we propose a new approach for automatically detecting and tracking, in sequences of 2D fluorescence images, point-like objects which are intermittently visible. The trajectories of moving objects, considered as minimal paths in a spatiotemporal space, are retrieved using a perceptual grouping approach based on front propagation in the 2D+T volume. The third part addresses the problem of surface extraction in 3D images. First, we introduce a front propagation approach to distribute a set of points on a closed surface. Then, we propose a method to extract a surface patch from a single point by constructing a dense network of minimal paths. We finally present an extension of this method to extract a closed surface, in a fast and robust manner, from a few points lying on the surface.
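Minimal paths via Fast Marching work in two steps: compute the travel time of a front through a speed map, then backtrack from the end point by descending the travel-time map. A compact sketch follows, assuming the scikit-fmm package is available; the descent step is deliberately simplified to a discrete neighbor search.

```python
import numpy as np
import skfmm

speed = np.ones((100, 100))
speed[40:60, 20:80] = 0.1                 # slow region the path should avoid
phi = np.ones((100, 100))
phi[10, 10] = -1                          # source point (zero level set)
t = skfmm.travel_time(phi, speed)         # solves |grad t| * speed = 1

# Backtrack from the target by steepest descent on the travel-time map.
path, pos = [], (90, 90)
for _ in range(1000):
    path.append(pos)
    if t[pos] == t.min():                 # reached the source
        break
    i, j = pos
    nbrs = [(a, b) for a in range(i - 1, i + 2) for b in range(j - 1, j + 2)
            if 0 <= a < 100 and 0 <= b < 100]
    pos = min(nbrs, key=lambda q: t[q])   # move to the lowest-time neighbor
print(len(path), "steps from target back to source")
```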
Bricq, Stéphanie. "Segmentation d’images IRM anatomiques par inférence bayésienne multimodale et détection de lésions." Université Louis Pasteur (Strasbourg) (1971-2008), 2008. https://publication-theses.unistra.fr/public/theses_doctorat/2008/BRICQ_Stephanie_2008.pdf.
Medical imaging provides a growing amount of data. Automatic segmentation has become a fundamental step in the quantitative analysis of these images for many brain diseases such as multiple sclerosis (MS). We focused our study on brain MRI segmentation and MS lesion detection. First, we proposed a method of brain tissue segmentation based on hidden Markov chains that takes neighbourhood information into account. This method can also include prior information provided by a probabilistic atlas, and takes into account the artefacts appearing in MR images. We then extended this method to detect MS lesions thanks to a robust estimator and prior information provided by a probabilistic atlas. We have also developed a 3D MRI segmentation method based on statistical active contours to refine the lesion segmentation. The results were compared with other existing segmentation methods and with manual expert segmentations.
Randrianasoa, Tianatahina Jimmy Francky. "Représentation d'images hiérarchique multi-critère." Thesis, Reims, 2017. http://www.theses.fr/2017REIMS040/document.
Segmentation is a crucial task in image analysis. Novel acquisition devices bring new images with higher resolutions, containing more heterogeneous objects. It is also becoming easier to obtain many images of an area from different sources. This phenomenon is encountered in many domains (e.g. remote sensing, medical imaging), making the use of classical image segmentation methods difficult. Hierarchical segmentation approaches provide solutions to such issues. In particular, the Binary Partition Tree (BPT) is a hierarchical data structure modeling image content at different scales. It is built in a mono-feature way (i.e. one image, one metric) by progressively merging similar connected regions. However, the metric has to be carefully chosen by the user, and the handling of several images is generally dealt with by gathering the information provided by the various spectral bands into a single metric. Our first contribution is a generalized framework for BPT construction in a multi-feature way. It relies on a strategy establishing a consensus between several metrics, allowing us to obtain a unified hierarchical segmentation space. Surprisingly, few works have been devoted to the evaluation of hierarchical structures. Our second contribution is a framework for evaluating the quality of BPTs, relying on both intrinsic and extrinsic quality analysis based on ground-truth examples. We also discuss the use of this evaluation framework both for evaluating the quality of a given BPT and for determining which BPT should be built for a given application. Experiments using satellite images emphasize the relevance of the proposed frameworks in the context of image segmentation.
Zhang, Peng. "Contribution des fonctions de croyance à la segmentation d’images tomodensitométriques thoraciques en radiothérapie conformationnelle." Rouen, 2006. http://www.theses.fr/2006ROUES054.
Image segmentation is a process aiming at partitioning an image into several regions. Given the importance of imaging in medicine, segmentation has many applications; one of them is the delineation of organs at risk and tumour volumes in conformal radiotherapy. However, image segmentation problems are complex due to the diversity of imaging modalities and of the biological tissues to segment, the weak contrast sometimes encountered, and the presence of noise. Considering this diversity, we present in this thesis the study, design and development of tools for the segmentation of computed tomography (CT) images for conformal radiotherapy. The essential contribution of this work rests on the application of belief function theory to the segmentation of thoracic CT images; the method is named "credibilist labeling". To integrate contextual information, we propose to take into account the spatial correlations between voxels in the data modeling by fusing the information coming from the neighbors. Each voxel is considered as a particular view, which thus brings partial information, supplemented or confirmed by neighboring voxels in the same slice or in neighboring slices. The major interest of belief functions is their capacity to deal with uncertain and imprecise data, such as grey levels in medical imagery, but also the definition of a mathematical framework allowing the fusion of information coming from several sources, here the neighboring voxels. Based on this method, we developed the software SIPEC (Segmentation d'Image Par Etiquetage Crédibiliste), allowing delineation of the patient's contour and segmentation of the lungs and the spinal canal. We compared this software with the clinical software ECLIPSE™ (Varian, v7.1.3) on 30 sets of thoracic CT images. The results show that the tools used clinically for segmentation rely on an algorithm that is much simpler and faster than SIPEC (for example, the mean duration for the automatic segmentation of the lungs: ECLIPSE, 3 minutes vs. SIPEC, 10 minutes), but to the detriment of segmentation quality. Thus the number (for example, the mean number of manual corrections for the lungs: ECLIPSE, 43 vs. SIPEC, 5) and the extent of the manual corrections are considerably smaller with SIPEC than with the clinical software. This results in comparable total segmentation durations for the three volumes between the two tools (ECLIPSE, 23 minutes vs. SIPEC, 20 minutes), with the major advantage for SIPEC of being automatic and requiring few manual corrections. In addition, the processing time could be further reduced by improving the algorithm and the speed of the processor. The prospects for this work are numerous: the segmentation technique could be applied to other imaging modalities (MRI, PET, etc.), as well as to multimodal imaging (PET-CT or others) by modeling the fusion of multimodal information.
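At the heart of the belief-function machinery is Dempster's rule of combination. Here is a minimal sketch combining two mass functions over the same frame of discernment; it shows the generic theory, not the SIPEC implementation.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts: frozenset -> mass) with
    Dempster's rule; conflict (empty intersections) is renormalized."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two voxel 'views' over the frame {lung, spine}: partial, conflicting evidence.
L, S = frozenset({"lung"}), frozenset({"spine"})
m1 = {L: 0.6, L | S: 0.4}          # mass on 'lung' and on ignorance
m2 = {S: 0.3, L | S: 0.7}
print(dempster_combine(m1, m2))
```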
Morio, Jérôme. "Analyse d’images PolInSAR à l’aide de techniques statistiques et issues de la théorie de l’information." Aix-Marseille 3, 2007. http://www.theses.fr/2007AIX30052.
High-resolution airborne synthetic aperture radar (SAR) sensors such as RAMSES, operated by the French Aerospace Lab (ONERA), are able to acquire multicomponent PolInSAR images carrying polarimetric and/or interferometric information on the scene illuminated by the radar. This type of image thus has notable environmental and agricultural applications (crop monitoring, forest height estimation). The complexity of PolInSAR images requires the implementation of elaborate methods based on statistics and on information theory (an image-partition technique based on the minimization of stochastic complexity, Shannon entropy, the Bhattacharyya distance) in order to estimate the contributions of radiometry, polarimetry and interferometry to soil characterization, and to determine which system components bring the most useful information depending on the application.
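Among the information-theoretic tools cited, the Bhattacharyya distance between two multivariate Gaussians has a closed form. The short numpy sketch below implements that general formula, applied here to toy data rather than PolInSAR statistics.

```python
import numpy as np

def bhattacharyya_gaussian(mu1, cov1, mu2, cov2):
    """Closed-form Bhattacharyya distance between two Gaussians."""
    cov = 0.5 * (cov1 + cov2)
    diff = mu1 - mu2
    term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
    term2 = 0.5 * np.log(np.linalg.det(cov) /
                         np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return term1 + term2

mu1, mu2 = np.array([0.0, 0.0]), np.array([1.0, 1.0])
c1 = np.eye(2)
c2 = np.array([[2.0, 0.3], [0.3, 1.0]])
print(bhattacharyya_gaussian(mu1, c1, mu2, c2))
```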
Retornaz, Thomas. "Détection de textes enfouis dans des bases d’images généralistes : un descripteur sémantique pour l’indexation." Paris, ENMP, 2007. http://www.theses.fr/2007ENMP1511.
Multimedia databases, both personal and professional, are continuously growing, and the need for automatic solutions is becoming pressing. The effort devoted by the research community to content-based image indexing is also growing, but the semantic gap is difficult to cross: the low-level descriptors used for indexing are not efficient enough for ergonomic manipulation of large, generic image databases. The text present in a scene is usually linked to the image's semantic context and constitutes a relevant descriptor for content-based image indexing. In this thesis we present an approach to the automatic detection of text in natural scenes which handles text of different sizes, orientations and backgrounds. The system uses a non-linear scale space based on the ultimate opening operator (a morphological numerical residue). In a first step, we study the action of this operator on real images and propose solutions to overcome its intrinsic limitations. In a second step, the operator is used in a text-detection framework which additionally contains various text-categorisation tools. The robustness of our approach is proven on two different datasets. First, we took part in the ImagEval evaluation campaign, and our approach was ranked first in the text localisation contest. Second, we produced results (using the same framework) on the free ICDAR dataset; the results obtained are comparable with the state of the art. Lastly, a demonstrator was built for EADS; for confidentiality reasons, this work could not be included in this manuscript.
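The ultimate opening is the pointwise maximum residue between successive openings of increasing size, together with the size at which that maximum occurs. A compact, direct illustration follows (an unoptimized definition; efficient implementations use max-trees).

```python
import numpy as np
from skimage.morphology import opening

def ultimate_opening(img, max_size=20):
    """Ultimate opening: max residue between consecutive openings,
    plus the size at which that max occurs at each pixel."""
    residue = np.zeros_like(img, dtype=float)
    scale = np.zeros_like(img, dtype=int)
    prev = img.astype(float)                 # opening by a 1x1 SE is the identity
    for n in range(2, max_size + 1):
        cur = opening(img, np.ones((n, n))).astype(float)
        r = prev - cur
        update = r > residue
        residue[update], scale[update] = r[update], n - 1
        prev = cur
    return residue, scale

img = np.zeros((60, 60))
img[10:20, 10:20] = 100                      # bright square of size 10
residue, scale = ultimate_opening(img)
print(residue.max(), scale[15, 15])          # strong residue at the square's scale
```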
Desquesnes, Xavier. "Propagation de fronts et p-laplacien normalisé sur graphes : algorithmes et applications au traitement d’images et de données." Caen, 2012. http://www.theses.fr/2012CAEN2073.
This work deals with the transcription of continuous partial differential equations to arbitrary discrete domains by exploiting the formalism of partial difference equations defined on weighted graphs. In the first part, we propose a transcription of the normalized p-Laplacian operator to graph domains as a linear combination of the non-local infinity Laplacian and the normalized Laplacian (both in their discrete versions). This adaptation can be considered as a new class of p-Laplacian operators on graphs that interpolates between the non-local infinity Laplacian and the normalized Laplacian. In the second part, we present an adaptation of front propagation equations to weighted graphs. These equations are obtained by transcribing the continuous level-set method into a discrete formulation on the graph domain. Beyond the transcription itself, we propose a very general formulation and efficient algorithms for the simultaneous propagation of several fronts on a single graph. Both the transcription of the p-Laplacian operator and the level-set method enable many applications in image segmentation and data clustering, which are illustrated in this manuscript. Finally, in the third part, we present a concrete application of the different tools proposed in the two previous parts to computer-aided diagnosis. We also present the Antarctic software that was developed during this PhD.
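The interpolation described here can be written as a simple iterative scheme on a graph: each update mixes the normalized-Laplacian average of the neighbors with the infinity-Laplacian midpoint of the extreme neighbors. A toy sketch on an unweighted path graph, interpolating values between two labeled endpoints (the weights and the parameter alpha are hypothetical):

```python
import numpy as np

def p_laplacian_interpolate(values, labeled, alpha=0.5, iters=2000):
    """Iterate u <- alpha*(min+max)/2 + (1-alpha)*mean over neighbors,
    a discrete mix of infinity-Laplacian and normalized-Laplacian terms."""
    u = values.copy()
    for _ in range(iters):
        for i in range(len(u)):
            if labeled[i]:
                continue
            nbrs = [u[j] for j in (i - 1, i + 1) if 0 <= j < len(u)]
            u[i] = alpha * 0.5 * (min(nbrs) + max(nbrs)) \
                 + (1 - alpha) * np.mean(nbrs)
    return u

u = np.zeros(11)
labeled = np.zeros(11, dtype=bool)
u[0], u[-1] = 0.0, 1.0
labeled[0], labeled[-1] = True, True
print(np.round(p_laplacian_interpolate(u, labeled), 3))  # linear ramp
```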
Giraud, Remi. "Algorithmes de correspondance et superpixels pour l’analyse et le traitement d’images." Thesis, Bordeaux, 2017. http://www.theses.fr/2017BORD0771/document.
This thesis focuses on several aspects of image analysis and processing with non-local methods. These methods are based on the redundancy of information that occurs in other images, and use matching algorithms, usually patch-based, to extract and transfer information from the example data. These approaches are widely used by the computer vision community, but are generally limited by the computational time of the matching algorithm, applied at the pixel scale, and by the need for preprocessing or learning steps in order to use large databases. To address these issues, we propose several general methods that are fast, require no learning, and can be easily applied to different image analysis and processing tasks on natural and medical images. We introduce a matching algorithm that quickly extracts patches from a large library of 3D images, which we apply to medical image segmentation. To use a presegmentation into superpixels, which reduces the number of image elements in a way that is similar to patches, we present a new superpixel neighborhood structure. This novel descriptor enables superpixels to be used efficiently in non-local approaches. We also introduce an accurate and regular superpixel decomposition method. We show how to evaluate this regularity in a robust manner, and that this property is necessary to obtain good superpixel-based matching performance.
Robinault, Lionel. "Mosaïque d’images multi résolution et applications." Thesis, Lyon 2, 2009. http://www.theses.fr/2009LYO20039.
The thesis considers the use of motorized cameras with 3 degrees of freedom, commonly called PTZ cameras. The orientation of such cameras is controlled by two angles: the panorama angle (θ) describes the rotation around a vertical axis, and the tilt angle (ϕ) refers to rotation along a meridian line. Theoretically, these cameras can cover an omnidirectional field of view of 4π sr. In practice, the panorama angle and especially the tilt angle are limited. In addition to controlling the camera's orientation, it is also possible to control the focal length, providing an additional degree of freedom. Compared to other hardware, PTZ cameras thus make it possible to build a panorama of very high resolution. A panorama is a wide representation of a scene built from a collection of images. The first stage in the construction of a panorama is the acquisition of the various images. To this end, we carried out a theoretical study to determine the optimal paving of the sphere with rectangular surfaces that minimizes the number of overlap zones. This study enables us to calculate an optimal camera trajectory and to limit the number of images necessary to represent the scene. We also propose various processing techniques which appreciably improve the rendering of the mosaic image and correct most of the defects related to assembling a collection of images acquired with differing capture parameters. A significant part of our work was devoted to automatic image registration in real time, i.e. in under 40 ms. The technology we developed makes it possible to obtain a particularly precise registration with a computation time of about 4 ms (AMD, 1.8 GHz). Our research leads directly to two proposed applications for the tracking of moving objects. The first involves the use of a PTZ camera and a spherical mirror; the combination of these two elements makes it possible to detect any moving object in the scene and then to focus on one of them. Within the framework of this application, we propose an automatic calibration algorithm for the system. The second application exploits only the PTZ camera and allows the segmentation and tracking of objects in the scene while the camera is moving. Compared to traditional motion-detection applications with a PTZ camera, our approach differs in that it computes a precise segmentation of the objects, allowing their classification.
Boudjelaba, Kamal. "Contribution à la conception des filtres bidimensionnels non récursifs en utilisant les techniques de l’intelligence artificielle : application au traitement d’images." Thesis, Orléans, 2014. http://www.theses.fr/2014ORLE2015/document.
The design of finite impulse response (FIR) filters can be formulated as a non-linear optimization problem reputed to be difficult for conventional approaches. In order to optimize the design of FIR filters, we explore several stochastic methodologies capable of handling large search spaces. We propose a new genetic algorithm in which some innovative concepts are introduced to improve convergence and make its use easier for practitioners. The key point of our approach stems from the capacity of the genetic algorithm (GA) to adapt its genetic operators during the evolution while remaining simple and easy to implement. Particle swarm optimization (PSO) is then proposed for FIR filter design. Finally, a hybrid genetic algorithm (HGA) is proposed for the design of digital filters, composed of a pure genetic process and a dedicated local search. Our contribution seeks to address the current challenge of democratizing the use of GAs for real optimization problems. Experiments performed with various types of filters highlight the recurrent contribution of hybridization in improving performance. The experiments also reveal the advantages of our proposal compared to more conventional filter-design approaches and to some reference GAs in this field of application.
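A bare-bones genetic algorithm for low-pass FIR design gives the flavor of the approach: individuals are coefficient vectors and fitness is the deviation from an ideal frequency response. This is a deliberately simple sketch, with none of the adaptive operators or hybridization proposed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
N_TAPS, POP, GENS = 21, 60, 300
w = np.linspace(0, np.pi, 128)
ideal = (w <= 0.4 * np.pi).astype(float)      # ideal low-pass response

def response(h):
    # |H(w)| of a filter h, via direct evaluation of the DTFT.
    n = np.arange(N_TAPS)
    return np.abs(np.exp(-1j * np.outer(w, n)) @ h)

def fitness(h):
    return -np.mean((response(h) - ideal) ** 2)

pop = rng.normal(0, 0.1, (POP, N_TAPS))
for _ in range(GENS):
    scores = np.array([fitness(h) for h in pop])
    parents = pop[np.argsort(scores)[-POP // 2:]]          # truncation selection
    kids = parents[rng.integers(0, POP // 2, POP // 2)] \
         + rng.normal(0, 0.02, (POP // 2, N_TAPS))         # mutation only
    pop = np.vstack([parents, kids])
print("best MSE:", -max(fitness(h) for h in pop))
```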
Castro, Miguel. "Navigation endovasculaire augmentée : mise en correspondance d’images pré- et per-opératoires." Rennes 1, 2010. http://www.theses.fr/2010REN1S183.
Our work lies within the scope of endovascular navigation (catheterization, stenting, etc.), where complex difficulties arise from the deformation of anatomical structures when relatively rigid tools (rigid guidewires, stents) are introduced. The contribution of this thesis concerns the optimal use of intraoperative data in order to establish a correspondence between preoperative (3D CT) and intraoperative (2D angiography) data within an augmented reality system for angionavigation. This correspondence is based on the decomposition of the 3D/2D transformation (a projective transformation plus a 3D/3D rigid transformation) and on algorithms for estimating the (intrinsic and extrinsic) parameters under the constraints of the interventional environment. The approach involves a calibration process for the intrinsic parameters of the C-arm, and a decomposition of the 3D/3D rigid transformation into two transformations whose extrinsic parameter sets are obtained, for the first, by a registration method restricted to the plane of the operating table and, for the second, either given by the imaging device or obtained through a 3D optical tracking system. The readjustment of the 3D model describing the initial, non-deformed preoperative patient data is handled by a geometric method that estimates the deformations due to tool/tissue interactions, based on intraoperative observations. Acquisitions on phantoms under clinical conditions, as well as real data, were used to evaluate the proposed approach.
Ruggieri, Vito Giovanni. "Analyse morphologique des bioprothèses valvulaires aortiques dégénérées par segmentation d’images TDM." Rennes 1, 2012. https://ecm.univ-rennes1.fr/nuxeo/site/esupversions/2be5652f-691e-4682-a0a0-e8db55bb95d9.
The aim of the study was to assess the feasibility of CT-based 3D analysis of degenerated aortic bioprostheses, to ease their morphological assessment. This could be helpful during regular follow-up and for case selection, improved planning and mapping of valve-in-valve procedures. The challenge lay in leaflet enhancement, because the CT images are highly noisy. Contrast-enhanced ECG-gated CT scans were performed in patients with degenerated aortic bioprostheses before reoperation (in-vivo images). Different methods for noise reduction were tested and proposed. 3D reconstruction of the bioprosthesis components was achieved using stick-based region segmentation methods. After reoperation, the segmentation methods were applied to CT images of the explanted prostheses (ex-vivo images). Noise reduction obtained with an improved stick filter showed the best results in terms of signal-to-noise ratio compared with anisotropic diffusion filters. All segmentation methods applied to the in-vivo images allowed 3D reconstruction of the bioprosthetic leaflets. The explanted-bioprosthesis CT images were also processed and used as a reference. Qualitative analysis revealed a good concordance between the in-vivo images and the bioprosthesis alterations. The results of the different methods were compared by means of volumetric criteria and discussed. ECG-gated CT images of aortic bioprostheses need preprocessing to reduce noise and artifacts in order to enhance the prosthetic leaflets. Stick-based region segmentation seems to provide an interesting approach for the morphological characterization of degenerated bioprostheses.
Atié, Michèle. "Perception des ambiances lumineuses d'architectures remarquables : analyse des impressions en situation réelle et à travers des photographies omnidirectionnelles dans un casque immersif." Electronic Thesis or Diss., Ecole centrale de Nantes, 2024. http://www.theses.fr/2024ECDN0047.
This thesis is at the crossroads of the fields of luminous atmospheres, architectural pedagogy, perception and immersion. It focuses on the design and implementation of a new experimental methodology for evaluating the ability of HDR stereoscopic omnidirectional static photographs, projected in an immersive head-mounted display (HMD), to faithfully reproduce subjective impressions of luminous atmospheres experienced in reference architectural places. Specific consideration is given to the impact of tone mapping operators (TMOs). Our methodology involves several steps: designing a grid for analyzing the luminous atmospheres of iconic places based on expert judgement; implementing in situ data collection to assess luminous atmospheres (questionnaire, light measurements, HDR omnidirectional photographic recordings); and implementing a method for assessing luminous atmospheres in an HMD. The results provide knowledge about the characteristics of the in situ luminous atmospheres of seven iconic buildings and the perceptual fidelity of each luminous atmosphere's impression in the HMD, depending on the TMO. The findings also highlight the relationship between the impressions selected by the experts and those assessed in situ and in the HMD. This knowledge is useful for future pedagogical applications in architecture.
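Tone mapping operators like those compared in the thesis reduce HDR radiance maps to displayable ranges; the global Reinhard operator is a standard example, shown here on synthetic data (the thesis evaluates TMOs perceptually, in an HMD).

```python
import numpy as np

def reinhard_global(lum, a=0.18, eps=1e-6):
    """Global Reinhard tone mapping: scale by the log-average luminance,
    then compress with L / (1 + L)."""
    log_avg = np.exp(np.mean(np.log(lum + eps)))   # 'key' of the scene
    scaled = a * lum / log_avg
    return scaled / (1.0 + scaled)

hdr = np.exp(np.random.default_rng(0).normal(0, 2, (256, 256)))  # toy HDR map
ldr = reinhard_global(hdr)
print(float(hdr.max()), "->", float(ldr.max()))   # compressed into [0, 1)
```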
Deslandes, François. "Modélisation de la dynamique des corps lipidiques chez Arabidopsis thaliana." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLA042.
The aim of this PhD project is to dissect the mechanisms involved in the phenotype of lipid droplets, both at the cellular level and at the seed level. Studying a model of a lipid droplet embedded in the leaflets of a lipid bilayer reveals the existence of a critical volume at which the shape of the droplet breaks from a symmetrical elongated lens to a spherical protrusion. This budding mechanism provides new insights into the formation of lipid droplets. Segmentation and tracking of lipid droplets in time-lapse confocal microscopy images allow the detection of lipid-droplet fusion events. A method based on the conservation of volume during the fusion event is developed and applied to detect fusion events in several A. thaliana embryos. A model of the coalescence of lipid droplets during the development of early A. thaliana embryos is developed. The fusion rate is estimated and compared across different wild-type and mutant embryos. The estimation is based on lipid-droplet volumes measured from images at different stages of embryo development.
Burte, Victor. "Étude des stratégies de mouvement chez les parasitoïdes du genre Trichogramma : apports des techniques d’analyse d’images automatiques." Electronic Thesis or Diss., Université Côte d'Azur (ComUE), 2018. http://theses.univ-cotedazur/2018AZUR4223.
Parasitoids of the genus Trichogramma are oophagous micro-hymenoptera widely used as biological control agents. My PhD concerns the phenotypic characterization of this auxiliary's movement strategies, specifically the movements involved in the exploration of space and the search for host eggs. These phenotypes are of great importance in the life cycle of Trichogramma, and are also characters of interest for evaluating their effectiveness in biological control programs. Since Trichogramma are very small organisms (less than 0.5 mm) that are difficult to observe, the study of their movement can take advantage of technological advances in image acquisition and automatic image analysis. This is the strategy I followed, combining a methodological development component and an experimental component. In a first, methodological part, I present the three main types of image analysis methods that I used and helped to develop during my thesis. In a second part, I present three applications of these methods to the study of Trichogramma movement. First, we characterized in the laboratory the orientation preferences (phototaxis, geotaxis and their interaction) during egg laying in 22 Trichogramma strains belonging to 6 species. This type of study requires counting a large number of eggs (healthy and parasitized), so a new dedicated tool was developed in the form of an ImageJ/FIJI plugin made available to the community. This flexible plugin automates and speeds up the tasks of counting and evaluating parasitism rates, making larger-scale screenings possible. Great variability could be highlighted within the genus, including between strains of the same species. This suggests that, depending on the plant layer to be protected (grass, shrub, tree), it would be possible to select Trichogramma strains to optimize their exploitation of the targeted area. Second, we characterized the exploration strategies (velocities, trajectories, etc.) of a set of 22 strains from 7 Trichogramma species to look for traits specific to each strain or species. I implemented a method for tracking a group of Trichogramma on video recorded over short time scales using the Ctrax software and R scripts. The aim was to develop a protocol for high-throughput characterization of the movement of Trichogramma strains and to study the variability of these traits within the genus. Finally, we studied the propagation dynamics of groups of Trichogramma of the species T. cacoeciae, developing an innovative experimental device to cover scales of time and space greater than those usually imposed by laboratory constraints. Through the use of pictures taken at very high resolution and low frequency, and a dedicated analysis pipeline, the diffusion of individuals can be followed in a tunnel more than 6 meters long over a whole day. In particular, I was able to identify the effects of population density and of the distribution of resources on the propagation dynamics (diffusion coefficient) and the parasitism efficiency of the tested strain.
Journet, Nicholas. "Analyse d’images de documents anciens : une approche texture." La Rochelle, 2006. http://www.theses.fr/2006LAROS178.
Full textMy PhD thesis deals with the indexing of images of old documents. A corpus of old documents has specific characteristics: the content (text and image) as well as the layout information are highly variable. It is therefore not possible to work on such a corpus as is usually done with contemporary documents. Indeed, the first tests that we carried out on the corpus of the “Centre d’Etude de la Renaissance”, with which we work, confirmed that the traditional, model-driven approaches are not very efficient, because it is impossible to make assumptions about the physical or logical structure of old documents. We also noted the lack of tools allowing the indexing of large databases of old document images. In this PhD work, we propose a new generic method for characterizing the contents of old document images. This characterization is carried out using a multiresolution study of the textures contained in the document images. By constructing signatures related to the frequencies and orientations of the various parts of a page, it is possible to extract, compare or identify different kinds of semantic elements (reference letters, illustrations, text, layout, etc.) without making any assumptions about the physical or logical structure of the analyzed documents. This texture information is the basis for indexing tools for large databases of old document images.
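A frequency-and-orientation texture signature of this kind can be sketched with a Gabor filter bank (a hedged illustration; the exact frequencies, orientations and signature layout used in the thesis are not reproduced here):

```python
import numpy as np
from skimage.filters import gabor

def texture_signature(gray, frequencies=(0.1, 0.2, 0.4),
                      orientations=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    """Mean Gabor response magnitude for each (frequency, orientation)
    pair, yielding a small multiresolution texture signature."""
    signature = []
    for f in frequencies:
        for theta in orientations:
            real, imag = gabor(gray, frequency=f, theta=theta)
            signature.append(np.mean(np.hypot(real, imag)))
    return np.array(signature)
```

Signatures of different page regions can then be compared with any vector distance to group text, illustrations or ornaments without a structural model of the page.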
Derache, François. "Segmentation d’images échographiques pour applications spatiale." Electronic Thesis or Diss., Université Paris Cité, 2022. http://www.theses.fr/2022UNIP7330.
Full textMedical ultrasound scanning is a common and relevant medical imaging modality for medical monitoring and research. Non-invasive and low-cost, it also provides real-time images without ionizing radiation. As space exploration takes us further away from Earth, to the Moon, Mars and beyond, communication delay will impact real-time interaction (up to 40 min for Mars), and tele-operated systems will no longer be an option, especially for astronaut medical monitoring and research protocols. Communication delays with the crew will grow, bandwidth will become scarce, and many other challenges will appear. Furthermore, human physiological response will be a new field of investigation, to make sure that astronauts evolve safely and do not develop any pathology that could affect mission performance. To facilitate long-distance sonography, allowing an inexperienced user to perform an organ scan for later analysis by a professional, the next step is an autonomous ultrasound device integrating automated detection of targeted organs. Our method offers an innovative solution to identify, analyze and segment organs in ultrasound scans based on the study of greyscale values through a one-dimensional approach. The method consists in analyzing a volume of images captured during volumetric scans, identifying the organ, and displaying its long- and short-axis views. The method will allow a distant expert sonographer to deliver a reliable medical diagnosis remotely. Our method analyzes, detects and segments organs in ultrasound scans based on the greyscale variation along a one-dimensional segment in a 3D volume acquired with a common 2D scanning probe.
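The one-dimensional idea can be pictured as follows: sample the greyscale values of the volume along a line segment and look for large gradients as candidate organ boundaries (a toy sketch of the principle; the thesis' actual detection criteria are more elaborate):

```python
import numpy as np

def profile_along_segment(volume, p0, p1, n_samples=200):
    """Sample greyscale values of a 3D volume along the segment
    p0 -> p1 (nearest-neighbour sampling for simplicity)."""
    pts = np.linspace(np.asarray(p0, float), np.asarray(p1, float),
                      n_samples)                       # (n, 3) coordinates
    idx = np.clip(np.round(pts).astype(int), 0,
                  np.array(volume.shape) - 1)
    return volume[idx[:, 0], idx[:, 1], idx[:, 2]]

def boundary_candidates(profile, k=2.0):
    """Indices where the 1D gradient exceeds mean + k * std."""
    g = np.abs(np.gradient(profile.astype(float)))
    return np.where(g > g.mean() + k * g.std())[0]
```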
Salmeron, Eva. "Mise en coïncidence automatique des contours extraits d’images aériennes et d’éléments cartographiques." Compiègne, 1986. http://www.theses.fr/1986COMPD018.
Full text
Vulgarakis Minov, Sofija. "Integration of imaging techniques for the quantitative characterization of pesticide sprays." Thesis, Dijon, 2015. http://www.theses.fr/2015DIJOS068/document.
Full textIn recent years, advances in plant protection have contributed considerably to increasing crop yields in a sustainable way. Easy to apply and rather inexpensive, pesticides have proven to be very efficient. However, when pesticides are applied to crops, some of the spray may not reach the target but move outside the intended spray area. This can cause serious economic and environmental problems. Most pesticides are applied using agricultural sprayers. These sprayers use hydraulic nozzles which break the liquid into droplets with a wide range of droplet sizes and velocities and determine the spray pattern. Small droplets are prone to wind drift, while large droplets can run off from the target surface and deposit on the soil. Therefore, efforts are being undertaken to achieve a more sustainable use of pesticides, which is increasingly regulated by international environmental laws. One of the main challenges is to reduce spray losses and maximize spray deposition and efficacy by improving the spray characteristics and the spray application process. Because the mechanisms of droplets leaving a hydraulic spray nozzle are very complex and difficult to quantify or model, there is a need for accurate quantification techniques. Recent improvements in digital image processing and in the sensitivity of imaging systems, together with cost reductions, have increased the interest in high-speed (HS) imaging techniques for agricultural applications in general and for pesticide applications in particular. This thesis focused on the development and application of high-speed imaging techniques to measure micro (droplet size and velocity) and macro (spray angle and shape, liquid sheet length) spray characteristics. The general aim was to show that the spray characteristics of agricultural spray nozzles can be measured correctly with the developed imaging techniques in a non-intrusive way. After a review of the spray application process and techniques for spray characterization (Chapter 2), two image acquisition systems were developed in Chapter 3 based on single droplet experiments using a high-speed camera and a piezoelectric droplet generator. 58 combinations of lenses, light sources, diffusers, and exposure times were tested using shadowgraph (background) imaging and evaluated based on image quality parameters (signal-to-noise ratio, entropy ratio and contrast ratio), light stability, overexposure ratio and the accuracy of the droplet size measurement. This resulted in the development of two image acquisition systems for measuring the macro and micro spray characteristics. The HS camera with a macro video zoom lens at a working distance of 143 mm, with a larger field of view (FOV) of 88 mm x 110 mm, in combination with a halogen spotlight and a diffuser, was selected for measuring the macro spray characteristics (spray angle, spray shape and liquid sheet length). The optimal set-up for measuring micro spray characteristics (droplet size and velocity) consisted of a high-speed camera with a 6 μs exposure time, a microscope lens at a working distance of 430 mm resulting in a FOV of 10.5 mm x 8.4 mm, and a xenon light source used as a backlight without diffuser. In Chapter 4, image analysis and processing algorithms were developed for measuring single droplet characteristics (size and velocity), and different approaches to image segmentation were presented.
With the set-up for micro spray characterization and using these dedicated image analysis algorithms (Chapter 4), measurements using a single droplet generator in droplet-on-demand (DOD) and continuous mode were performed in Chapter 5. The effects of the operating parameters, including voltage pulse width and pulse amplitude, with 4 nozzle orifice sizes (261 μm, 123 μm, 87 μm and 67 μm), on droplet diameter and droplet velocity have been characterized (...)
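For the droplet-sizing step, a typical shadowgraph measurement reduces to thresholding the backlit image and converting each droplet silhouette to an equivalent diameter (a minimal OpenCV sketch under assumed calibration; `um_per_px` is a placeholder for the optical calibration factor, not a value from the thesis):

```python
import cv2
import numpy as np

def droplet_diameters(gray, um_per_px=8.0, min_area_px=10):
    """Equivalent-circle diameters (micrometers) of dark droplet
    silhouettes in a backlit (shadowgraph) 8-bit image."""
    # Droplets appear dark on a bright background: inverted Otsu threshold.
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    diameters = []
    for c in contours:
        area = cv2.contourArea(c)
        if area >= min_area_px:
            d_px = np.sqrt(4.0 * area / np.pi)  # equivalent-circle diameter
            diameters.append(d_px * um_per_px)
    return diameters
```

Velocities follow from matching each droplet between two frames and dividing the displacement by the inter-frame time.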
Bouraoui, Bessem. "Segmentation automatique de l’arbre coronarien à partir d’images angiographiques 3D+T de scanner." Strasbourg, 2009. http://www.theses.fr/2009STRA6171.
Full textThe objective of this thesis is to automatically segment the coronary arteries in X-ray CT images. The images comprise not only the heart but the whole trunk of the body. A first stage therefore consists in removing every structure other than the heart from the image. An extraction of the aorta appeared necessary to us; a localization of the seeds of the coronary arteries is then carried out on the wall of this aorta. Once these seeds are detected, region growing is applied, with an acceptance criterion based on the Hit-or-Miss transform. We relied on a mathematical morphology operator, the Hit-or-Miss transform, and combined its gray-level extension with a fuzzy variant, which constitutes our contribution to mathematical morphology. This work contributes to the evolution and development of vascular segmentation on two levels. In practical terms, three fully automatic algorithms were developed: the first segments the heart, the second segments the aorta, and the third segments the coronary arteries. These algorithms give encouraging results, validated by an expert in cardiology, with 90% correct results; the remaining 10% correspond to images of bad quality. In terms of methodology, this work introduced a new segmentation approach, consisting in guiding image processing tools with a priori knowledge, such as anatomical knowledge.
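The seed-plus-region-growing logic can be sketched as follows (a simplified illustration in which a plain intensity test stands in for the thesis' gray-level, fuzzy Hit-or-Miss acceptance criterion; the Hounsfield range in the comment is an assumption):

```python
import numpy as np
from collections import deque

def region_grow(volume, seed, accept):
    """Grow a region from `seed` in a 3D volume, adding each
    6-connected neighbour voxel for which accept(value) is True."""
    mask = np.zeros(volume.shape, dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    offsets = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            p = (z + dz, y + dy, x + dx)
            if all(0 <= p[i] < volume.shape[i] for i in range(3)) \
               and not mask[p] and accept(volume[p]):
                mask[p] = True
                queue.append(p)
    return mask

# e.g. accept contrast-enhanced lumen in an assumed intensity range:
# mask = region_grow(ct, seed_voxel, lambda v: 150 <= v <= 600)
```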
Bediaf, Houda. "Quantification et modélisation par traitement d'images de la répartition des produits pulvérisés à l'échelle de la feuille en fonction de son état de surface et la nature du produit." Thesis, Dijon, 2016. http://www.theses.fr/2016DIJOS005/document.
Full textIn the context of agricultural spraying, reducing the amount of input has become a crucial step, particularly in viticulture. The development of precision spraying in this domain requires mastery of the spray equipment, the product, and the distribution of these products on the foliage. Much research has been done in this area, its main goal being to optimize the use of plant protection products and to significantly reduce the input quantity inside the crop. However, little research has been done on the behavior of the product directly on the foliage, which is precisely the main goal of this thesis. The first part of this report deals with the analysis of the leaf surface state, focusing on leaf surface roughness, one of the main parameters in the product adhesion process. A leaf surface analysis is performed by determining textural features extracted from microscopic images. A new roughness indicator is proposed, and spatial and frequency parameters are used to estimate and characterize leaf roughness. These parameters allow both the characterization of surface homogeneity and the detection of the presence of ribs and hairs on the leaf surface. This part thus provides a fundamental basis for understanding spray droplet behavior on the vine leaf. The second part of the thesis deals with experimental studies which aim to define and build statistical models estimating the amount of product remaining on the leaf surface or the surface occupied by droplets. These models consider different spray parameters, such as droplet size and velocity, surface tension of the product, slope angle and roughness of the leaf. They could be seen as decision-aid tools to optimize the amount sprayed and estimate the product remaining on the leaf.
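Textural features of the kind used for such roughness analysis are commonly derived from gray-level co-occurrence matrices (a generic scikit-image sketch; the thesis' own roughness indicator combines spatial and frequency parameters not reproduced here):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_8bit):
    """Contrast, homogeneity and energy of a microscopic leaf image,
    averaged over four directions at a 1-pixel distance."""
    glcm = graycomatrix(gray_8bit,
                        distances=[1],
                        angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                        levels=256, symmetric=True, normed=True)
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy")}
```

High contrast and low homogeneity would, under this reading, indicate a rougher or more ribbed/hairy surface patch.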
Chérigui, Safa. "Techniques de codage d’images basées représentations parcimonieuses de scènes et prédiction spatiale multi-patches." Thesis, Rennes 1, 2014. http://www.theses.fr/2014REN1S135/document.
Full textIn recent years, the field of video compression has advanced significantly with the appearance of the H.264/AVC standard and of its successor HEVC. Spatial prediction in these standards is based on the unidirectional propagation of neighboring pixels. Although very effective for extending patterns with uniform characteristics, this prediction performs poorly when extrapolating complex textures. This thesis aims at exploring new spatial prediction schemes to improve current intra prediction techniques, by extending these local schemes to global, multidimensional and multi-patch schemes. A hybrid prediction method based on template and block matching is first investigated. This hybrid approach is then extended to multi-patch prediction of the "Neighbor Embedding" (NE) type. The other part of this thesis is dedicated to the study of the image epitome within the scope of image compression. The idea is to exploit spatial redundancies within the original image in order to first extract a summary image containing the texture patches most representative of the image, and then use this compact representation to rebuild the original image. The concept of the epitome has been incorporated in two compression schemes; one of these algorithms breaks with traditional techniques, since the image blocks are processed, both at the encoder and decoder sides, in a spatial order that depends on the image content, in the interest of propagating image structures. This last compression algorithm also includes extended H.264 intra directional prediction modes and advanced multi-patch prediction methods. These different solutions have been integrated in an H.264/AVC encoder in order to assess their coding performance with respect to the H.264 intra modes and the state of the art for these different techniques.
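Template-matching intra prediction can be sketched as follows: the causal L-shaped neighborhood (template) of the block to predict is matched against the already-reconstructed image area, and the block adjoining the best-matching template is copied as the prediction (a simplified, single-candidate illustration of the general idea, not the thesis' hybrid or multi-patch scheme):

```python
import numpy as np

def tm_predict(recon, y, x, bs=8, tw=2, search=32):
    """Predict the bs x bs block at (y, x) from the best match of its
    causal L-shaped template; assumes y >= tw + bs and x >= tw."""
    def template(r, c):
        top = recon[r - tw:r, c - tw:c + bs]    # strip above, incl. corner
        left = recon[r:r + bs, c - tw:c]        # strip to the left
        return np.concatenate([top.ravel(), left.ravel()]).astype(float)

    target = template(y, x)
    best, best_cost = None, np.inf
    # Candidate rows stay fully above the current block so that every
    # candidate block is guaranteed to be reconstructed already.
    for r in range(max(tw, y - search), y - bs + 1):
        for c in range(max(tw, x - search),
                       min(recon.shape[1] - bs, x + search) + 1):
            cost = np.sum((template(r, c) - target) ** 2)
            if cost < best_cost:
                best, best_cost = (r, c), cost
    r, c = best
    return recon[r:r + bs, c:c + bs].copy()
```

A Neighbor-Embedding variant would instead predict the block as a weighted combination of the k best-matching patches rather than copying a single one.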
Maaloul, Boutheina. "Des algorithmes de détection d'accidents routiers par vidéo surveillance." Thesis, Valenciennes, 2018. http://www.theses.fr/2018VALE0028.
Full textAutomatic video surveillance systems have been developed to detect and analyze abnormal behaviors or situations of risk in many fields, reducing human monitoring of activities captured by cameras (security surveillance, abnormal behavior detection, etc.). One application of video surveillance is traffic monitoring. Analyzing motion on roads aims to detect abnormal traffic behavior and sudden events, especially in the case of Emergency and Disaster Management (EDM). Road accidents can cause serious injuries, affecting mostly the head and the brain, leading to lifelong disabilities and even death; each additional rescue minute can mean the difference between life and death, as revealed by the Golden Hour [Lerner et al., 2001]. Therefore, providing rapid assistance to the injured is mandatory. Moreover, if not addressed promptly, accidents may cause traffic jams, eventually leading to more accidents and even greater loss of lives and property. Many cities in France are equipped with video surveillance cameras installed on different roads and highways. Traffic monitoring is done by human operators to visualize the congestion of a road or to measure the traffic flow. The video stream of this existing camera network is delivered unprocessed to the traffic management center; thus, there is no video storage of accident scenes and no associated technology for rapid emergency management. It is therefore important to design a system able to organize an effective emergency response. This response should be based, firstly, on automatic detection by video analysis, and then on rapid notification allowing the optimization of the emergency intervention itinerary without affecting the traffic state. Our work addresses the first part of the emergency response. The objectives of this thesis are, firstly, the identification of accident scenarios and the collection of data related to road accidents; next, the design and development of video processing algorithms for the automatic detection of accidents on highways. The developed solutions use the existing fixed cameras, so as not to require significant investments in infrastructure. The core of the proposed approaches focuses on the use of the dense Optical Flow (OF) algorithm [Farnebäck, 2003] and heuristic computations for feature extraction and accident recognition. The purpose of dense OF is to estimate the motion of each pixel in a region of interest (ROI) between two given frames. At the output of the dense OF, dense features can be extracted, which perform better than features extracted at sparse points. Defining fixed thresholds for accident detection in various environments is very challenging; studying the motion at a global scale in the image therefore allows defining dynamic thresholds for accident detection using statistical computations. The proposed solution is effective and robust to noise and lighting changes.
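A minimal version of this pipeline with OpenCV's Farnebäck implementation, using a global statistical threshold on the flow magnitude (the parameter values are common defaults, not those of the thesis):

```python
import cv2
import numpy as np

def abnormal_motion(prev_gray, gray, k=3.0):
    """Dense Farneback optical flow between two frames; flags pixels
    whose motion magnitude exceeds a dynamic mean + k*std threshold."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, _ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    thresh = mag.mean() + k * mag.std()   # dynamic, image-wide threshold
    return mag > thresh                   # boolean mask of abnormal motion
```

Because the threshold is recomputed from the statistics of each frame pair, it adapts to scene-wide changes in illumination and traffic speed instead of relying on a fixed value.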
Mansouri, Abdelkhalek. "Generic heuristics on GPU to superpixel segmentation and application to optical flow estimation." Thesis, Bourgogne Franche-Comté, 2020. http://www.theses.fr/2020UBFCA012.
Full textFinding clusters in point clouds and matching graphs to graphs are recurrent tasks in the computer science domain, data analysis and image processing, and are most often modeled as NP-hard optimization problems. With the development and accessibility of cheap multiprocessors, accelerating the heuristic procedures for these tasks becomes possible and necessary. We propose parallel implementations on GPU (graphics processing unit) systems of generic algorithms applied here to image superpixel segmentation and the image optical flow problem. The aim is to provide generic algorithms, based on standard decentralized data structures, that are easy to improve and customize for many optimization problems and parallel platforms. The proposed parallel implementations include the classical k-means algorithm and an application of minimum spanning forest computation to superpixel segmentation. They also include a parallel local search procedure and a population-based memetic algorithm applied to optical flow estimation based on superpixel matching. While data operations fully exploit the GPU, the memetic algorithm operates like a coalition of processes executed in parallel on the multi-core CPU and requesting GPU resources. Images are point clouds in 3D Euclidean space (the space-gray value domain), and are also graphs to which processor grids are assigned. GPU kernels execute parallel transformations under CPU control, whose limited role only consists in stopping-criterion evaluation or sequencing transformations. The presented contribution contains two main parts. Firstly, we present tools for superpixel segmentation: a parallel implementation of the k-means algorithm, with application to 3D data, based on a cellular grid subdivision of 3D space that allows closest-point finding in constant optimal time for bounded distributions, and an application of the parallel Boruvka minimum spanning tree algorithm to compute watershed minimum spanning forests. Secondly, based on the generated superpixels and segmentation, we derive parallel optimization procedures for optical flow estimation with edge-aware filtering. The method includes construction and improvement heuristics, such as winner-take-all and parallel local search, and their embedding into a population-based metaheuristic framework. The algorithms are presented and evaluated in comparison to state-of-the-art algorithms.
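The k-means building block can be sketched sequentially as follows, with pixels treated as points in the joint (x, y, gray-value) space matching the space-gray-value domain described above (the GPU version parallelizes the assignment and update steps and accelerates nearest-center search with a cellular grid, both omitted here; intended for small images):

```python
import numpy as np

def kmeans_superpixels(gray, k=100, iters=10, seed=0):
    """Cluster pixels of a small grayscale image in (x, y, value)
    space; the resulting label image groups pixels into superpixels."""
    rng = np.random.default_rng(seed)
    h, w = gray.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.column_stack([xs.ravel(), ys.ravel(),
                           gray.ravel().astype(float)])
    centers = pts[rng.choice(len(pts), size=k, replace=False)].copy()
    for _ in range(iters):
        # assignment step (the part run as a parallel GPU kernel)
        best = np.full(len(pts), np.inf)
        labels = np.zeros(len(pts), dtype=int)
        for c in range(k):
            d = ((pts - centers[c]) ** 2).sum(axis=1)
            closer = d < best
            labels[closer], best[closer] = c, d[closer]
        # update step: each center moves to the mean of its members
        for c in range(k):
            members = pts[labels == c]
            if len(members):
                centers[c] = members.mean(axis=0)
    return labels.reshape(h, w)
```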
Shao, Clémentine. "Images and models for decision support in aortic dissection surgery." Thesis, Rennes 1, 2019. http://www.theses.fr/2019REN1S121.
Full textThe clinical decision concerning the treatment of type B aortic dissection is still controversial in some configurations. CFD approaches were investigated to assess the hemodynamics in a noninvasive way. In this context, we: i) proposed a semi-automatic method for the segmentation of aortic dissections; ii) implemented a CFD model using a novel method for the definition of the boundary conditions; iii) created reduced-order models from 3D dynamic fluid simulations. These models allow the wall shear stress and pressure to be computed in real time for different clinical scenarios.
Baconnais, Maxime. "Méthode intégrée de corrélation d’images et de corrélation d’images virtuelles." Thesis, Ecole centrale de Nantes, 2019. http://www.theses.fr/2019ECDN0069.
Full textDigital Image Correlation (DIC) is now commonly used in academic and industrial settings. Indeed, this method allows measuring the displacement field of a surface with high accuracy, good resolution and a simple experimental setup. However, image correlation does not allow accurate measurement in border areas, at sample edges and around cracks. The objective of this thesis is to use the Virtual Image Correlation (VIC) method to measure the position of the edges and improve the accuracy of DIC in these areas. The proposed strategy is based on three points: the creation of a measurement mesh adapted to the geometry, the generation of a pixel mask to remove the edge pixels, and the constrained resolution of the DIC. Different test cases on synthetic images and experimental data show the interest of the integrated method. First, knowledge of the initial position of the border allows the automatic creation of an adapted mesh. It is also shown that the simple use of a pixel mask significantly reduces the boundary error, both in synthetic and real cases. For the constrained resolution, it is shown to reduce measurement errors in synthetic cases. However, this result could not be confirmed in the application cases, because the boundary quality did not allow an accurate measurement and thus an improvement of the DIC results.
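The pixel-mask idea can be illustrated with a masked zero-normalized cross-correlation between a reference subset and a candidate subset of the deformed image, where border pixels are simply excluded from the correlation (a toy sketch of the principle, not the thesis' finite-element formulation):

```python
import numpy as np

def masked_zncc(ref_subset, def_subset, mask):
    """Zero-normalized cross-correlation computed only over pixels
    where mask is True, so edge pixels do not bias the match."""
    a = ref_subset[mask].astype(float)
    b = def_subset[mask].astype(float)
    a -= a.mean()
    b -= b.mean()
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
```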
Samrouth, Khouloud. "Représentation et compression à haut niveau sémantique d’images 3D." Thesis, Rennes, INSA, 2014. http://www.theses.fr/2014ISAR0025/document.
Full textThe dissemination of multimedia data, in particular images, continues to grow very significantly; therefore, developing effective image coding schemes remains a very active research area. Today, one of the most innovative technologies in this area is 3D technology, widely used in many domains such as entertainment, medical imaging, education and, very recently, criminal investigations. There are different ways of representing 3D information. One of the most common is to associate a depth image with a classic colour image called the texture. This joint representation allows a good 3D reconstruction, as the two images are well correlated, especially along the contours of the depth image. Therefore, in comparison with conventional 2D images, knowledge of the depth of field in 3D images provides important semantic information about the composition of the scene. In this thesis, we propose a scalable 3D image coding scheme for the 2D + depth representation with advanced functionalities, which preserves all the semantics present in the images while maintaining a significant coding efficiency. Preserving the semantics translates into features such as automatic extraction of regions of interest, the ability to encode the regions of interest with higher quality than the background, post-production of the scene, and indexing. Firstly, we introduce a joint and scalable 2D-plus-depth coding scheme: texture is coded jointly with depth at low resolution, and a method of depth data compression well suited to the characteristics of depth maps is proposed. This method exploits the strong correlation between the depth map and the texture to better encode the depth map. Then, a high-resolution coding scheme is proposed in order to refine the texture quality. Next, we present a fine, global, content-based representation and coding scheme based on "Depth of Interest", called "3D Autofocus". It consists in a fine extraction of objects, while preserving the contours in the depth map, and allows automatically focusing on a particular depth zone for a high rendering quality. Finally, we propose a 3D image segmentation providing high consistency between colour, depth and the regions of the scene. Based on a joint exploitation of the colour and depth information, this algorithm allows the segmentation of the scene with a level of granularity depending on the intended application. Based on such a representation of the scene, it is possible to simply apply the same 3D Autofocus for Depth-of-Interest extraction and coding. Remarkably, both approaches ensure high spatial coherence between texture, depth and regions, minimizing the distortions along the contours of objects of interest and thus yielding a higher quality in the synthesized views.
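In its simplest form, a depth-of-interest extraction reduces to masking the texture with a depth range (a toy sketch of the principle; the thesis' 3D Autofocus additionally preserves object contours in the depth map and drives the bit allocation):

```python
import numpy as np

def depth_of_interest(texture, depth, d_min, d_max):
    """Keep texture pixels whose depth lies in [d_min, d_max]; the
    rest of the scene would be coded at lower quality (zeroed here)."""
    mask = (depth >= d_min) & (depth <= d_max)
    roi = np.zeros_like(texture)
    roi[mask] = texture[mask]   # works for (H, W) or (H, W, 3) textures
    return roi, mask
```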
Baudin, Pierre-Yves. "De la segmentation au moyen de graphes d’images de muscles striés squelettiques acquises par RMN." Thesis, Châtenay-Malabry, Ecole centrale de Paris, 2013. http://www.theses.fr/2013ECAP0033/document.
Full textSegmentation of magnetic resonance images (MRI) of skeletal striated muscles is of crucial interest when studying myopathies. Disease understanding, therapeutic follow-up of patients, etc., rely on discriminating the muscles in MRI anatomical images. However, delineating the muscle contours manually is an extremely long and tedious task, and thus often a bottleneck in clinical research. Typical automatic segmentation methods rely on finding discriminative visual properties between objects of interest, accurate contour detection or clinically interesting anatomical points. Skeletal muscles show none of these features in MRI, making automatic segmentation a challenging problem. In spite of recent advances in segmentation methods, their application in clinical settings is difficult, and most of the time manual segmentation and correction is still the only option. In this thesis, we propose several approaches for segmenting skeletal muscles automatically in MRI, all related to the popular graph-based Random Walker (RW) segmentation algorithm. The strength of the RW method lies in its robustness to weak contours and its fast, global optimization. Originally, the RW algorithm was developed for interactive segmentation: the user had to pre-segment small regions of the image – called seeds – before running the algorithm, which would then complete the segmentation. Our first contribution is a method for automatically generating and labeling all the appropriate seeds, based on a Markov Random Field formulation integrating prior knowledge of the relative positions, and prior detection of contours between pairs of seeds. A second contribution amounts to incorporating prior knowledge of the shape directly into the RW framework. Such a formulation retains the probabilistic interpretation of the RW algorithm and thus allows computing the segmentation by solving a large but simple sparse linear system, as in the original method. In a third contribution, we propose a learning framework to estimate the optimal set of parameters balancing the contrast term of the RW algorithm and the different existing prior models. The main challenge we face is that the training samples are not fully supervised: they provide a hard segmentation of the medical images instead of the optimal probabilistic segmentation, which corresponds to the desired output of the RW algorithm. We overcome this challenge by treating the optimal probabilistic segmentation as a latent variable. This allows us to employ the latent Support Vector Machine (latent SVM) formulation for parameter estimation. All proposed methods are tested and validated on real clinical datasets of MRI volumes of the lower limbs.
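The seeded Random Walker step itself is available off the shelf, e.g. in scikit-image (an illustrative two-label example; the automatic seed generation, shape priors and learned parameters of the thesis are not part of this call):

```python
import numpy as np
from skimage.segmentation import random_walker

def rw_segment(image, fg_coords, bg_coords, beta=130):
    """Two-label Random Walker: 0 = unlabeled, 1 = object seeds,
    2 = background seeds; returns the completed label map."""
    labels = np.zeros(image.shape, dtype=np.int32)
    for y, x in fg_coords:
        labels[y, x] = 1
    for y, x in bg_coords:
        labels[y, x] = 2
    return random_walker(image, labels, beta=beta, mode='bf')
```

With `return_full_prob=True`, the same call yields the probabilistic segmentation that the thesis treats as a latent variable during parameter learning.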
Haj, Hassan Hawraa. "Détection et classification temps réel de biocellules anormales par technique de segmentation d’images." Thesis, Université de Lorraine, 2018. http://www.theses.fr/2018LORR0043.
Full textDeveloping methods to help diagnosis through the real-time detection of abnormal cells (which can be considered cancer cells) by bio-image processing is one of the most important research directions in information science and technology. Our work has been concerned with developing automatic reading procedures for normal and abnormal bio-image tissues. The first step of our work is therefore to detect a certain type of abnormal bio-image, associated with several types of cancer evolution, within a microscopic multispectral image, i.e. an image acquired at many wavelengths. We use a new segmentation method that reshapes itself in an iterative, adaptive way to localize and cover the real cell contour; it is based on color intensity and can be applied to sequences of objects in the image. This work presents a classification of the abnormal tissues using a convolutional neural network (CNN), applied to the microscopic images segmented with the snake method, which gives high-performance results with respect to the other segmentation methods. This classification method reaches high performance values: 100% for training and 99.168% for testing. It was compared to different papers that use different feature extraction methods and proved its high performance with respect to other methods. As future work, we aim to validate our approach on larger datasets and to explore different CNN architectures and the optimization of the hyper-parameters in order to increase performance; it will be applied to relevant medical imaging tasks, including computer-aided diagnosis.
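A CNN classifier of segmented cell patches can be sketched along these lines (a generic Keras architecture for a binary normal-vs-abnormal decision; layer sizes and input shape are illustrative, not those of the thesis):

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_cell_classifier(input_shape=(64, 64, 3)):
    """Small CNN: two conv/pool stages, then a dense head with a
    sigmoid output for the normal-vs-abnormal decision."""
    return keras.Sequential([
        keras.Input(shape=input_shape),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),
    ])

model = build_cell_classifier()
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```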
Aderghal, Karim. "Classification of multimodal MRI images using Deep Learning : Application to the diagnosis of Alzheimer’s disease." Thesis, Bordeaux, 2021. http://www.theses.fr/2021BORD0045.
Full textIn this thesis, we are interested in the automatic classification of brain MRI images to diagnose Alzheimer's disease (AD). We aim to build intelligent models that provide the clinician with decisions about a patient's disease state based on visual features extracted from MRI images. The goal is to classify patients (subjects) into three main categories: healthy subjects (NC), subjects with mild cognitive impairment (MCI), and subjects with Alzheimer's disease (AD). We use deep learning methods, specifically convolutional neural networks (CNN) based on visual biomarkers from multimodal MRI images (structural MRI and DTI), to detect structural changes in the hippocampal region of the limbic cortex. We propose an approach called "2-D+e" applied to our ROI (Region of Interest), the hippocampus. This approach extracts 2D slices from three planes (sagittal, coronal and axial) of our region while preserving the spatial dependencies between adjacent slices along each dimension. We present a complete study of different artificial data augmentation methods and different data balancing approaches, analyzing the impact of these conditions on our models during the training phase. We propose methods for combining information from different sources (projections/modalities), including two fusion strategies (early fusion and late fusion). Finally, we present transfer learning schemes by introducing three frameworks: (i) a cross-modal scheme (using sMRI and DTI), (ii) a cross-domain scheme involving external data (MNIST), and (iii) a hybrid scheme combining (i) and (ii). Our proposed methods are suitable for shallow CNNs applied to multimodal MRI images. They give encouraging results even when the model is trained on small datasets, which is often the case in medical image analysis.
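The "2-D+e" extraction can be pictured as taking, for each anatomical plane, the central slice of the hippocampal ROI together with its immediate neighbours, stacked as channels (a schematic numpy sketch; the exact slice selection is an assumption, not the thesis' published recipe):

```python
import numpy as np

def slices_2d_plus_e(roi, e=1):
    """For a 3D ROI volume, return for each axis (sagittal, coronal,
    axial) the central slice and its e neighbours on each side,
    stacked along a channel dimension to keep spatial dependencies."""
    stacks = []
    for axis in range(3):
        mid = roi.shape[axis] // 2          # assumes shape[axis] > 2*e
        idx = range(mid - e, mid + e + 1)
        sl = np.stack([np.take(roi, i, axis=axis) for i in idx], axis=-1)
        stacks.append(sl)                    # shape (H, W, 2e + 1) per plane
    return stacks
```

Each of the three stacks can then feed one branch of a shallow CNN, with early or late fusion combining the branches.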
Halawana, Hachem. "Dématriçage partiel d’images CFA pour la mise en correspondance stéréoscopique couleur." Thesis, Lille 1, 2010. http://www.theses.fr/2010LIL10149/document.
Full textMost color stereovision setups include single-sensor cameras which provide Color Filter Array (CFA) images. In those images, a single color component is sampled at each pixel rather than the three required ones (R,G,B). We show that standard demosaicing techniques, used to determine the two missing color components, are not well adapted when the resulting color pixels are compared for estimating the disparity map. In order to avoid this problem while exploiting color information, we propose a partial demosaicing designed for dense stereovision based on pairs of Bayer CFA images. Finally, experimental results obtained with benchmark stereo image pairs show that stereo matching applied to partially demosaiced images outperforms stereo matching applied to standard demosaiced images
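A Bayer CFA image can be simulated from a full RGB image to experiment with such partial-demosaicing schemes (a sketch assuming the common RGGB layout; actual sensor layouts vary):

```python
import numpy as np

def to_bayer_rggb(rgb):
    """Simulate an RGGB Bayer CFA image: keep exactly one of the three
    color components at each pixel, as a single-sensor camera would."""
    h, w, _ = rgb.shape
    cfa = np.zeros((h, w), dtype=rgb.dtype)
    cfa[0::2, 0::2] = rgb[0::2, 0::2, 0]   # R at even rows, even cols
    cfa[0::2, 1::2] = rgb[0::2, 1::2, 1]   # G at even rows, odd cols
    cfa[1::2, 0::2] = rgb[1::2, 0::2, 1]   # G at odd rows, even cols
    cfa[1::2, 1::2] = rgb[1::2, 1::2, 2]   # B at odd rows, odd cols
    return cfa
```

Partial demosaicing then estimates only one missing component per pixel (rather than two), so that the matching cost compares color information that is less corrupted by interpolation artifacts.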