Selected scholarly literature on the topic "Traitement d'image vectorielle"
Consult the list of current articles, books, theses, conference proceedings, and other scholarly sources relevant to the topic "Traitement d'image vectorielle".
Theses on the topic "Traitement d'image vectorielle"
Even, Melvin. "Animation 2D esquissée assistée par ordinateur". Electronic Thesis or Diss., Bordeaux, 2024. http://www.theses.fr/2024BORD0457.
Traditional 2D animation is a time-intensive process requiring a high level of expertise, as animators must draw thousands of individual frames by hand to create a complete animation sequence. With the advent of computer animation, artists now have the option to use 3D animation or 2D puppet-style techniques, which eliminate the need to manually draw each frame but often sacrifice the unique workflows and artistic style characteristic of traditional 2D animation. Alternatively, digital tools that emulate traditional 2D animation streamline tasks like drawing and coloring, but still require animators to draw each frame by hand. In academia, many computer-assisted methods have been developed to automate the inbetweening of clean line drawings, in which intermediate frames are automatically generated from input key drawings. However, these methods tend to be constrained by rigid workflows and limited artistic control, largely due to the challenges of stroke matching and interpolation. In this thesis, we take a novel approach to address these limitations by focusing on an earlier stage of animation using rough drawings (i.e., sketches). Our key innovation is to recast the matching and interpolation problems using transient embeddings, which consist of groups of strokes that exist temporarily across keyframes. A transient embedding carries strokes between keyframes both forward and backward in time through a sequence of transformed lattices. Our system generates rough inbetweens in real time, allowing artists to preview and edit their animation using tools that offer precise artistic control over the dynamics of the animation. This ensures smooth continuity of motion, even when complex topological changes are introduced, and enables non-linear exploration of movements. We demonstrate these capabilities on state-of-the-art 2D animation examples. Another notoriously difficult task is the representation of 3D motion and depth through 2D animated drawings.
Artists must pay particular attention to occlusions and how they evolve through time, a tedious process. Computer-assisted inbetweening methods such as cut-out animation tools allow such occlusions to be handled beforehand using a 2D rig, at the expense of flexibility and artistic expression. We also address occlusion handling without sacrificing non-linear control. Our contribution in this matter is twofold: a fast method to compute 2D masks from rough drawings, with a semi-automatic dynamic layout system for occlusions between drawing parts; and an artist-friendly method to both automatically and manually control the dynamic visibility of strokes for self-occlusions. Our system helps artists produce convincing 3D-like 2D animations, including head turns, foreshortening effects, out-of-plane rotations, overlapping volumes and even transparency.
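The lattice-based interpolation at the heart of this abstract can be sketched as follows. This is a minimal illustration only: the function names, the linear vertex interpolation, and the fixed per-point weights are assumptions for the sketch, not the thesis's actual system, which supports editable, non-linear trajectories.

```python
import numpy as np

def lerp_lattice(lattice_a, lattice_b, t):
    """Interpolate the vertex positions of an embedding lattice between two
    keyframe configurations (t in [0, 1]); illustrative linear version."""
    return (1.0 - t) * lattice_a + t * lattice_b

def deform_point(weights, vertex_ids, lattice):
    """Reconstruct one stroke point from fixed weights over lattice vertices,
    so strokes follow the lattice as it deforms between keyframes."""
    return (weights[:, None] * lattice[vertex_ids]).sum(axis=0)
```

Under this scheme each stroke point stores its weights once, in the keyframe where the embedding is created, and is cheaply re-evaluated on every interpolated lattice, which is consistent with generating rough inbetweens in real time.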
Chanussot, Jocelyn. "Approches vectorielles ou marginales pour le traitement d'images multi-composantes". Chambéry, 1998. http://www.theses.fr/1998CHAMS025.
Chevaillier, Béatrice. "Analyse de données d'IRM fonctionnelle rénale par quantification vectorielle". Electronic Thesis or Diss., Metz, 2010. http://www.theses.fr/2010METZ005S.
Dynamic contrast-enhanced magnetic resonance imaging has great potential for renal function assessment but has to be evaluated on a large scale before its clinical application. Registration of image sequences and segmentation of internal renal structures are mandatory in order to exploit the acquisitions. We propose a reliable and user-friendly tool to partially automate these two operations. Statistical registration methods based on mutual information are tested on real data. Segmentation of cortex, medulla and cavities is performed using the time-intensity curves of renal voxels in a two-step process. Classifiers are first built with pixels of the slice that contains the largest proportion of renal tissue: two vector quantization algorithms, namely K-means and Growing Neural Gas with targeting, are used here. These classifiers are tested on synthetic data. For real data, as no ground truth is available for result evaluation, a manual anatomical segmentation is taken as a reference. Discrepancy criteria such as overlap, extra pixels and a similarity index are computed between this reference and the functional segmentation; the same criteria are also evaluated between the reference and another manual segmentation, and the results are comparable for the two types of comparison. Voxels of the other slices are then sorted with the optimal classifier, and generalization theory allows the classification error of this extension to be bounded. The main advantages of the functional methods are considerable time savings, easy manual intervention, and good robustness and reproducibility.
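The two-step classification described above can be sketched with a plain K-means vector quantizer plus a Dice-style similarity index. This is a minimal illustration assuming Euclidean distance on raw time-intensity curves; it is not the thesis's exact pipeline, which also evaluates a Growing Neural Gas variant.

```python
import numpy as np

def kmeans_vq(curves, k, n_iter=50, seed=0):
    """Cluster per-voxel time-intensity curves with plain K-means
    (one of the two vector quantization algorithms mentioned above)."""
    rng = np.random.default_rng(seed)
    codebook = curves[rng.choice(len(curves), k, replace=False)]
    for _ in range(n_iter):
        # assign each curve to its nearest codeword (Euclidean distance)
        d = np.linalg.norm(curves[:, None, :] - codebook[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each codeword to the centroid of its assigned curves
        for j in range(k):
            if np.any(labels == j):
                codebook[j] = curves[labels == j].mean(axis=0)
    return codebook, labels

def dice_index(mask_a, mask_b):
    """Similarity index between a functional and a manual segmentation mask."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())
```

The resulting labels play the role of the functional segmentation, which can then be scored against a manual reference mask with `dice_index`.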
Lee, Hyun-Soo. "Étude comparative de la classification de textures fondée sur la représentation vectorielle". Le Havre, 1992. http://www.theses.fr/1992LEHA0002.
Chevaillier, Béatrice. "Analyse de données d'IRM fonctionnelle rénale par quantification vectorielle". Phd thesis, Université de Metz, 2010. http://tel.archives-ouvertes.fr/tel-00557235.
Testo completoRafaa, Az-Eddine. "Représentation multirésolution et compression d'images : ondelettes et codage scalaire et vectoriel". Metz, 1994. http://docnum.univ-lorraine.fr/public/UPV-M/Theses/1994/Rafaa.Az_Eddine.SMZ9447.pdf.
The wavelet pyramid is a new form of image representation, and its coding scheme enables a new method of image data compression. This technique may well supplant the well-established block-transform methods that have been the state of the art in recent years. The work is organized in three parts. The first part provides a theoretical and historical synthesis of multiresolution representations, converging on the wavelet-pyramid representation of images. The second part investigates the scalar point of view of the different pyramid components: statistical characteristics, and coding schemes with scalar quantization or with a dead zone, together with optimal bit allocation, whether analytical or algorithmic. Finally, the third part deals with vector coding. The pyramid is dispatched into elementary entities arranged in a hierarchical vector tree, to which a hybrid coding based on scalar quantization (QS) and vector quantization (QV) is applied. This structure captures local directivity and frequency activity in the image, and thereby introduces a notion of activity classification. To extend the validity of the proposed encoder, extensions to color images and image sequences are also presented.
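One analysis level of the kind of multiresolution representation discussed above, followed by dead-zone scalar quantization of a detail subband, might be sketched like this. The unnormalized Haar filter bank and the single-threshold dead zone are simplifying assumptions for illustration; the thesis's actual filters and bit-allocation schemes are not reproduced here.

```python
import numpy as np

def haar_level(img):
    """One level of a 2D Haar-style analysis: an approximation subband (LL)
    and three oriented detail subbands (LH, HL, HH)."""
    lo = (img[0::2, :] + img[1::2, :]) / 2.0   # average along rows
    hi = (img[0::2, :] - img[1::2, :]) / 2.0   # difference along rows
    ll = (lo[:, 0::2] + lo[:, 1::2]) / 2.0
    lh = (lo[:, 0::2] - lo[:, 1::2]) / 2.0
    hl = (hi[:, 0::2] + hi[:, 1::2]) / 2.0
    hh = (hi[:, 0::2] - hi[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def dead_zone_quantize(band, step, zone):
    """Scalar quantization with a dead zone: coefficients whose magnitude is
    below `zone` are set to zero, the rest are rounded to a multiple of `step`."""
    return np.where(np.abs(band) < zone, 0.0, np.round(band / step) * step)
```

Recursing `haar_level` on the LL subband yields the pyramid; the dead zone discards the many near-zero detail coefficients, which is where most of the compression comes from.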
Akrout, Nabil. "Contribution à la compression d'images par quantification vectorielle : algorithmes et circuit intégré spécifique". Lyon, INSA, 1995. http://www.theses.fr/1995ISAL0017.
Recently, vector quantization (VQ) has received considerable attention and become an effective tool for image compression: it provides high compression ratios and a simple decoding process. However, studies on practical implementations of VQ have revealed some major difficulties, such as edge integrity and codebook design efficiency. After reviewing the state of the art in vector quantization, we focus on the following.
- Iterative and non-iterative codebook generation algorithms. The main idea of non-iterative algorithms is to create the codewords progressively, during a single scan of the training set. At the beginning, the codebook is initialized with the first vector found in the training set; each subsequent input vector is then mapped to the nearest-neighbor codeword, which minimizes the distortion error. This error is compared with pre-defined thresholds. The performance of iterative and non-iterative codebook generation algorithms is compared: the codebooks generated by non-iterative algorithms require less than 2 percent of the time required by iterative algorithms.
- A new procedure for image compression that improves on vector quantization of subbands. Codewords with vertical and horizontal shapes are used to vector-quantize the high-frequency sub-images obtained from a multiresolution analysis scheme. The codeword shapes take into account the orientation and resolution of each subband's details in order to preserve edges at low bit rates; their sizes are defined according to the correlation distances in each subband in the horizontal and vertical directions.
- Dedicated hardware. The intensive computational demands of vector quantization for important applications in speech and image compression have motivated the need for dedicated processors with very high throughput capabilities. Bit-serial systolic architectures offer one of the most promising approaches to meeting the demanding VQ speed requirements of many applications. We propose a novel family of architectural techniques that offer efficient computation of Manhattan distance measures for nearest-neighbor codebook searching. The Manhattan distance requires much less computation and VLSI chip area than the Euclidean distance, because it needs no multiplier; this gave rise to the idea of implementing it directly in hardware for real-time image coding. Very high VQ throughput can be achieved through massive parallelism, but at the cost of a large VLSI chip area. To avoid this difficulty, we carry out bit-serial pipelined processing for nearest-neighbor codebook searching. This architecture is better suited to real-time coding, and several alternative configurations allow reasonable tradeoffs between speed and the VLSI chip area required.
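The one-pass codebook construction and the multiplier-free Manhattan search described above can be sketched as follows. The single fixed threshold is a simplification of the thesis's thresholding scheme, and the function names are illustrative.

```python
import numpy as np

def grow_codebook(training, threshold):
    """Non-iterative codebook design: one pass over the training set; a vector
    farther than `threshold` from every existing codeword becomes a new codeword."""
    codebook = [training[0].astype(float)]
    for v in training[1:]:
        # Manhattan (L1) distance to every codeword: additions only, no multiplier
        dists = [np.abs(v - c).sum() for c in codebook]
        if min(dists) > threshold:
            codebook.append(v.astype(float))
    return np.array(codebook)

def nearest_codeword(v, codebook):
    """Nearest-neighbor search with the same multiplier-free Manhattan metric."""
    return int(np.abs(codebook - v).sum(axis=1).argmin())
```

Because the L1 metric needs only subtractions, absolute values and additions, the inner loop of `nearest_codeword` maps naturally onto the bit-serial hardware discussed in the abstract.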
Alshatti, Wael. "Approches vectorielles du filtrage et de la détection de contours dans des images multi-spectrales". Chambéry, 1994. http://www.theses.fr/1994CHAMS022.
Furlan, Gilbert. "Contribution à l'étude et au développement d'algorithmes de traitement du signal en compression de données et d'images". Nice, 1990. http://www.theses.fr/1990NICE4433.
Testo completoMoravie, Philippe. "Parallélisation d'une méthode de compression d'images : transformée en ondelettes, quantification vectorielle et codage d'Huffman". Toulouse, INPT, 1997. http://www.theses.fr/1997INPT123H.