Dissertations / Theses on the topic 'Vues multiples'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 28 dissertations / theses for your research on the topic 'Vues multiples.'
Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Carrasco, Miguel. "Vues Multiples non-calibrées : Applications et Méthodologies." PhD thesis, Université Pierre et Marie Curie - Paris VI, 2010. http://tel.archives-ouvertes.fr/tel-00646709.
Durand, Stéphane. "Représentation des points de vues multiples dans une situation d'urgence : une modélisation par organisations d'agents." Le Havre, 1999. http://www.theses.fr/1999LEHA0005.
Ivaldi, William. "Synthèse de vue frontale et modélisation 3D de visages par vision multi-caméras." Paris 6, 2007. http://www.theses.fr/2007PA066221.
The purpose of this thesis, prepared with SAGEM Sécurité, a worldwide biometry leader, is to study a real-time facial reconstruction system for a face recognition application. The algorithm must generate a frontal view of an unknown face, without constraints, as the subject walks past the 4 video cameras. After testing a classic stereovision approach, we evaluate the well-known AAM models, but their deformation modes do not give acceptable convergence on unknown faces. We then define an original 3D Radial Model by hemispheric projection of 3D face scans, to obtain deformation modes adapted to the constraints. After the 3D radial model is adjusted, the frontal view is obtained by fusing the 4 source images using a visibility rule applied at every position of the model surface. This virtual view is computed from a frontal point of view depending on the 'face' normal of the model. The virtual view synthesis is performed in real time using the graphics card's GPU resources.
Ismael, Muhannad. "Reconstruction de scène dynamique à partir de plusieurs vidéos mono- et multi-scopiques par hybridation de méthodes « silhouettes » et « multi-stéréovision »." Thesis, Reims, 2016. http://www.theses.fr/2016REIMS021/document.
Accurate reconstruction of a 3D scene from multiple cameras offers 3D synthetic content to be used in many applications such as entertainment, TV, and cinema production. This thesis is placed in the context of the RECOVER3D collaborative project, whose aim is to provide efficient, high-quality, innovative solutions for the 3D acquisition of actors. The RECOVER3D acquisition system is composed of several tens of synchronized cameras scattered around the observed scene within a chromakey studio in order to build the visual hull, with several groups laid out as multiscopic units dedicated to multi-baseline stereovision. A multiscopic unit is defined as a set of aligned and evenly distributed cameras. This thesis proposes a novel framework for multi-view 3D reconstruction relying on both multi-baseline stereovision and the visual hull. This method's inputs are a visual hull and several sets of multi-baseline views. For each such view set, a multi-baseline stereovision method yields a surface which is used to carve the visual hull. Carved visual hulls from different view sets are then fused iteratively to deliver the intended 3D model. Furthermore, we propose a framework for multi-baseline stereovision which provides, over the Disparity Space (DS), a materiality map expressing the probability for 3D sample points to lie on a visible surface. The results confirm i) the efficiency of using the materiality map to deal with commonly occurring problems in multi-baseline stereovision, in particular for semi- or partially occluded regions, and ii) the benefit of merging visual hull and multi-baseline stereovision methods to produce 3D object models with high precision.
Conze, Pierre-Henri. "Estimation de mouvement dense long-terme et évaluation de qualité de la synthèse de vues. Application à la coopération stéréo-mouvement." PhD thesis, INSA de Rennes, 2014. http://tel.archives-ouvertes.fr/tel-00992940.
Dekker, Lenneke. "Frome : représentation multiple et classification d'objets avec points de vue." Lille 1, 1994. http://www.theses.fr/1994LIL10201.
Cosquer, Ronan. "Conception d'un sondeur de canal MIMO - Caractérisation du canal de propagation d'un point de vue directionnel et doublement directionnel." PhD thesis, INSA de Rennes, 2004. http://tel.archives-ouvertes.fr/tel-00007560.
Full textPellicanò, Nicola. "Tackling pedestrian detection in large scenes with multiple views and representations." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS608/document.
Pedestrian detection and tracking have become important fields in Computer Vision research, due to their implications for many applications, e.g. surveillance, autonomous cars, robotics. Pedestrian detection in high-density crowds is a natural extension of this body of research. The ability to track each pedestrian independently in a dense crowd has multiple applications: the study of human social behavior at high densities; anomaly detection; large-event infrastructure planning. On the other hand, high-density crowds introduce novel problems for the detection task. First, clutter and occlusion problems are taken to the extreme, so that only heads are visible, and they are not easily separable from the moving background. Second, heads are usually small (typically less than ten pixels in diameter) and have little or no texture. This follows from two independent constraints: the need for each camera to have as wide a field of view as possible, and the need for anonymization, i.e. the pedestrians must not be identifiable, for privacy reasons. In this work we develop a complete framework to handle the pedestrian detection and tracking problems in the presence of these novel difficulties, using multiple cameras in order to implicitly handle the severe occlusion issues. As a first contribution, we propose a robust method for camera pose estimation in surveillance environments. We handle problems such as large distances between cameras, large perspective variations, and scarcity of matching information, by exploiting an entire video stream to perform the calibration, in such a way that it exhibits fast convergence to a good solution.
Moreover, we are concerned not only with a globally good solution, but also with reaching low local errors. As a second contribution, we propose an unsupervised multiple-camera detection method which exploits the visual consistency of pixels between multiple views in order to estimate the presence of a pedestrian. After a fully automatic metric registration of the scene, one is capable of jointly estimating the presence of a pedestrian and his or her height, allowing detections to be projected onto a common ground plane and thus enabling 3D tracking, which can be much more robust than image-space-based tracking. In the third part, we study different methods for performing supervised pedestrian detection on single views. Specifically, we aim to build a dense pedestrian segmentation of the scene starting from spatially imprecise labeling of the data, i.e. head centers instead of full head contours, since their extraction is unfeasible in a dense crowd. Most notably, deep architectures for semantic segmentation are studied and adapted to the problem of small-head detection in cluttered environments. As a last contribution, we propose a novel framework for efficient information fusion in 2D spaces. The final aim is to perform multiple-sensor fusion (supervised detectors on each view, and an unsupervised detector on multiple views) at ground-plane level, which is thus our discernment frame. Since the space complexity of such a discernment frame is very large, we propose an efficient compound-hypothesis representation which has been shown to be invariant to the scale of the search space. Through this representation, we are able to define efficient basic operators and combination rules of Belief Function Theory. Furthermore, we propose a complementary graph-based description of the relationships between compound hypotheses (i.e. intersections and inclusions), enabling efficient algorithms for, e.g., high-level decision making. Finally, we demonstrate our information-fusion approach both at a spatial level, i.e. between detectors of different natures, and at a temporal level, by performing evidential tracking of pedestrians in real large-scale scenes under sparse and dense conditions.
Ribière, Myriam. "Representation et gestion de multiples points de vue dans le formalisme des graphes conceptuels." Nice, 1999. http://www.theses.fr/1999NICE5289.
Dubois, Jérôme. "Homogénéisation dynamique de milieux aléatoires en vue du dimensionnement de métamatériaux acoustiques." Thesis, Bordeaux 1, 2012. http://www.theses.fr/2012BOR14512/document.
Metamaterials are promising media for acoustic imaging. For example, such media make it possible to build flat lenses exhibiting sub-diffraction-limit resolution, thereby improving imaging setups. Despite researchers' growing interest in metamaterials, acoustic wave propagation in them is still not fully understood. This work addresses the topic of wave propagation in metamaterials. We define a criterion which differentiates metamaterials from classical materials and provides new insight into the amplification of evanescent waves. We explore how to design metamaterials with random media, focusing on two-dimensional media with fluid components. Existing dynamic homogenization techniques are validated by comparing the responses of a screen of scatterers obtained by FDTD numerical simulations with those predicted by the analytical models. The study of these models, useful for designing random media with atypical responses, led us to consider their quasi-static limit. In this context, we propose a homogenization technique which explicitly includes the interactions between scatterers. It is developed for both multiple and simple scattering and links the effective properties to the averages of the acoustic fields in a representative volume. Finally, we analyze the acoustic responses of a realistic random medium exhibiting a theoretical negative-refraction frequency bandwidth obtained thanks to low-frequency resonant scatterers. Different atypical responses are identified in the numerical simulations. A comparison between the responses of this medium and those of phononic crystals shows a surprising similarity between the two arrangements.
Femmam, Chafika. "Approche des systèmes graphiques et focalisation sur FLM/FLE : méthodologie à angles de vue multiples." Besançon, 2006. http://www.theses.fr/2006BESA1015.
The thesis is about writing conceived of as a graphic system. A varied methodology is adopted in order to focus on problems relative to writing and on how writing is taught in the classroom. The subject is first approached historically, in an attempt to question a definition of writing limited to the simple notation of sounds. Dating the invention of writing back 35,000 years, in contrast to the conventional date that many specialists set at 3500 B.C., we have reconsidered prehistoric traces as a first graphic expression and a determining step toward modern written forms. From here we have shown that writing is mainly of iconic origin, an intelligently constructed space that the eye interrogates so as to attribute to it a significance independent of phonic representation. Wanting to verify whether this new conception had influenced how writing is taught, we used two experimental methods: questionnaires sent to primary school teachers working in the first year of scriptural teaching/learning, and observation of classes involved in this activity. The publics concerned are French primary school teachers, with first-year classes (A.F.) for Arabic as a mother tongue and fourth-year classes (4ème A.F.) for French as a foreign language. The results show that, in school, writing is still viewed in its most restrictive conception, linking it closely to the oral language, and that the exercises which prepare students to appropriate it have not really evolved for nearly a century. To remedy this situation we think that, without neglecting the oral, writing must first be liberated from it. In other words, it is indispensable to conceive of innovative scriptural exercises that are less constraining and more efficient, all the while taking into account the true and complex nature of writing.
Morgand, Alexandre. "Un modèle géométrique multi-vues des taches spéculaires basé sur les quadriques avec application en réalité augmentée." Thesis, Université Clermont Auvergne (2017-2020), 2018. http://www.theses.fr/2018CLFAC078/document.
Augmented Reality (AR) consists in inserting virtual elements into a real scene, observed through a screen or via a projection system on the scene or the object of interest. Augmented reality systems can take different forms to strike a balance between three criteria: precision, latency, and robustness. Three main components of these systems can be identified: localization, reconstruction, and display. The contributions of this thesis focus essentially on the display, and more particularly on rendering, in augmented reality applications. Contrary to the recent advances in the fields of localization and reconstruction, inserting virtual elements in a plausible and aesthetic way remains a complicated, ill-posed problem, not well suited to a real-time context. Indeed, this insertion requires a good understanding of the lighting conditions of the scene, which can be considered at several levels. First, we can model the environment to describe the interaction between the incident and reflected light for each 3D point of a surface. Second, it is also possible to describe the environment explicitly by computing the position of the light sources, their type (desk lamps, fluorescent lamps, light bulbs, ...), their intensities, and their colors. Finally, to insert a virtual object in a coherent and realistic way, it is essential to know the surface's geometry, its chemical composition (material), and its color. For all of these aspects, reconstructing the illumination is difficult, because it is very hard to isolate the illumination without prior knowledge of the geometry and material of the scene and of the pose of the camera observing it. In general, on a surface, a light source leaves several traces, such as shadows, created by the occlusion of light rays by an object, and specularities (or specular reflections), which are created by the partial or total reflection of the light.
These specularities are often described as very-high-intensity elements in the image. Although specularities are often considered as outliers for applications such as camera localization, reconstruction, or segmentation, they give crucial information on the position and color of the light source, but also on the geometry of the surface and the reflectance of the material where they appear. To address the light-modeling problem, we focused in this thesis on the study of specularities and on all the information they can provide for the understanding of the scene. More specifically, we know that a specularity is defined as the reflection of the light source on a shiny surface. From this statement, we explored the possibility of considering the specularity as the image created by the projection of a 3D object in space. We started from the simple observation, little studied in the literature, that specularities present an elliptical shape when they appear on a planar surface. From this hypothesis, can we consider the existence of a 3D object fixed in space such that its perspective projection in the image fits the shape of the specularity? We know that an ellipsoid under perspective projection gives an ellipse. Considering the specularity as a geometric phenomenon presents various advantages. First, the reconstruction of a 3D object, and more specifically of an ellipsoid, has been the subject of many publications in the state of the art. Second, this modeling allows great flexibility in tracking the state of the specularity, and more specifically of the light source. Indeed, if the light is turned off, it is easy to see in the image whether the specularity disappears, provided we know its contour (and reciprocally if the light is turned on again). (...)
Kachkouch, Fatima Zahraa. "Développement de méthodes ultrasonores en vue de la caractérisation des milieux à porosité multiple." Thesis, Normandie, 2019. http://www.theses.fr/2019NORMLH17/document.
The aim of this thesis is to study the propagation of acoustic waves in a double-porosity medium. The analytical characterization of the vibration modes of a double-porosity plate has been studied. The theoretical study was validated by experimental ultrasonic measurements on two granular materials, double-porosity and single-porosity glass beads, through the determination of a theory-experiment comparison coefficient. These measurements also allowed the detection of the four waves propagating in a double-porosity medium. An experimental device was developed to characterize the clogging phenomenon that affects porous media when they are traversed by a fluid loaded with suspended particles. This phenomenon affects, for example, the filters used in the clarification of dirty water. The non-destructive ultrasonic method was combined with the destructive method generally used in laboratories to monitor the deposition, in time and space, in three porous media subjected to the injection of turbid solutions over a long period. Correlations between the acoustic properties (phase velocity and energy of the transmitted signal) and the variation of the porosity of the medium as a consequence of the deposition are obtained. All results show good agreement between the two methods.
Signol, François. "Estimation de fréquences fondamentales multiples en vue de la séparation de signaux de parole mélangés dans un même canal." PhD thesis, Université Paris Sud - Paris XI, 2009. http://tel.archives-ouvertes.fr/tel-00618687.
Pigache, François. "Modélisation causale en vue de la commande d'un translateur piézoélectrique plan pour une application haptique." Lille 1, 2005. https://pepite-depot.univ-lille.fr/LIBRE/Th_Num/2005/50376-2005-Pigache.pdf.
Full textBedetti, Thomas. "Etude de la diffusion multiple des ultrasons en vue de la modélisation du bruit de structure et de la caractérisation des aciers moulés inoxydables à gros grains." Paris 7, 2012. http://www.theses.fr/2012PA077227.
This thesis presents a study of ultrasonic multiple scattering in media with a complex microstructure. In the context of Non-Destructive Testing (NDT), our media of interest were forged and molded stainless steel samples often used in the nuclear industry. An experimental study was first conducted using linear phased arrays, by measuring the inter-element response matrix in a wide frequency band (1 MHz to 12 MHz). Values of characteristic parameters of multiple scattering were deduced by post-processing this matrix (elastic mean free path, correlation distance). In addition, the coherent backscattering effect that can appear in these steels was highlighted and studied; by exploiting this phenomenon, the diffusion constant D was measured. A structural-noise simulation method was then developed, based on the diffusion approximation and a numerical algorithm that generates random correlated noise. The experimentally measured parameters were used as inputs to the method. In keeping with the NDT context, this method takes the influence of the transducers into account, at emission and at reception. Comparisons between simulated and experimental noise levels show good agreement when the diffusive regime is established.
Carré, Bernard. "Méthodologie orientée objet pour la représentation des connaissances : concepts de point de vue, de représentation multiple et évolutive d'objet." Lille 1, 1989. http://www.theses.fr/1989LIL10018.
Pigache, François. "Modélisation Causale en vue de la Commande d'un translateur piézoélectrique plan pour une application haptique." PhD thesis, Université des Sciences et Technologie de Lille - Lille I, 2005. http://tel.archives-ouvertes.fr/tel-00011938.
Full textMichoud, Brice. "Reconstruction 3D à partir de séquences vidéo pour l’acquisition du mouvement de personnages en temps réel et sans marqueur." Thesis, Lyon 1, 2009. http://www.theses.fr/2009LYO10156/document.
We aim at automatically capturing the 3D motion of persons without markers. To make the approach flexible and suitable for interactive applications, we target a real-time solution without specialized instrumentation. Real-time body estimation and shape analysis lead to home motion-capture applications. We begin by addressing the problem of real-time 3D reconstruction of moving objects from multiple views. Existing approaches often involve complex computation methods, making them incompatible with real-time constraints. Shape-From-Silhouette (SFS) approaches provide an interesting compromise between algorithmic efficiency and accuracy: they estimate 3D objects from their silhouettes in each camera. However, they require constrained environments and camera placements. The work presented in this document generalizes the use of SFS approaches to uncontrolled environments. The main marker-less motion-capture methods are based on parametric modeling of the human body, and the goal of motion acquisition is to determine the parameters that provide the best correlation between the model and the 3D reconstruction. The following approaches, which are more robust, use a natural marking of the body extremities: the skin. Coupled with a temporal Kalman filter, a registration of simple geometric objects, or an ellipsoid decomposition, we have proposed two real-time approaches with a mean error of 6%. Thanks to its robustness, the approach allows the simultaneous tracking of several people, even in contact. The results obtained open up prospects for a transfer to home applications.
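The Shape-From-Silhouette principle mentioned in this abstract can be sketched in a few lines: a voxel belongs to the visual hull only if it projects inside the silhouette seen by every camera. The toy orthographic cameras, projection function, and disk silhouettes below are illustrative assumptions, not the thesis's actual setup.

```python
def inside_silhouette(sil, row, col):
    """True if pixel (row, col) lies inside the image and on the silhouette."""
    return 0 <= row < len(sil) and 0 <= col < len(sil[0]) and sil[row][col]

def carve_visual_hull(grid, silhouettes, project):
    """Keep a voxel only if it projects inside every camera's silhouette."""
    return [p for p in grid
            if all(inside_silhouette(sil, *project(view, p))
                   for view, sil in enumerate(silhouettes))]

# Toy setup: two orthographic cameras looking down the x- and y-axes,
# each seeing a disk of radius 0.8 (the silhouette of a sphere).
N = 21
axis = [-1.0 + 2.0 * i / (N - 1) for i in range(N)]
disk = [[u * u + v * v <= 0.8 ** 2 for v in axis] for u in axis]

def to_pixel(c):
    # Map a coordinate in [-1, 1] to a pixel index in 0..N-1.
    return round((c + 1.0) / 2.0 * (N - 1))

def project(view, p):
    x, y, z = p                       # view 0 drops x; view 1 drops y
    u, v = (y, z) if view == 0 else (x, z)
    return to_pixel(u), to_pixel(v)

grid = [(x, y, z) for x in axis for y in axis for z in axis]
hull = carve_visual_hull(grid, [disk, disk], project)
```

With these two views the hull is the intersection of two perpendicular cylinders: the centre of the volume survives the carving while the cube's corners are removed. Real SFS systems replace the toy `project` with calibrated perspective cameras.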
Kalaji, Mohamed Nader. "Élaboration d’un système de libération contrôlée des facteurs de croissance FGF-2 et TGF-β1 en vue de leur utilisation en odontologie conservatrice et endodontie." Thesis, Lyon 1, 2010. http://www.theses.fr/2010LYO10224/document.
The purpose of this work was to investigate the effect of FGF-2 and TGF-β1 on the early steps of dentin regeneration, using microencapsulation of these factors into a microparticle matrix to protect the growth factors and provide a bioactive sustained release in contact with dental pulp cells, and then to apply the obtained microparticles in direct pulp capping using a culture model of the entire tooth. This work involves the optimization of the technical means used to achieve encapsulation of TGF-β1 and FGF-2 using poly(lactic-co-glycolic acid) (PLGA). Physicochemical and colloidal characterization of the microspheres shows that the microparticles retain their physicochemical characteristics after drying and re-suspension in water. The double-emulsion method was used to encapsulate FGF-2 and TGF-β1 separately. Microparticle morphology, loading, shelf life, potential toxicity, and release kinetics were studied. The proliferation of dental pulp cells was then examined in contact with the microparticles. Biological studies show no toxic effect of the particles on pulp fibroblasts, and the growth factors kept their specific biological activity. A culture model of the entire human tooth was used to apply the microparticles as a direct pulp-capping material and confirm their biological activity ex vivo. These microparticles can be useful in studies of the early steps of dentin regeneration and of the activation and migration of progenitor cells in the dental pulp.
Bedetti, Thomas. "Étude de la diffusion multiple des ultrasons en vue de la modélisation du bruit de structure et de la caractérisation des aciers moulés inoxydables à gros grains." Phd thesis, Université Paris-Diderot - Paris VII, 2012. http://pastel.archives-ouvertes.fr/pastel-00781177.
Lage Nogueira, Jose Wilson. "Faisabilité technico-économique d'un système solaire complexe constitué d'un bassin solaire et d'un distillateur à multiples effets en vue de la production simultanée de sel et d'eau distillée." Nice, 1991. http://www.theses.fr/1991NICE4517.
Full textBuchheit, Pauline. "Le recueil de multiples finalités de l'environnement en amont d'un diagnostic de vulnérabilité et de résilience : Application à un bassin versant au Laos." Electronic Thesis or Diss., Paris, Institut agronomique, vétérinaire et forestier de France, 2016. http://www.theses.fr/2016IAVF0008.
Lao PDR is a landlocked country with low population density, now engaged in a process of regional economic integration after suffering several decades of wars related to decolonization and the Cold War. Very fast economic growth, based on the development of transport infrastructure and natural-resource exploitation, has had large, differentiated impacts on populations and their resource-based livelihoods. The concepts of resilience and vulnerability have been used in different disciplines to analyze and manage the dynamics of geographical areas and social groups facing rapid and uncertain changes. Both concepts are used within a variety of frameworks for analyzing society-environment relationships. While all the reviewed frameworks take into account multiple scales of analysis in order to tackle the complexity of the studied phenomena, they do not assess vulnerability and resilience at the same scales. In particular, some frameworks are actor-centered, while others are system-centered. The scale and limits of the socioecological system whose resilience or vulnerability is assessed depend on the issues the authors want to tackle. Before such an assessment, it therefore seems necessary to identify the issues of resilience and vulnerability to be addressed, a task that should not be taken over by scientists alone, but shared with other stakeholders. The question is: how can we incorporate multiple viewpoints in the design of the system? To this end, our framework considers a socioecological system both as a specific representation of the environment offered by a stakeholder, and as a set of elements contributing to one function. This system is organized in a hierarchy of levels of observation, in which each level corresponds to an intermediary function.
We developed and tested a process to collect system representations of the environment from various stakeholders, that is to say, the way they structure a socioecological system that makes sense to them, according to the purposes they assign to their environment. This approach was tested in the catchment area of the Nam Lik river, Fuang district, Vientiane province, where the Nam Lik 1-2 hydropower dam was built in 2010. A series of workshops was held with residents of the study area, employees of local government, and Lao National University teachers. As the earliest stage of a vulnerability or resilience assessment in the study area, this thesis proposes a reflection on the possible framings of these concepts, as well as methods to collect them from multiple stakeholders.
Kalaji, Mohamed Nader. "Élaboration d'un système de libération contrôlée des facteurs de croissance FGF-2 et TGF-β1 en vue de leur utilisation en odontologie conservatrice et endodontie." Phd thesis, Université Claude Bernard - Lyon I, 2010. http://tel.archives-ouvertes.fr/tel-00881178.
Marchant, Thierry. "Agrégation de relations valuées par la méthode de Borda, en vue d'un rangement: considérations axiomatiques." Doctoral thesis, Université Libre de Bruxelles, 1996. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/212380.
Full textDepuis 20 à 30 ans, l'aide multicritère à la décision est apparue. L'expansion de cette nouvelle discipline s'est marquée dans la littérature essentiellement par un foisonnement de nouvelles méthodes multicritères d'aide à la décision et par des applications de celles-ci à des problèmes "réels". Pour la plupart de ces méthodes, il n'y pas ou peu de fondements théoriques. Seul le bon sens a guidé les créateurs de ces méthodes.
Depuis une dizaine d'années, le besoin de bases théoriques solides se fait de plus en plus sentir. C'est dans cette perspective que nous avons réalisé le présent travail. Ceci étant dit, nous n'allons pas vraiment nous occuper de méthodes multicritères à la décision dans ce travail, mais seulement de fragments de méthodes. En effet, les méthodes multicritères d'aide à la décision peuvent généralement être décomposées en trois parties (outre la définition de l'ensemble des alternatives et le choix des critères):
- Modélisation des préférences: pendant cette étape, les préférences du décideur sont modélisées le long de chaque critère.
- Agrégation des préférences: un modèle global de préférences est construit au départ des modèles obtenus critère par critère à la fin de la phase précédente.
- Exploitation des résultats de l'agrégation: du modèle global de préférences issu de la phase 2, on déduit un choix, un rangement, une partition, selon les besoins.
Jusqu'à présent, à cause de la difficulté du problème, peu de méthodes ont été axiomatisées de bout en bout; la plupart des travaux ne s'intéressent qu'à une ou deux des trois étapes que nous venons de décrire.
We focused on a well-known method: the Borda method. It takes binary relations as input data, and therefore intervenes after the preference-modelling phase. The result of this method is a ranking, so it performs the operations corresponding to steps 2 and 3. In the remainder of this work, we call a ranking method any method that performs steps 2 and 3 to produce a ranking. Since ranking methods, and the Borda method in particular, are also used in social choice, we draw heavily on the vocabulary, tools and results of social choice theory. The results presented are valid in social choice, but we have endeavoured to make them as relevant as possible to multicriteria decision aid.
In Chapter II, after some definitions and notation, we present several classical ranking methods, including the Borda method, and some major results from the literature. We generalize a characterization of scoring methods due to Myerson (1995).
We then turn to valued relations, for the following reason: they have long been used in several multicriteria methods and, more recently, also in social choice (e.g. Banerjee 1994), because they allow a finer modelling of the preferences of decision-makers facing uncertain, imprecise, contradictory or incomplete information. We therefore begin Chapter III with notation and definitions concerning valued relations.
Next, we present some ranking methods that operate on valued relations, that is, ranking methods that act not on crisp relations but on valued ones and that, as before, yield a ranking of the alternatives. Having found no method of this type in the literature, all those we present are new or are generalizations of existing methods, such as the generalized scoring methods, which we characterize by generalizing Myerson's result once more.
Finally, we present what we call the generalized Borda method, one of the possible generalizations of the Borda method to the valued case. Building on a paper by Farkas and Nitzan (1979), we show that, contrary to what happens in the particular case they considered (aggregation of linear orders), the generalized Borda method (and its specialization to the crisp case) is not always equivalent to the proximity-to-unanimity method. The latter ranks each alternative according to the magnitude of the changes that would have to be made to a set of relations for the alternative under consideration to win unanimously. We identify some cases in which the equivalence holds.
We then revisit a result of Debord (1987): a characterization of the Borda method as a choice method applied to complete preorders. We generalize it in two ways to the Borda method as a ranking method applied to valued relations. Applying the Borda method amounts to computing a real-valued function on the set of alternatives.
The value taken by this function for an alternative is called the Borda score of that alternative. The alternatives are then ranked in decreasing order of their Borda scores. The temptation is great, and many yield to it (perhaps rightly), to use the Borda score not only to compute the ranking but also to assess whether the gap between two alternatives is large or not (see e.g. Brans 1994). To our knowledge, this approach has never been studied from a theoretical point of view. We present two characterizations of the Borda method used for this purpose.
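As a concrete illustration (not taken from the thesis itself), the classic Borda score of an alternative is the number of alternatives ranked below it, summed over all voters' orders, and one natural generalization to a valued relation simply sums the relation's values. A minimal Python sketch with hypothetical data; the valued-case scoring rule shown is one plausible choice among several:

```python
def borda_scores(profile, alternatives):
    """Classic Borda: in every voter's linear order, an alternative
    earns one point per alternative ranked below it."""
    scores = {a: 0 for a in alternatives}
    for order in profile:  # order: best-to-worst list of alternatives
        n = len(order)
        for rank, a in enumerate(order):
            scores[a] += n - 1 - rank
    return scores

def generalized_borda_scores(valued_relation, alternatives):
    """One possible generalization to a valued relation R, where
    R[a][b] in [0, 1] measures how strongly a is preferred to b:
    score(a) = sum over b != a of R[a][b]."""
    return {a: sum(valued_relation[a][b] for b in alternatives if b != a)
            for a in alternatives}

# Hypothetical example: three voters ranking alternatives x, y, z.
profile = [["x", "y", "z"], ["y", "x", "z"], ["x", "z", "y"]]
scores = borda_scores(profile, ["x", "y", "z"])
ranking = sorted(scores, key=scores.get, reverse=True)  # decreasing score

# Hypothetical valued relation on the same alternatives.
valued = {"x": {"y": 0.8, "z": 0.6},
          "y": {"x": 0.2, "z": 0.7},
          "z": {"x": 0.4, "y": 0.3}}
gscores = generalized_borda_scores(valued, ["x", "y", "z"])
```

Note that ranking by decreasing score is exactly the step discussed above; interpreting the score *differences* as gaps between alternatives is the further, stronger use of the scores that the thesis characterizes.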
In the last part of Chapter III, we abandon the approach of characterizing a method by a smallest possible set of properties. Instead, we compare 12 methods on the basis of some twenty properties. The results of this part are summarized in a few tables.
This work thus approaches the Borda method and its generalization to the valued case from several angles. It delivers a series of results which, we hope, should allow a better understanding of the Borda method and perhaps a more judicious use of it. However, although our objective was to present results relevant to multicriteria decision aid (and we have made progress in that direction), there is still some way to go: we are probably still too close to social choice. This constitutes an interesting avenue of research, as does the study of other ranking methods and of complete multicriteria decision-aid methods: modelling of the problem (identification of the decision-maker(s), the alternatives and the criteria), preference modelling, preference aggregation, and exploitation of the aggregation results.
Doctorate in applied sciences
info:eu-repo/semantics/nonPublished
Hatanaka, Roxanne. "Rastreamento de variantes de significado desconhecido (VUS) no gene RET em indivíduos-controle e em pacientes com carcinoma medular de tireoide." Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/5/5135/tde-24022016-111259/.
Full text
Introduction: Multiple endocrine neoplasia type 2 (MEN-2) is a tumor syndrome with autosomal dominant inheritance, whose associated tumors are medullary thyroid carcinoma (MTC), pheochromocytoma (PHEO) and primary hyperparathyroidism (HPT). The syndrome is caused by activating mutations in the RET proto-oncogene, which lead to constitutive activation of tyrosine kinase signaling pathways that deregulate the cell cycle. According to the 2001 and 2009 International Consensus guidelines on MTC/MEN-2, RET mutation carriers, including asymptomatic individuals, should undergo prophylactic total thyroidectomy (TT), which increases the chance of curing the disease. Clinical screening is not recommended in patients who carry only isolated polymorphisms (non-pathogenic variants). However, some individuals carry genetic variants of unknown clinical significance (VUS), raising doubt about the best clinical management. Currently, there is no consistent knowledge of whether these variants are involved in an increased risk of MTC. The present project addresses several aspects of these VUS, such as allele frequency, in silico pathogenicity prediction, published data and public databases, in order to increase our knowledge about VUS and thereby contribute to appropriate clinical management of VUS carriers. Objective: To expand knowledge of the pathogenic potential of some VUS of the RET gene, focusing especially on the controversial genetic variant p.Y791F. Methods: We screened the hotspot exons of the RET gene in DNA samples from 2,061 healthy adult/elderly individuals and from patients with MTC, using Sanger sequencing and Next Generation Sequencing (NGS). Pathogenicity predictions for the studied variants were generated with six software packages. The allele frequency of RET VUS was assessed in different public databases.
Results: Genetic screening of the control samples identified the p.Y791N, p.Y791F and p.E511K germline variants. Patients with MTC carrying the p.V648I and p.K666N germline variants were located, and their family members were screened and clinically investigated. In addition, a new case of pheochromocytoma was found to carry the p.Y791F germline variant. The in silico analyses showed that 4 of the 6 packages were more informative, suggesting physico-chemical structure alterations caused by 25 of the 48 RET VUS. Very low allele frequencies were found in the public databases, which include healthy individuals and tumor samples. In vitro studies have been performed for only 15 of the 48 RET VUS. Conclusion: Our data strongly suggest that the p.Y791F variant, when occurring in isolation, is a benign polymorphism not associated with an increased risk of MTC. Conversely, its co-occurrence with bona fide RET mutations such as C634Y may modulate the phenotype, e.g. by increasing the frequency of large and bilateral pheochromocytomas in MEN2A families. Family members carrying the isolated p.V648I variant have been followed clinically for approximately 15 years; as no development of MTC, pheochromocytoma or hyperparathyroidism has been documented, we conclude that this variant is a rare benign RET polymorphism. More information is needed for a better characterization of other VUS such as E511K, K666N and Y791N; carriers of these variants should therefore undergo periodic clinical follow-up.
Moreno, Betancur Margarita. "Regression modeling with missing outcomes : competing risks and longitudinal data." Thesis, Paris 11, 2013. http://www.theses.fr/2013PA11T076/document.
Full text
Missing data are a common occurrence in medical studies. In regression modeling, missing outcomes limit our capability to draw inferences about the covariate effects of medical interest, which are those describing the distribution of the entire set of planned outcomes. Beyond the loss of precision, the validity of any method used to draw inferences from the observed data requires that some assumption about the mechanism leading to missing outcomes holds. Rubin (1976, Biometrika, 63:581-592) called the missingness mechanism MAR (for "missing at random") if the probability of an outcome being missing does not depend on missing outcomes when conditioning on the observed data, and MNAR (for "missing not at random") otherwise. This distinction has important implications for the modeling requirements needed to draw valid inferences from the available data, but it is generally not possible to assess from these data whether the missingness mechanism is MAR or MNAR. Hence, sensitivity analyses should be routinely performed to assess the robustness of inferences to assumptions about the missingness mechanism. In the field of incomplete multivariate data, in which the outcomes are gathered in a vector for which some components may be missing, MAR methods are widely available and increasingly used, and several MNAR modeling strategies have also been proposed. On the other hand, although some sensitivity analysis methodology has been developed, this is still an active area of research. The first aim of this dissertation was to develop a sensitivity analysis approach for continuous longitudinal data with drop-outs, that is, continuous outcomes that are ordered in time and completely observed for each individual up to a certain time-point, at which the individual drops out so that all subsequent outcomes are missing.
The proposed approach consists in assessing the inferences obtained across a family of MNAR pattern-mixture models indexed by a so-called sensitivity parameter that quantifies the departure from MAR. The approach was prompted by a randomized clinical trial investigating the benefits of a treatment for sleep-maintenance insomnia, in which 22% of the individuals had dropped out before the study end. The second aim was to build on the existing theory for incomplete multivariate data to develop methods for competing risks data with missing causes of failure. The competing risks model is an extension of the standard survival analysis model in which failures from different causes are distinguished. Strategies for modeling competing risks functionals, such as the cause-specific hazards (CSH) and the cumulative incidence function (CIF), generally assume that the cause of failure is known for all patients, but this is not always the case. Some methods for regression with missing causes under the MAR assumption have already been proposed, especially for semi-parametric modeling of the CSH, but other useful models have received little attention, and MNAR modeling and sensitivity analysis approaches have never been considered in this setting. We propose a general framework for semi-parametric regression modeling of the CIF under MAR using inverse probability weighting and multiple imputation ideas. Also under MAR, we propose a direct likelihood approach for parametric regression modeling of the CSH and the CIF. Furthermore, we consider MNAR pattern-mixture models in the context of sensitivity analyses. In the competing risks literature, a starting point for methodological developments for handling missing causes was a stage II breast cancer randomized clinical trial in which 23% of the deceased women had a missing cause of death. We use these data to illustrate the practical value of the proposed approaches.
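The pattern-mixture sensitivity analysis idea described above can be illustrated in a deliberately simplified form: under MAR, dropouts are imputed from the completers' distribution; a sensitivity parameter delta then shifts the imputed values away from MAR, and the inference is recomputed over a grid of delta values. The data and the single-mean imputation step below are hypothetical simplifications for illustration, not the dissertation's actual models:

```python
import statistics

def delta_adjusted_mean(observed, n_missing, delta):
    """Pattern-mixture sketch: impute each missing outcome as the
    completers' mean shifted by delta (delta = 0 recovers the MAR
    analysis), then return the overall outcome mean."""
    mar_imputation = statistics.mean(observed)
    imputed = [mar_imputation + delta] * n_missing
    return statistics.mean(observed + imputed)

# Hypothetical trial outcomes: 8 completers, 2 dropouts.
observed = [5.1, 4.8, 6.0, 5.5, 4.9, 5.2, 5.8, 5.3]

# Grid of sensitivity parameters quantifying the departure from MAR;
# the inference is robust if conclusions hold across the grid.
estimates = {delta: delta_adjusted_mean(observed, n_missing=2, delta=delta)
             for delta in (-1.0, 0.0, 1.0)}
```

In practice the imputation model would be a regression conditioned on the observed history rather than a plain mean, and the delta-adjusted estimates would feed standard-error computations, but the indexing of MNAR departures by a single interpretable parameter is the same.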
Langelier, Guillaume. "Intégration de la visualisation à multiples vues pour le développement du logiciel." Thèse, 2010. http://hdl.handle.net/1866/5011.
Full text
Nowadays, software development increasingly has to deal with huge, complex programs built and maintained by large teams working in different locations. During their daily tasks, developers may have to answer varied questions using information coming from different sources. In order to improve overall performance during software development, we propose to integrate into a popular integrated development environment (Eclipse) our visualization tool (VERSO), which computes, organizes and displays information, and allows navigating through it, in a coherent, effective and intuitive way, so as to leverage the human visual system when exploring complex data. We propose to structure information along three axes: (1) context (quality, version control, etc.) determines the type of information; (2) granularity level (code line, method, class, and package) determines the appropriate level of detail; and (3) evolution extracts information from the desired software version. Each software view corresponds to a discrete coordinate along these three axes. Coherence is maintained by navigating only between adjacent views, which reduces the cognitive effort users spend searching for information to answer their questions. Two experiments involving representative tasks validated the utility of our integrated approach. The results lead us to believe that access to varied information represented graphically and coherently should be highly beneficial to the development of modern software.
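The three-axis organization described in this abstract can be sketched as a discrete coordinate system in which navigation is restricted to adjacent views (one step along one axis). The axis values and function names below are illustrative guesses for the sake of the sketch, not VERSO's actual API:

```python
# Each view is a discrete coordinate (context, granularity, version).
CONTEXTS = ["quality", "version-control"]
GRANULARITIES = ["line", "method", "class", "package"]
VERSIONS = [1, 2, 3]

AXES = (CONTEXTS, GRANULARITIES, VERSIONS)

def is_adjacent(view_a, view_b):
    """Two views are adjacent iff they differ by exactly one step
    along exactly one axis."""
    steps = [abs(axis.index(a) - axis.index(b))
             for axis, a, b in zip(AXES, view_a, view_b)]
    return sorted(steps) == [0, 0, 1]

def navigate(view, axis_index, direction):
    """Move one step (+1/-1) along a single axis; None at an edge."""
    axis = AXES[axis_index]
    i = axis.index(view[axis_index]) + direction
    if not 0 <= i < len(axis):
        return None
    new_view = list(view)
    new_view[axis_index] = axis[i]
    return tuple(new_view)

start = ("quality", "class", 2)
coarser = navigate(start, 1, 1)  # one step up the granularity axis
```

Restricting moves to adjacent coordinates is what keeps successive views visually similar, which is the stated mechanism for reducing cognitive effort during exploration.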