Theses on the topic "DOM (Document Object Model)"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the 33 best theses for your research on the topic "DOM (Document Object Model)".
Next to each source in the list of references there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the scholarly publication as a PDF and read its abstract online whenever it is available in the metadata.
Explore theses on a wide variety of disciplines and organize your bibliography correctly.
Kocman, Radim. "Podpora dynamického DOM v zobrazovacím stroji HTML". Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2014. http://www.nusl.cz/ntk/nusl-236139.
Šušlík, Martin. "Knihovna pro zpracování dokumentů RTF". Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2010. http://www.nusl.cz/ntk/nusl-237231.
T, Kovács Gregor. "Prostředky pro implementaci rozložení webových stránek v JavaScriptu". Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2008. http://www.nusl.cz/ntk/nusl-412528.
Ali, Imtiaz. "Object Detection in Dynamic Background". Thesis, Lyon 2, 2012. http://www.theses.fr/2012LYO20008/document.
Moving object detection is one of the main challenges in many video monitoring applications. In this thesis, we address the difficult problem of object segmentation when the background moves permanently. Such situations occur when the background contains water flow, smoke or flames, snowfall, rainfall, etc. Object detection against a moving background has not been studied much in the literature so far. Video backgrounds studied in the literature are often composed of static scenes or contain only a small portion of moving regions (for example, fluttering leaves or brightness changes). The main difficulty in such situations is to differentiate the movements of the objects from the movements of the background, which may be almost similar. For example, an object in a river moves at the same speed as the water. Therefore, motion-based techniques from the literature, relying on displacement vectors in the scene, may fail to discriminate objects from the background, thus generating many false detections. In this complex context, we propose several solutions for object detection. Object segmentation can be based on different criteria, including color, texture, shape and motion, and we propose various methods taking into account one or more of these criteria. We first work in the specific context of wood detection in rivers, as part of the DADEC project (Détection Automatique de Débris pour l'Aide à l'Etude des Crues) in collaboration with geographers. We propose two approaches for wood detection: a naïve method and a probabilistic image model. The naïve approach makes binary decisions based on object color and motion, whereas the probabilistic image model uses the wood intensity distribution together with pixel motion. These detection methods are used for tracking and counting pieces of wood in rivers. Secondly, we consider a context in which a priori knowledge about object motion is assumed to be available.
Hence, we propose to model this knowledge and incorporate it into the detection process. We show that combining this prior motion knowledge with a classical background model improves the object detection rate. Finally, drawing inspiration from methods used for 2D texture representation, we propose to model moving backgrounds using a frequency-based approach. More precisely, the model takes into account not only the spatial neighborhoods of pixels but also their temporal neighborhoods. We apply a local Fourier transform to the obtained regions in order to extract spatiotemporal color patterns. We apply our methods to multiple videos, including river videos from the DADEC project, image sequences from the DynTex video database, several synthetic videos and some videos of our own. We compare our object detection results with existing methods on real and synthetic videos, both quantitatively and qualitatively.
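The "classical background model" this abstract combines with prior motion knowledge can be illustrated with a minimal sketch: an exponential running-average background plus per-pixel thresholding. This is a generic textbook illustration in NumPy, not the thesis's actual method, and all values below are made up:

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Exponential running-average background model."""
    return (1 - alpha) * bg + alpha * frame

def detect_foreground(bg, frame, threshold=25.0):
    """Mark pixels whose deviation from the background exceeds a threshold."""
    return np.abs(frame - bg) > threshold

# Synthetic example: a flat background with sensor noise and one bright "object".
rng = np.random.default_rng(0)
bg = np.full((8, 8), 100.0)
frame = bg + rng.normal(0.0, 2.0, size=bg.shape)  # background + noise
frame[2:4, 2:4] = 200.0                            # object region

mask = detect_foreground(bg, frame)   # True only inside the object region
bg = update_background(bg, frame)     # background slowly absorbs the new frame
```

A model this simple is exactly what fails on permanently moving backgrounds (water, smoke, snow), which is the motivation for the frequency-based spatiotemporal model described in the abstract.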
Selim, Hossam Abdelatif Mohamed. "A novel secure autonomous generalized document model using object oriented technique". Thesis, University of Kent, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.269141.
Eklund, Henrik. "OBJECT DETECTION: MODEL COMPARISON ON AUTOMATED DOCUMENT CONTENT INTERPRETATION - A performance evaluation of common object detection models". Thesis, Umeå universitet, Institutionen för datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-164767.
Manfredi, Guido. "Learning objects model and context for recognition and localisation". Thesis, Toulouse 3, 2015. http://www.theses.fr/2015TOU30386/document.
This thesis addresses the modeling, recognition, localization and use of context for object manipulation by a robot. We start by presenting the modeling process and its components: the real system, the sensors' data, the properties to reproduce and the model. We show how, by specifying each of them, one can define a modeling process adapted to the problem at hand, namely object manipulation by a robot. This analysis leads us to adopt local textured descriptors for object modeling. Modeling with local textured descriptors is not a new concept; it is the subject of many Structure from Motion (SfM) and Simultaneous Localization and Mapping (SLAM) works. Existing methods include Bundler, the RoboEarth modeler and 123D Catch. Still, no method has gained widespread adoption. By implementing a similar approach, we show that they are hard to use even for expert users and produce highly complex models. Such complex techniques are necessary to guarantee the robustness of the model to viewpoint changes. There are two ways to handle the problem: the multiple-views paradigm and the robust-features paradigm. The multiple-views paradigm advocates using a large number of views of the object; the robust-features paradigm relies on features able to resist large viewpoint changes. We present a set of experiments to provide insight into the right balance between the two. By varying the number of views and using different features, we show that small and fast models can provide robustness to viewpoint changes up to bounded blind spots, which can be handled by robotic means. We propose four different methods to build simple models from images only, with as little a priori information as possible. The first applies to planar or piecewise-planar objects and relies on homographies for localization. The second is applicable to objects with simple geometry, such as cylinders or spheres, but requires many measures on the object.
The third method requires a calibrated 3D sensor but no additional information. The fourth technique needs no a priori information at all. We apply this last method to autonomous modeling of grocery objects: from images automatically retrieved from a grocery store website, we build a model which allows recognition and localization for tracking. Even with light models, real situations call for numerous object models to be stored and processed. This poses two problems: complexity, i.e. processing multiple models quickly, and ambiguity, i.e. distinguishing similar objects. We propose to solve both by using contextual information, that is, any information helping recognition which is not directly provided by the sensors. We focus on two contextual cues: the place and the surrounding objects. Some objects are mainly found in particular places; by knowing the current place, one can restrict the number of possible identities for a given object. We propose a method to autonomously explore a previously labeled environment and establish a correspondence between objects and places. This information can then be used in a cascade combining simple visual descriptors and context. This experiment shows that, for some objects, recognition can be achieved with as few as two simple features and the location as context. The objects surrounding a given object can also serve as context: objects like a keyboard, a mouse and a monitor are often close together. We use qualitative spatial descriptors to describe the position of objects with respect to their neighbors and, using a Markov Logic Network, we learn patterns in object disposition. This information can then be used to recognize an object when the surrounding objects are already identified. This thesis stresses the good match between robotics, context and object recognition.
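The first modeling method above localizes planar objects through homographies. As a hedged illustration of that ingredient (standard Direct Linear Transform, not the thesis's code; the point correspondences are synthetic), a homography can be fitted to four matched points:

```python
import numpy as np

def estimate_homography(src, dst):
    """DLT: fit H (3x3) such that dst ~ H @ src, from >= 4 correspondences."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The null vector of A (last right-singular vector) is the flattened H.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pts):
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

# A known transform (scale 2 plus a translation) recovered from 4 points.
src = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
dst = src * 2 + np.array([3., 5.])
H = estimate_homography(src, dst)
```

In a real pipeline the correspondences would come from matching local descriptors between the model image and the query image, with RANSAC discarding outliers before the fit.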
Tuong, Frédéric. "Constructing Semantically Sound Object-Logics for UML/OCL Based Domain-Specific Languages". Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLS085/document.
Object-based and object-oriented specification languages (like UML/OCL, JML, Spec#, or Eiffel) allow for the creation and destruction, casting and dynamic-type testing of statically typed objects. On this basis, class invariants and operation contracts can be expressed; the latter represent the key elements of object-oriented specifications. A formal semantics of object-oriented data structures is complex: imprecise descriptions often lead to different interpretations in the resulting tools. In this thesis we demonstrate how to turn a modern proof environment into a meta-tool for the definition and analysis of the formal semantics of object-oriented specification languages. Given a representation of a particular language embedded in Isabelle/HOL, we build for this language an extended Isabelle environment by using a particular method of code generation, which actually involves several variants of code generation. The result supports asynchronous editing, type-checking, and formal deduction activities, all "inherited" from Isabelle. Following this method, we obtain an object-oriented modelling tool for textual UML/OCL. We also integrate certain idioms not necessarily present in UML/OCL; in other words, we develop support for domain-specific dialects of UML/OCL. As a meta construction, we define a meta-model of a part of UML/OCL in HOL, a meta-model of a part of the Isabelle API in HOL, and a translation function between both in HOL. The meta-tool then exploits two kinds of code generation to produce either fairly efficient code or fairly readable code. This provides two animation modes for inspecting the semantics of an embedded language in more detail: loading its semantics at native speed, or delaying the experiment to another "meta" level for a further type-checking pass in Isabelle, be it for performance, testing or prototyping reasons.
Note that generating "fairly efficient code" and "fairly readable code" includes the generation of tactic code that proves a collection of theorems forming an object-oriented datatype theory from a denotational model: given a UML/OCL class model, the proofs of the relevant properties of casts, type-tests, constructors and selectors are processed automatically. This functionality is similar to the datatype theory packages in other provers of the HOL family, except that certain considerations led the present work to program high-level tactics in HOL itself. This work takes into account the most recent developments of the UML/OCL 2.5 standard. Accordingly, all UML/OCL types, including the logic types, distinguish two different exception elements: invalid (exception) and null (non-existing element). This has far-reaching consequences for both the logical and algebraic properties of the object-oriented data structures resulting from class models. Since our construction is reduced to a sequence of conservative theory extensions, the approach can guarantee logical soundness for the entire considered language, and provides a methodology for soundly extending domain-specific languages.
Almuhisen, Feda. "Leveraging formal concept analysis and pattern mining for moving object trajectory analysis". Thesis, Aix-Marseille, 2018. http://www.theses.fr/2018AIXM0738/document.
This dissertation presents a trajectory analysis framework that includes both a preprocessing phase and a trajectory mining process, and offers visual functions reflecting the evolution behavior of trajectory patterns. The originality of the mining process is to leverage frequent and emergent pattern mining and formal concept analysis for moving-object trajectories. These methods detect and characterize time-bound pattern evolution behaviors in trajectory data. Three contributions are proposed: (1) a method for analyzing trajectories based on frequent formal concepts is used to detect different evolutions of trajectory patterns over time. These behaviors are "latent", "emerging", "decreasing", "lost" and "jumping", and they characterize the dynamics of mobility related to urban spaces and time. The detected behaviors are automatically visualized on generated maps with different spatio-temporal levels to refine the analysis of mobility in a given area of the city; (2) a second trajectory analysis framework, based on sequential concept lattice extraction, is proposed to exploit the movement direction in the evolution detection process; and (3) a prediction method based on Markov chains is presented to predict the evolution behavior of a region in a future period. These three methods are evaluated on two real-world datasets, and the experimental results show the relevance of the proposal and the utility of the generated maps.
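A first-order Markov chain of the kind used for prediction in contribution (3) can be sketched as follows. The behavior labels come from the abstract, but the observed history and the estimation code are illustrative assumptions, not data or code from the dissertation:

```python
import numpy as np

states = ["latent", "emerging", "decreasing", "lost", "jumping"]

def transition_matrix(sequence, states):
    """Estimate row-stochastic transition probabilities from a label sequence."""
    idx = {s: i for i, s in enumerate(states)}
    counts = np.zeros((len(states), len(states)))
    for a, b in zip(sequence, sequence[1:]):
        counts[idx[a], idx[b]] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    # Rows with no observed transitions stay all-zero instead of dividing by 0.
    return np.divide(counts, row_sums, out=np.zeros_like(counts),
                     where=row_sums > 0)

def predict_next(current, T, states):
    """Most likely next behavior for a region, given its current behavior."""
    return states[int(np.argmax(T[states.index(current)]))]

# Hypothetical behavior history of one region over six periods.
history = ["latent", "emerging", "emerging", "decreasing", "latent", "emerging"]
T = transition_matrix(history, states)
```

With this toy history, every observed transition out of "latent" goes to "emerging", so the chain predicts "emerging" as the region's next behavior.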
Saillenfest, Melaine. "Théories séculaires et dynamique orbitale au-delà de Neptune". Thesis, Paris Sciences et Lettres (ComUE), 2017. http://www.theses.fr/2017PSLEO006/document.
The dynamical structure of the transneptunian region is still far from fully understood, especially concerning high-perihelion objects. In that region, the orbital perturbations are very weak, both from inside (the planets) and from outside (passing stars and galactic tides). However, numerous objects have very eccentric orbits, which indicates that they did not form in their current orbital state. Furthermore, some intriguing clusters in the distribution of their orbital elements have attracted the attention of the scientific community, leading to numerous conjectures about the origin and evolution of the external Solar System. Before turning to "exotic" theories, an exhaustive survey has to be conducted of the different mechanisms that could produce the observed trajectories involving only what we take for granted about Solar System dynamics, that is, the orbital perturbations by the known planets and/or by galactic tides. However, we cannot rely only on numerical integrations to efficiently explore the space of possible behaviours. In that context, we aim at developing a general picture of the dynamics between Neptune and the Oort Cloud, including the most extreme (even if improbable) orbits. The orbits entirely exterior to the planetary region can be divided into two broad classes: on the one hand, objects undergoing a diffusion of semi-major axis (which prevents large variations of the perihelion distance); on the other hand, objects presenting an integrable (or quasi-integrable) dynamics on a short time-scale. The dynamics of the latter can be described by secular models. There are two kinds of regular orbits: non-resonant ones (fixed semi-major axis) and those trapped in a mean-motion resonance with a planet (oscillating semi-major axis). The major part of this Ph.D. work is focussed on the development of secular models for transneptunian objects, both in the non-resonant and resonant cases.
One-degree-of-freedom systems can be obtained, which makes it possible to represent any trajectory as a level curve of the Hamiltonian. Such a formalism is quite efficient for exploring the parameter space. It reveals pathways to high perihelion distances, as well as "trapping mechanisms" able to maintain objects on very distant orbits for billions of years. The application of the resonant secular model to the known objects is also very informative, since it shows graphically which observed orbits require a complex scenario (such as planetary migration or an external perturber), and which ones can be explained by the influence of the known planets alone. In this last case, the dynamical history of the small bodies can be tracked back to the resonance capture. The last part of this work is devoted to extending the non-resonant secular model to the case of an external massive perturber. If this perturber has a substantial eccentricity and/or inclination, it introduces one or two more degrees of freedom into the system, so the secular dynamics is in general non-integrable. In that case, the analysis can be carried out with Poincaré sections, which distinguish the chaotic regions of the phase space from the regular ones. For increasing semi-major axes, the chaos spreads very fast. The most persistent structures are secular resonances producing trajectories aligned or anti-aligned with the orbit of the distant planet.
Li, Zongcheng. "Conceptual design of shapes by reusing existing heterogeneous shape data through a multi-layered shape description model and for VR applications". Thesis, Paris, ENSAM, 2015. http://www.theses.fr/2015ENAM0025/document.
Thanks to great advances in acquisition devices and modeling tools, a huge amount of digital data (e.g. images, videos, 3D models) is now becoming available in various application domains. In particular, virtual environments make use of such digital data, allowing more attractive and more effective communication and simulation of real or not (yet) existing environments and objects. Despite these innovations, the design of application-oriented virtual environments still results from a long and tedious iterative modeling and modification process involving several actors (e.g. experts of the domain, 3D modelers and VR programmers, designers or communications/marketing experts). Depending on the targeted application, the number and profiles of the involved actors may change. Today's limitations and difficulties are mainly due to the fact that no strong relationships exist between the domain experts with creative ideas, the digitally skilled actors, and the tools and shape models taking part in the virtual environment development process. Existing tools mainly focus on the detailed geometric definition of shapes and are not suitable to effectively support creativity and innovation, which are considered key elements for successful products and applications. In addition, the huge amount of available digital data is not fully exploited. Clearly, these data could be used as a source of inspiration for new solutions, since innovative ideas frequently come from the (unforeseen) combination of existing elements. Therefore, software tools allowing the reuse and combination of such digital data would effectively support the conceptual design phase of both single shapes and VR environments.
To answer these needs, this thesis proposes a new approach and system for the conceptual design of VRs and associated digital assets, taking existing shape resources and integrating and combining them together while keeping their semantic meaning. To support this, a Generic Shape Description Model (GSDM) is introduced. This model allows the combination of multimodal data (e.g. images and 3D meshes) according to three levels: conceptual, intermediate and data. The conceptual level expresses what the different parts of a shape are and how they are combined together. Each part of a shape is defined by an Element, which can be either a Component or a Group of Components when they share common characteristics (e.g. behavior, meaning). Elements are linked with Relations defined at the conceptual level, where the experts of the domain act and exchange. Each Component is then further described at the data level with its associated Geometry, Structure and potentially attached Semantics. In the proposed approach, a Component is a part of an image or a part of a 3D mesh. Four types of Relation are proposed (merging, assembly, shaping and location) and decomposed into a set of Constraints which control the relative position, orientation and scaling of the Components within the 3D viewer. Constraints are stored at the intermediate level and act on Key Entities (such as points, lines, etc.) lying on the Geometry or Structure of the Components. All these constraints are finally solved while minimizing an additional physically-based energy function. In the end, most of the concepts of GSDM have been implemented and integrated into a user-oriented conceptual design tool entirely developed by the author. Different examples have been created using this tool, demonstrating the potential of the approach proposed in this document.
Harmon, Trev R. "On-Line Electronic Document Collaboration and Annotation". Diss., Brigham Young University, 2006. http://contentdm.lib.byu.edu/ETD/image/etd1589.pdf.
Kjellström, Johan and Anton Drugge. "Kan val av JavaScript-ramverk påverka användarupplevelsen? : En jämförelse mellan Vue och Svelte". Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-86085.
Texto completoOuarrak, Bouazza. "Les misconceptions dans la microgenèse de l’objet technique". Thesis, Paris, CNAM, 2011. http://www.theses.fr/2011CNAM0756/document.
This thesis investigates the cognitive resources that engineering students mobilize in a PBL (Problem-Based Learning) task of designing a technical object. The problem situation with which these students are confronted is a novel technical refrigeration system that works without any external input of energy. In this learning situation, the students have to design the technical object and learn concepts of thermodynamics. Two groups of students are compared: the first has an analogical model of a known situation with which to approach the new one; the second has only the text. The research questions: what knowledge do these students construct? What do these two types of learning (learning from a known situation and learning from the text) contribute? What obstacles do these students encounter? The hypotheses: learning from a known situation leads to the construction of operational knowledge (concepts as tools); learning from the text leads to the construction of decontextualized knowledge (concepts as objects); learning from situations in a didactic device later leads to the construction of category-specific concepts. These two types of learning involve the epistemological obstacle in the construction of concepts in their two functions: tool and object.
von Wenckstern, Michael. "Web applications using the Google Web Toolkit". Master's thesis, Technische Universitaet Bergakademie Freiberg Universitaetsbibliothek "Georgius Agricola", 2013. http://nbn-resolving.de/urn:nbn:de:bsz:105-qucosa-115009.
This diploma thesis describes how to build desktop-like applications with the Google Web Toolkit and how to convert classic Java programs into them. The Google Web Toolkit is an open-source development environment that translates Java code into browser-independent and cross-device HTML and JavaScript. Most of the GWT framework is presented, including the Java-to-JavaScript compiler, as well as important security aspects of websites. To show that complex graphical user interfaces can also be created with the Google Web Toolkit, the well-known board game Agricola is implemented using the Model-View-Presenter design pattern. To determine the right technology for the next web project, the Google Web Toolkit is compared with JavaServer Faces.
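The Model-View-Presenter pattern mentioned above keeps the view passive and routes all logic through a presenter, which makes the UI testable without a browser. A minimal language-neutral sketch (names invented for illustration; the thesis implements the pattern in Java/GWT):

```python
class ScoreModel:
    """Holds application state, with no knowledge of any view."""
    def __init__(self):
        self.points = 0

    def add(self, n):
        self.points += n

class FakeView:
    """A passive view: it only displays what the presenter pushes to it."""
    def __init__(self):
        self.shown = None

    def show_score(self, text):
        self.shown = text

class ScorePresenter:
    """Mediates between model and view; the view never touches the model."""
    def __init__(self, model, view):
        self.model, self.view = model, view

    def on_points_scored(self, n):  # called by the view's event handlers
        self.model.add(n)
        self.view.show_score(f"Score: {self.model.points}")

presenter = ScorePresenter(ScoreModel(), FakeView())
presenter.on_points_scored(3)
```

Because the view is replaceable by a fake, presenter logic can be unit-tested headlessly, which is a common reason to choose MVP for GWT applications.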
Popela, Tomáš. "Implementace algoritmu pro vizuální segmentaci www stránek". Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2012. http://www.nusl.cz/ntk/nusl-236592.
Laščák, Tomáš. "Algoritmy pro segmentaci webových stránek". Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2016. http://www.nusl.cz/ntk/nusl-255360.
Texto completoRandrianarivo, Hicham. "Apprentissage statistique de classes sémantiques pour l'interprétation d'images aériennes". Thesis, Paris, CNAM, 2016. http://www.theses.fr/2016CNAM1117/document.
This work is about interpreting the content of very high resolution aerial optical panchromatic images. Two methods are proposed for the classification of this kind of images. The first method aims at detecting the instances of a class of objects; the other aims at segmenting superpixels extracted from the images using a contextual model of the relations between superpixels. The object detection method for very high resolution images uses a mixture of appearance models of a class of objects, then fuses the hypotheses returned by the models. We develop a method that clusters training samples into visual subcategories based on a two-stage procedure using metadata and visual information. The clustering part allows us to learn models that are specialized in recognizing a subset of the dataset and whose fusion leads to a generalization of the object detector. The performance of the method is evaluated on several datasets of very high resolution images, at several resolutions and in several places. The method proposed for contextual semantic segmentation uses a combination of the visual description of a superpixel extracted from the image and contextual information gathered between a superpixel and its neighbors. The contextual representation is based on a graph whose nodes are the superpixels and whose edges are the relations between two neighbors. Finally, we predict the category of a superpixel using the predictions made for its neighbors under the contextual model, in order to make the prediction more reliable. We test our method on a dataset of very high resolution images.
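The contextual step described above (a superpixel's category made more reliable by its neighbors' predictions) can be caricatured with a tiny neighborhood-graph vote. The graph, labels and majority rule below are invented for illustration and are far simpler than the model in the thesis:

```python
# Per-superpixel predictions from a hypothetical visual classifier.
labels = {0: "road", 1: "road", 2: "building", 3: "road", 4: "building"}
# Adjacency between superpixels (undirected edges).
edges = [(0, 1), (1, 2), (1, 3), (2, 4), (3, 4)]

def neighbors(node, edges):
    """All superpixels sharing an edge with `node`."""
    return [b if a == node else a for a, b in edges if node in (a, b)]

def contextual_relabel(node, labels, edges):
    """Replace a node's label by the majority label among its neighbors."""
    votes = [labels[n] for n in neighbors(node, edges)]
    return max(set(votes), key=votes.count)
```

A real contextual model would weight neighbor predictions by learned relation features rather than take a plain majority, but the data flow (graph in, smoothed labels out) is the same.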
Gastebois, Jérémy. "Contribution à la commande temps réel des robots marcheurs. Application aux stratégies d'évitement des chutes". Thesis, Poitiers, 2017. http://www.theses.fr/2017POIT2315/document.
Large walking robots are complex multi-joint mechanical systems that crystallize the human will to confer our capabilities on artefacts, one of these capabilities being bipedal locomotion and, more especially, keeping balance against external disturbances. This thesis proposes a balance stabilizer under operating conditions, demonstrated on the BIP 2000 locomotor system. This anthropomorphic robot has fifteen electrically actuated degrees of freedom and an industrial controller. New software has been developed with an object-oriented programming approach in order to provide the modularity required by the emulated, natural human symmetry. This consideration led to the development of a mathematical tool allowing the model of any serial robot to be computed as the composition of multiple sub-robots whose models are already known. The implemented software also enables the robot to run offline-generated dynamic walking trajectories and to test the balance stabilizer. We explore in this thesis the feasibility of controlling the center of gravity of a multibody robotic system through electrostatic fields acting on its virtual counterpart in order to guarantee its balance. Experimental results confirm the potential of the proposed approach.
Kooli-Chaabane, Hanen. "Le transfert de technologie vu comme une dynamique des compétences technologiques : application à des projets d'innovation basés sur des substitutions technologiques par le brasage métallique". Thesis, Vandoeuvre-les-Nancy, INPL, 2010. http://www.theses.fr/2010INPL075N/document.
Technology transfer is an innovation process that is far from being a simple transmitter/receiver relationship of knowledge; it is complex. The determinants of its success are still poorly understood, and its modeling remains to be studied for better management and optimization of the process. This thesis proposes a descriptive modeling of the technology transfer process. The aim is to gain a better understanding of the dynamics of technology transfer projects and to develop best practices to improve their management. In the theoretical field, we analyzed the models of the literature and proposed a meta-model of technology transfer from the point of view of systems engineering. We then sought to better understand the phenomena in situ. To reach this aim, an observation methodology for data collection at the micro level was developed. We followed five transfer projects for periods ranging from three months to two years. Two dimensions were emphasized: the immaterial dimension and the material dimension. The concept of Intermediate Transfer Object (ITO) is introduced, derived from the concept of design intermediary object. The data obtained were analyzed using two approaches: (1) a comparative descriptive approach, identifying invariants and divergent phenomena between the five processes, which allowed us to propose best practices for technology transfer project management in the context of brazing; and (2) a multicriteria approach based on rough set theory, which provides useful information for understanding the process through decision rules and validated the importance of the intermediate transfer object in the dynamics and success of a project.
Allam, Diana. "Loose coupling and substitution principle in objet-oriented frameworks for web services". Thesis, Nantes, Ecole des Mines, 2014. http://www.theses.fr/2014EMNA0115/document.
Today, the implementation of services (SOAP and RESTful models) and of client applications is increasingly based on object-oriented programming languages. Object-oriented frameworks for Web services are thus essentially composed of two levels: an object level built over a service level. In this context, two properties are particularly desirable in the specification of these frameworks: (i) loose coupling between the two levels, which allows the complex technical details of the service level to be hidden at the object level and the service level to evolve with minimal impact on the object level; and (ii) interoperability induced by the substitution principle associated with subtyping at the object level, which allows a value of a subtype to be freely used where a supertype is expected. In this thesis, we first present the existing weaknesses of object-oriented frameworks with respect to these two requirements. We then propose a new specification for object-oriented Web service frameworks in order to resolve these problems. As an application, we provide an implementation of our specification in the CXF framework, for both the SOAP and RESTful models.
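The substitution principle invoked in point (ii) can be shown in a few lines: a client coupled only to a supertype accepts any subtype unchanged. This is a generic illustration with hypothetical `Resource` classes, not the CXF-based implementation from the thesis:

```python
from abc import ABC, abstractmethod

class Resource(ABC):
    """Supertype the client code is written against."""
    @abstractmethod
    def render(self) -> str: ...

class JsonResource(Resource):
    def render(self) -> str:
        return '{"status": "ok"}'

class XmlResource(Resource):
    def render(self) -> str:
        return "<status>ok</status>"

def serve(resource: Resource) -> str:
    # Loose coupling: the client depends only on the Resource supertype,
    # so any subtype can be substituted without changing this code.
    return resource.render()
```

The weaknesses the thesis discusses arise precisely when a framework's service level breaks this expectation, e.g. when substituting a subtype changes the serialized form seen on the wire.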
Richa, Elie. "Qualification des générateurs de code source dans le domaine de l'avionique : le test automatisé des chaines de transformation de modèles". Thesis, Paris, ENST, 2015. http://www.theses.fr/2015ENST0082/document.
In the avionics industry, Automatic Code Generators (ACGs) are increasingly used to produce parts of embedded software. Since the generated code is part of critical software, safety standards require a thorough verification of the ACG, called qualification. In this thesis, carried out in collaboration with AdaCore, we seek to reduce the cost of testing activities through automatic and effective methods. The first part of the thesis addresses unit testing, which ensures exhaustiveness but is difficult to achieve for ACGs. We propose a method that guarantees the same level of exhaustiveness using only integration tests, which are easier to carry out. First, we propose a formalization, within the theory of Algebraic Graph Transformation, of the ATL language in which the ACG is defined. We then define a translation of postconditions expressing the exhaustiveness of unit testing into equivalent preconditions that ultimately support the production of integration tests providing the same level of exhaustiveness. Finally, we optimize the complex algorithm of our analysis using simplification strategies that we assess experimentally. The second part of the work addresses the oracles of ACG tests, i.e. the means of validating the code generated by the ACG during a test. We propose a language for the specification of textual constraints able to automatically check the validity of the generated code. This approach is experimentally deployed at AdaCore for a Simulink® to Ada/C ACG called QGen.
Hascoët, Nicolas. "Méthodes pour l'interprétation automatique d'images en milieu urbain". Thesis, Evry, Institut national des télécommunications, 2017. http://www.theses.fr/2017TELE0004/document.
Texto completo
This thesis presents a study of the automatic interpretation of urban images. We propose an application for the retrieval of landmarks in images representing complex scenes. The main issue is to differentiate the local information extracted from the key points of the desired building from all the points extracted in the entire image. An urban image is characterized by the public nature of the scene it depicts: the object to be identified is mixed in with various other objects that can interfere. We first present a state of the art of image recognition and retrieval methods based on local points of interest; databases that can be used during the experimental phases are presented in a second chapter. We retain the Bag of Words model applied to local SIFT descriptors. In a second part, we propose a local data classification approach based on the Support Vector Machine model. The interest of this approach is the low number of data samples required during the training phase of the models. Different training and classification strategies are also discussed. A third step adds a geometric correction to the classification obtained previously. We thus obtain a classification not only of the local information but also of the visual information, thereby enforcing geometric consistency of the points of interest. A final chapter presents the experimental results obtained, in particular on images of buildings in Paris and Oxford.
Huard, Benoît. "Contribution à la modélisation non-linéaire et à la commande d'un actionneur robotique intégré pour la manipulation". Thesis, Poitiers, 2013. http://www.theses.fr/2013POIT2262/document.
Texto completo
The realization of dexterous manipulation tasks requires complexity both in the design of robotic hands and in the synthesis of their control laws. A mechatronic optimization of these systems helps to meet functional integration constraints by avoiding external force sensors. Back-drivable mechanics allows both the free-space positioning of such a system and the detection of its interaction with a manipulated object, thanks to proprioceptive measures at the electric actuator level. The objective of this thesis is to synthesize a control law adapted to object manipulation that takes these mechanical properties into account in a one degree-of-freedom case. The proposed method is based on a control that is robust, on the one hand, to structural non-linearities due to gravitational effects and dry friction and, on the other hand, to the variable rigidity of manipulated objects. The chosen approach requires precise knowledge of the system configuration at all times. A dynamic representation of its behavior enables the synthesis of a software sensor for estimating the exteroceptive variables, for use in a control law. The different steps are validated experimentally in order to justify the chosen approach, leading to object manipulation.
Gavryliuk, Olga. "Nástroj pro správu dokumentů v managementu projektů". Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2019. http://www.nusl.cz/ntk/nusl-403822.
Texto completo
Fertier, Audrey. "Interprétation automatique de données hétérogènes pour la modélisation de situations collaboratives : application à la gestion de crise". Thesis, Ecole nationale des Mines d'Albi-Carmaux, 2018. http://www.theses.fr/2018EMAC0009/document.
Texto completo
The present work applies to the field of French crisis management, and specifically to the crisis response phase that follows a major event such as a flood or an industrial accident. In the aftermath of the event, crisis cells are activated to prevent and deal with the consequences of the crisis. They face many difficulties, under time pressure: the stakeholders are numerous, autonomous and heterogeneous, the coexistence of contingency plans favours contradictions, and the interconnection of networks promotes cascading effects. These observations arise as the volume of available data continues to grow, coming, for example, from sensors, social media or volunteers in the crisis theatre. This is an opportunity to design an information system able to collect the available data and interpret it into information suited to the crisis cells. To succeed, it has to manage the four Vs of Big Data: the volume, variety and veracity of data and information, while following the dynamics (velocity) of the current crisis. Our literature review on the different parts of this architecture enables us to define such an information system, able to (i) receive different types of events emitted by data sources both known and unknown, (ii) use interpretation rules deduced directly from official business rules and (iii) structure the information that will be used by the stakeholders. Its architecture is event-driven and coexists with the service-oriented architecture of the software developed by the CGI laboratory. The implemented system has been tested on the scenario of a 1/100-per-year flood elaborated by two French forecasting centres. The model of the current crisis situation deduced by the proposed information system can be used to (i) deduce a crisis response process, (ii) detect unexpected situations, and (iii) maintain a common operational picture (COP) suited to the decision-makers.
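The event-driven interpretation loop described in this abstract can be illustrated with a minimal sketch. All names and the single rule below are hypothetical, chosen only to show the pattern of events flowing through interpretation rules into a situation model; they are not the thesis's implementation:

```python
from dataclasses import dataclass

@dataclass
class Event:
    source: str   # e.g. "sensor", "social_media", "volunteer"
    kind: str     # e.g. "water_level", "report"
    value: object

def rule_flood_alert(event, situation):
    """Hypothetical interpretation rule: a high water level marks the source's area as flooded."""
    if event.kind == "water_level" and event.value > 5.0:
        situation["flooded_areas"].add(event.source)

RULES = [rule_flood_alert]

def interpret(events):
    """Apply every rule to every incoming event, updating the situation model."""
    situation = {"flooded_areas": set()}
    for event in events:
        for rule in RULES:
            rule(event, situation)
    return situation

model = interpret([
    Event("gauge_A", "water_level", 6.2),
    Event("gauge_B", "water_level", 2.1),
])
print(sorted(model["flooded_areas"]))  # ['gauge_A']
```

In the thesis's setting the rules are deduced from official business rules and the events arrive continuously, but the core loop (event in, rules applied, situation model updated) is the same.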
Tian, Yuandong. "Theory and Practice of Globally Optimal Deformation Estimation". Research Showcase @ CMU, 2013. http://repository.cmu.edu/dissertations/269.
Texto completo
Peša, Jan. "Prednasky.com - Systém jako modul". Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2012. http://www.nusl.cz/ntk/nusl-236459.
Texto completo
Wang, Wen-Ting y 王文廷. "Free-DOM: A Free-text Document Object Model". Thesis, 2006. http://ndltd.ncl.edu.tw/handle/84044559743261981202.
Texto completo
National Taiwan University
Graduate Institute of Computer Science and Information Engineering
ROC academic year 94
Most documents available over the World Wide Web are written in or transformed into HTML. However, HTML is a loosely structured language that mixes presentational style with content, so it is important to design ways to extract data from HTML documents. In this thesis we propose a method for this purpose, Free-DOM (a Free-text Document Object Model). Free-DOM is aimed at extracting data from HTML documents that share a similar presentational format. It uses regular expressions to capture the structure of the format it wants to extract, and the concept of the DOM (Document Object Model) to manipulate the extracted data. Free-DOM thus provides an extraction-and-manipulation language for free-text documents. It can be used as a library from programming languages (such as C++) to pre-process and manipulate documents, and it also works as a server-side scripting language for value-added applications over the World Wide Web. We show the effectiveness of our method with several examples.
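The combination the abstract describes (regular expressions to capture a repeated presentational format, then DOM-like manipulation of the captured data) can be sketched as follows. This is an illustrative reconstruction of the idea, not the thesis's Free-DOM language or API; the HTML fragment, regex, and function names are invented:

```python
import re

# A repeated presentational format, as on a listing page.
HTML = """
<li><b>Free-DOM</b> <i>2006</i></li>
<li><b>SEOM</b> <i>2003</i></li>
"""

# Named groups play the role of node names in the extracted document model.
RECORD = re.compile(r"<li><b>(?P<title>[^<]+)</b>\s*<i>(?P<year>[^<]+)</i></li>")

def extract(html):
    """Return one dict 'node' per record matched by the format regex."""
    return [m.groupdict() for m in RECORD.finditer(html)]

# The extracted records can now be traversed and manipulated like DOM nodes.
records = extract(HTML)
for node in records:
    print(node["title"], node["year"])
```

The key point is that the regex encodes the shared presentational structure once, and every document with that format yields the same tree of named fields.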
"The structured-element object model for XML". 2003. http://library.cuhk.edu.hk/record=b5891708.
Texto completo
Thesis submitted in: July 2002.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2003.
Includes bibliographical references (leaves 97-101).
Abstracts in English and Chinese.
ABSTRACT --- p.II
ACKNOWLEDGEMENTS --- p.VI
TABLE OF CONTENTS --- p.VII
LIST OF TABLES --- p.XI
LIST OF FIGURES --- p.XIII
Chapter CHAPTER 1. --- INTRODUCTION --- p.1
Chapter 1.1 --- Addressing and Manipulating XML Data --- p.1
Chapter 1.2 --- The Structured-Element Object Model (SEOM) --- p.3
Chapter 1.3 --- Related Research --- p.4
Chapter 1.4 --- Contribution --- p.5
Chapter 1.5 --- Thesis Overview --- p.6
Chapter CHAPTER 2. --- BACKGROUND TECHNOLOGIES --- p.7
Chapter 2.1 --- Overview of XML --- p.7
Chapter 2.1.1. --- XML Basic Syntax --- p.8
Chapter 2.1.2. --- Namespaces in XML --- p.8
Chapter 2.2 --- Overview of XML Schema --- p.9
Chapter 2.2.1. --- W3C XML Schema --- p.10
Chapter 2.2.2. --- Schema Alternatives --- p.13
Chapter 2.3 --- Overview of XPath --- p.13
Chapter 2.4 --- Overview of DOM --- p.15
Chapter CHAPTER 3. --- OVERVIEW OF STRUCTURED-ELEMENT OBJECT MODEL (SEOM) --- p.18
Chapter 3.1 --- Introduction --- p.18
Chapter 3.2 --- Objectives --- p.20
Chapter 3.3 --- General Concepts in SEOM --- p.21
Chapter 3.3.1. --- Data Representation --- p.21
Chapter 3.3.2. --- Data Binding --- p.24
Chapter 3.3.3. --- Data Access --- p.25
Chapter CHAPTER 4. --- SEOM DOCUMENT MODELING --- p.27
Chapter 4.1 --- Data Modeling --- p.27
Chapter 4.1.1. --- Simple XML Data Model --- p.28
Chapter 4.1.2. --- SEOM Data Model --- p.32
Chapter 4.2 --- Schema Modeling --- p.41
Chapter 4.2.1. --- SEOM Schema --- p.42
Chapter 4.2.2. --- Creating a Schema --- p.46
Chapter CHAPTER 5. --- SEOM DOCUMENT PROCESSING --- p.51
Chapter 5.1 --- SEOM Document Processing --- p.51
Chapter 5.2 --- The Classes --- p.51
Chapter 5.2.1. --- SEOM Document Class --- p.52
Chapter 5.2.2. --- Abstract SElement Class --- p.55
Chapter 5.2.3. --- Generic SElement Class --- p.56
Chapter 5.2.4. --- Implementation SElement Classes --- p.57
Chapter 5.3 --- XML Parsing and Data Binding --- p.59
Chapter 5.3.1. --- Parsing Process --- p.60
Chapter 5.4 --- Querying --- p.62
Chapter 5.4.1. --- Query Wrapper and Result Wrapper --- p.62
Chapter 5.4.2. --- Embedding in XPath --- p.68
Chapter CHAPTER 6. --- A WEB-BASED SEOM DOCUMENT QUERY SYSTEM --- p.71
Chapter 6.1 --- Web-based SEOM Document Query System --- p.71
Chapter 6.2 --- Client-Server Architecture --- p.71
Chapter 6.3 --- The Server --- p.74
Chapter 6.3.1. --- Data Loading --- p.74
Chapter 6.3.2. --- Implemented SElement - R-Tree --- p.74
Chapter 6.3.3. --- Network Interface --- p.80
Chapter 6.4 --- Client Side --- p.82
Chapter 6.4.1. --- The Interface --- p.82
Chapter 6.4.2. --- Programmatic Controls --- p.85
Chapter CHAPTER 7. --- EVALUATION --- p.88
Chapter 7.1 --- Experiment with Synthetic Data --- p.88
Chapter 7.2 --- Qualitative Comparison --- p.90
Chapter 7.3 --- Advantages --- p.91
Chapter 7.4 --- Disadvantages --- p.92
Chapter 7.5 --- Means of Enhancement --- p.93
Chapter CHAPTER 8. --- CONCLUSION --- p.94
BIBLIOGRAPHY --- p.97
Wu, Andrew y 吳剛志. "Design and Implementation of a Document Data-Model Using Object-Oriented Technology". Thesis, 1997. http://ndltd.ncl.edu.tw/handle/45879145679195895710.
Texto completo
National Central University
Institute of Computer Science and Information Engineering
ROC academic year 85
The goal of this thesis is to design a system in which developers and users need not be concerned with a specific document (data) file format. In our system, software and data file formats are independent: the system converts the data into the most suitable type.
Yu, Chung-da y 余宗達. "An Efficient Mining Algorithm of Informative Subtrees Based on Pairwise Comparison on Document Object Model in Systematic Web Pages". Thesis, 2007. http://ndltd.ncl.edu.tw/handle/45677585952226881320.
Texto completo
National Cheng Kung University
Department of Computer Science and Information Engineering
ROC academic year 95
With the Internet growing ever larger, structured Web pages consisting of contiguous HTML tags generated by server-side programming languages are commonly seen. These pages are called systematic pages; examples include the listing pages of auction web sites and the result pages of search engines. Such pages contain rich information and large amounts of data, so researchers have focused on mining informative data records from Web content. Because Web pages contain only semi-structured HTML data, Bing Liu proposed an algorithm, called MDR, to extract informative data records. By computing the similarity of subtrees in a page, this method finds similar data records in a contiguous DOM tree, and it can effectively find contiguous structured trees in a systematic web page; the extracted subtrees are considered the data regions of the page. However, MDR does not perform well on real Web pages: it must compute the similarities of all contiguous subtrees, and it spends a large amount of time computing edit distances to obtain the corresponding similarity values. This thesis therefore proposes an efficient mining algorithm that improves MDR by reducing redundant computation. Exploiting a property of systematic web pages, the proposed method compares the subtrees under the same parent node pairwise, which reduces the number of edit-distance computations while obtaining the same result as the previous method. We also propose a Top-k extraction algorithm to mine the top-k informative blocks. According to our experimental results, the algorithm attains a high precision rate among the selected k blocks and greatly reduces the computational cost of MDR.
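The core idea of comparing only sibling subtrees under the same parent can be sketched as below. This is a simplified stand-in, not the thesis's algorithm: it replaces tree edit distance with a cheap similarity over preorder tag sequences, and the toy DOM, threshold, and function names are invented for illustration:

```python
from difflib import SequenceMatcher

def tag_sequence(node):
    """Flatten a (tag, children) tuple tree into a preorder list of tags."""
    tag, children = node
    seq = [tag]
    for child in children:
        seq.extend(tag_sequence(child))
    return seq

def similarity(a, b):
    """Similarity in [0, 1] between two subtrees' tag sequences
    (a cheap stand-in for tree edit distance)."""
    return SequenceMatcher(None, tag_sequence(a), tag_sequence(b)).ratio()

def find_data_region(parent, threshold=0.8):
    """Compare only adjacent siblings under one parent; siblings whose
    structures are similar enough form a candidate data region."""
    _, children = parent
    picked = set()
    for i in range(len(children) - 1):
        if similarity(children[i], children[i + 1]) >= threshold:
            picked.update((i, i + 1))
    return [children[i] for i in sorted(picked)]

# Toy DOM: a listing whose <li> records share the same structure,
# plus one structurally different sibling that should be excluded.
li = ("li", [("b", []), ("i", [])])
odd = ("div", [("span", [])])
page = ("ul", [li, li, odd])
print(len(find_data_region(page)))  # 2
```

Restricting comparisons to siblings under one parent is what cuts the number of similarity computations relative to comparing every contiguous subtree combination, which is the efficiency argument the abstract makes.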
Fojtů, Andrea. "Strategie, návrh, řízení a administrace rozsáhlých digitálních knihoven a archivů". Doctoral thesis, 2014. http://www.nusl.cz/ntk/nusl-338112.
Texto completo