Academic literature on the topic "DOM (Document Object Model)"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles

Choose a source type:

Browse the thematic lists of articles, books, theses, conference papers, and other academic sources on the topic "DOM (Document Object Model)".

Next to each source in the list of references there is an "Add to bibliography" button. Click this button, and we will automatically generate the bibliographic reference for the chosen work in whichever citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "DOM (Document Object Model)"

1

Role, François, and Philippe Verdret. "Le Document Object Model (DOM)". Cahiers GUTenberg, no. 33-34 (1999): 155–71. http://dx.doi.org/10.5802/cg.265.

Full text
2

Wang, Yanlong, and Jinhua Liu. "Object-oriented Design based Comprehensive Experimental Development of Document Object Model". Advances in Engineering Technology Research 3, no. 1 (December 7, 2022): 390. http://dx.doi.org/10.56028/aetr.3.1.390.

Full text
Abstract
JavaScript code using the Document Object Model (DOM) can realize dynamic control of web pages, which is an important part of the web development technology course. The application of the DOM is very flexible and involves many knowledge points, so it is difficult for students to master. In order to help students understand each knowledge point and improve their engineering ability to solve practical problems, a comprehensive DOM experimental project, similar to a blind box, is designed and implemented. This experimental project integrates knowledge points such as DOM events, DOM operations, and communication between objects. Practice has proved that running and debugging the project helps students understand and master the relevant knowledge points.
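
To make the DOM events and DOM operations mentioned above concrete, here is a minimal TypeScript sketch (illustrative only, not taken from the paper; the element id and prize list are invented), assuming it runs in a browser:

// Illustrative sketch: DOM operations (create, insert, update nodes) plus a DOM click
// event, in the spirit of the "blind box" exercise described in the abstract.
function revealBoxes(containerId: string, prizes: string[]): void {
  const container = document.getElementById(containerId); // DOM operation: look up a node
  if (!container) return;
  prizes.forEach((prize, i) => {
    const box = document.createElement("button");          // DOM operation: create a node
    box.textContent = `Box ${i + 1}`;
    box.addEventListener("click", () => {                   // DOM event: react to a click
      box.textContent = prize;                              // DOM operation: update the node
      box.disabled = true;
    });
    container.appendChild(box);                             // DOM operation: insert the node
  });
}

revealBoxes("experiment-area", ["pen", "sticker", "nothing"]);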
3

Radilova, Martina, Patrik Kamencay, Robert Hudec, Miroslav Benco, and Roman Radil. "Tool for Parsing Important Data from Web Pages". Applied Sciences 12, no. 23 (November 24, 2022): 12031. http://dx.doi.org/10.3390/app122312031.

Full text
Abstract
This paper discusses a tool for extracting the main text and images (extracting and parsing the important data) from a web document. It describes our proposed algorithm, based on the Document Object Model (DOM), natural language processing (NLP) techniques, and other approaches, for extracting information from web pages using various classification techniques such as support vector machines, decision trees, naive Bayes, and K-nearest neighbors. The main aim of the developed algorithm was to identify and extract the main block of a web document, which contains the text of the article and the relevant images. The algorithm was applied to a sample of 45 web documents of different types. In addition, the handling of web pages, from the structure of the document to the use of the Document Object Model (DOM) for their processing, was analyzed. The Document Object Model was used to load and navigate the document, and it also plays an important role in the correct identification of the main block of web documents. The paper also discusses the levels of natural language; these methods of automatic natural language processing help to identify the main block of the web document. In this way, all textual parts and images from the main content of the web document were extracted. The experimental results show that our method achieved a final classification accuracy of 88.18%.
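
As a rough illustration of the DOM side of such a pipeline (this is not the authors' algorithm; the scoring heuristic below is an assumption), one can walk a parsed document and pick the block whose direct paragraphs carry the most text before any NLP or classification step:

// Simplified sketch: choose the candidate block with the largest amount of paragraph text.
function findMainBlock(doc: Document): Element | null {
  let best: Element | null = null;
  let bestScore = 0;
  doc.querySelectorAll("main, article, section, div").forEach((el) => {
    let score = 0;
    el.querySelectorAll(":scope > p").forEach((p) => {
      score += (p.textContent ?? "").trim().length;        // text length of direct <p> children
    });
    if (score > bestScore) {
      bestScore = score;
      best = el;
    }
  });
  return best;
}

const page = new DOMParser().parseFromString(
  "<article><p>Main article text goes here.</p></article><div><p>Ad</p></div>",
  "text/html"
);
console.log(findMainBlock(page)?.tagName);                  // "ARTICLE"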
4

Ahmad Sabri, Ily Amalina, and Mustafa Man. "Improving Performance of DOM in Semi-structured Data Extraction using WEIDJ Model". Indonesian Journal of Electrical Engineering and Computer Science 9, no. 3 (March 1, 2018): 752. http://dx.doi.org/10.11591/ijeecs.v9.i3.pp752-763.

Full text
Abstract
Web data extraction is the process of extracting the information a user requires from a web page. The information consists of semi-structured data rather than data in a structured format, and the extraction involves web documents in HTML format. Nowadays, most people use web data extractors because the volume of information involved makes manual extraction slow and complicated. We present in this paper the WEIDJ approach to extracting images from the web, whose goal is to harvest images as objects from template-based HTML pages. WEIDJ (Web Extraction of Images using DOM (Document Object Model) and JSON (JavaScript Object Notation)) applies DOM theory to build the structure and uses JSON as the programming environment. The extraction process takes as input both the web address and the extraction structure. WEIDJ then splits the DOM tree into small subtrees and applies a visual-block search algorithm to each web page to find images. Our approach focuses on three levels of extraction: a single web page, multiple web pages, and the whole website. Extensive experiments on several biodiversity web pages compare the time performance of image extraction using DOM, JSON, and WEIDJ for a single web page. The experimental results show that, with our model, WEIDJ image extraction can be done quickly and effectively.
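
As a minimal sketch of the general DOM-plus-JSON idea (not the WEIDJ algorithm itself; the sample markup is invented), image data can be collected from a parsed HTML DOM and emitted as JSON objects:

// Walk the DOM for <img> elements and serialise their attributes as JSON.
const gallery = new DOMParser().parseFromString(
  '<div class="gallery"><img src="a.jpg" alt="Bird"><img src="b.jpg" alt="Frog"></div>',
  "text/html"
);

const images = Array.from(gallery.querySelectorAll("img")).map((img) => ({
  src: img.getAttribute("src"),
  alt: img.getAttribute("alt") ?? "",
}));

console.log(JSON.stringify(images, null, 2)); // DOM gives the traversal, JSON the output format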
5

Sankari, S., and S. Bose. "Efficient Identification of Structural Relationships for XML Queries using Secure Labeling Schemes". International Journal of Intelligent Information Technologies 12, no. 4 (October 2016): 63–80. http://dx.doi.org/10.4018/ijiit.2016100104.

Full text
Abstract
XML has emerged as a de facto standard for data representation and information exchange over the World Wide Web. By utilizing the document object model (DOM), an XML document can be viewed as an XML DOM tree. The nodes of an XML tree are labeled, following a labeling scheme, so that every node is uniquely identified. This paper proposes a method to efficiently identify two structural relationships, namely document order (DO) and the sibling relationship, that exist between XML nodes, using two secure labeling schemes: enhanced Dewey coding (EDC) and secure Dewey coding (SDC). These structural relationships influence the performance of XML queries, so they need to be identified efficiently. This paper implements the method to identify DO and the sibling relationship using EDC and SDC labels for various real-time XML documents. Experimental results show that identification of DO and the sibling relationship using SDC labels performs better than with EDC labels for processing XML queries.
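
For readers unfamiliar with Dewey-style labels, the sketch below (plain Dewey labels only; the paper's EDC and SDC encodings are more elaborate and are not reproduced here) shows how document order and sibling relationships can be decided from labels alone once an XML DOM tree has been labelled:

type Label = number[];

// Assign a Dewey-style label to every element: the root is [1], its i-th child [1, i], and so on.
function labelTree(el: Element, prefix: Label = [1], out = new Map<Element, Label>()): Map<Element, Label> {
  out.set(el, prefix);
  Array.from(el.children).forEach((child, i) => labelTree(child, [...prefix, i + 1], out));
  return out;
}

// Document order: compare labels component by component; an ancestor precedes its descendants.
function precedes(a: Label, b: Label): boolean {
  for (let i = 0; i < Math.min(a.length, b.length); i++) {
    if (a[i] !== b[i]) return a[i] < b[i];
  }
  return a.length < b.length;
}

// Siblings: distinct labels of equal depth that share the same parent prefix.
function siblings(a: Label, b: Label): boolean {
  return a.length === b.length && a.join(".") !== b.join(".") &&
    a.slice(0, -1).join(".") === b.slice(0, -1).join(".");
}

const xml = new DOMParser().parseFromString("<root><a/><b><c/></b></root>", "application/xml");
const labels = labelTree(xml.documentElement);              // e.g. <a> -> [1,1], <c> -> [1,2,1]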
6

Feng, Jian, Ying Zhang, and Yuqiang Qiao. "A Detection Method for Phishing Web Page Using DOM-Based Doc2Vec Model". Journal of Computing and Information Technology 28, no. 1 (July 10, 2020): 19–31. http://dx.doi.org/10.20532/cit.2020.1004899.

Full text
Abstract
Detecting phishing web pages is a challenging task. Existing DOM-based (Document Object Model) detection methods for phishing web pages mainly aim at obtaining structural characteristics but ignore the overall representation of web pages and the semantic information that HTML tags may carry. This paper treats DOMs as a natural language with the Doc2Vec model and learns their structural semantics automatically to detect phishing web pages. First, the DOM structure of the obtained web page is parsed to construct the DOM tree; then the Doc2Vec model is used to vectorize the DOM tree and to measure the semantic similarity of web pages by the distance between different DOM vectors. Finally, hierarchical clustering is used to cluster the web pages. Experiments show that the proposed method achieves higher recall and precision for phishing classification than a DOM-based structural clustering method and a TF-IDF-based semantic clustering method. The results show that using Paragraph Vector on the DOM in a linguistic fashion is effective.
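
One small piece of such a pipeline can be pictured as follows (a hedged illustration, not the authors' implementation): the parsed DOM tree is flattened into a depth-first sequence of tag tokens, the kind of "document" a Doc2Vec-style model could then embed; the embedding and clustering stages are not shown.

// Flatten a DOM tree into tag tokens, depth-first.
function domToTokens(node: Element): string[] {
  const tokens = [node.tagName.toLowerCase()];
  for (const child of Array.from(node.children)) {
    tokens.push(...domToTokens(child));
  }
  return tokens;
}

const suspect = new DOMParser().parseFromString(
  "<html><body><form><input><input></form></body></html>",
  "text/html"
);
console.log(domToTokens(suspect.documentElement));
// ["html", "head", "body", "form", "input", "input"] — the HTML parser inserts <head> itself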
7

Sabri, Ily Amalina Ahmad, and Mustafa Man. "A performance of comparative study for semi-structured web data extraction model". International Journal of Electrical and Computer Engineering (IJECE) 9, no. 6 (December 1, 2019): 5463. http://dx.doi.org/10.11591/ijece.v9i6.pp5463-5470.

Full text
Abstract
Extracting information from multiple web sources is an essential yet complicated step for data analysis in many domains. In this paper, we present a data extraction model based on visual segmentation, the DOM tree, and a JSON approach, known as Wrapper Extraction of Images using DOM and JSON (WEIDJ), for extracting semi-structured data from biodiversity websites. Image information from multiple web sources is extracted using three different approaches: the Document Object Model (DOM), Wrapper of images using Hybrid DOM and JSON (WHDJ), and Wrapper Extraction of Images using DOM and JSON (WEIDJ). Experiments were conducted on several biodiversity websites. The results show that the WEIDJ approach gives promising results with respect to extraction time. The WEIDJ wrapper successfully extracted more than 100 images from the multi-source biodiversity web across over 15 different websites.
8

Ahmad Sabri, Ily Amalina, and Mustafa Man. "A deep web data extraction model for web mining: a review". Indonesian Journal of Electrical Engineering and Computer Science 23, no. 1 (July 1, 2021): 519. http://dx.doi.org/10.11591/ijeecs.v23.i1.pp519-528.

Full text
Abstract
The World Wide Web has become a large pool of information. Extracting structured data from published web pages has drawn attention over the last decade. The process of web data extraction (WDE) faces many challenges, due to the variety of web data and the unstructured data in hypertext markup language (HTML) files. The aim of this paper is to provide a comprehensive overview of current web data extraction techniques in terms of the quality of the extracted data. The paper focuses on data extraction using wrapper approaches and compares them to identify the best approach for extracting data from online sites. To observe the efficiency of the proposed model, we compare the performance of single-page data extraction across different models: the document object model (DOM), wrapper using hybrid DOM and JSON (WHDJ), wrapper extraction of images using DOM and JSON (WEIDJ), and WEIDJ (no-rules). Finally, the experiments showed that WEIDJ extracts data fastest and with the lowest time consumption compared to the other methods.
9

Liu, Shuai, Ling Li Zhao, and Jun Sheng Li. "A Kind of Integrated Model for Panorama, Terrain and 3D Data Based on GML". Advanced Materials Research 955-959 (June 2014): 3850–53. http://dx.doi.org/10.4028/www.scientific.net/amr.955-959.3850.

Full text
Abstract
A panorama image can provide a 360-degree view from one hotspot, which addresses the problems of traditional three-dimensional representation: inadequate authenticity, difficult data acquisition, and laborious, time-consuming modeling. However, other geographic information is still needed. We therefore propose an integrated model based on GML that contains a set of data structures for rapidly obtaining panorama, terrain, and 3D data from a GML file, after analyzing the GML file structure and parsing it with the Document Object Model (DOM). Experiments show that the integrated model works well in web applications built with PTViewer, Java 3D, and related web technologies.
10

Ran, Peipei, Wenjie Yang, Zhongyue Da, and Yuke Huo. "Work orders management based on XML file in printing". ITM Web of Conferences 17 (2018): 03009. http://dx.doi.org/10.1051/itmconf/20181703009.

Full text
Abstract
Extensible Markup Language (XML) technology is increasingly used in various fields; using it to express work-order information can improve the efficiency of management and production. Based on these features, this paper introduces a work-order management technique and produces an XML file through Document Object Model (DOM) technology. When the information is needed for production, the XML file is parsed and the information is saved in a database, which makes it easier to preserve and modify.
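
A minimal TypeScript sketch of that round trip, assuming a browser DOM environment (the element names below are hypothetical, not the schema used in the paper):

// Build a small work-order XML document through the DOM...
const order = document.implementation.createDocument(null, "workOrder", null);
const job = order.createElement("job");
job.setAttribute("id", "J-001");
job.appendChild(order.createTextNode("Print 500 brochures, A4, duplex"));
order.documentElement.appendChild(job);
const xmlText = new XMLSerializer().serializeToString(order);

// ...and parse it back with the DOM before the fields are stored in a database.
const parsed = new DOMParser().parseFromString(xmlText, "application/xml");
const parsedJob = parsed.querySelector("job");
console.log(parsedJob?.getAttribute("id"), parsedJob?.textContent);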
More sources

Theses on the topic "DOM (Document Object Model)"

1

Kocman, Radim. "Podpora dynamického DOM v zobrazovacím stroji HTML". Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2014. http://www.nusl.cz/ntk/nusl-236139.

Full text
Abstract
The aim of this work is to create an extension for the CSSBox rendering engine. This extension implements the Document Object Model interface for client-side JavaScript. The first part of the thesis describes the CSSBox project, the current state of JavaScript engines, the Document Object Model as used in web browsers, and the final design based on the Adapter and Abstract Factory patterns. The rest of the text describes implementation issues with the W3C DOM specification and compares the speed of the extension with that of web browsers.
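
The Adapter idea mentioned above can be sketched as follows (CSSBox itself is Java; the TypeScript below is only an illustration, and EngineNode / DomNodeAdapter are invented names): an engine-internal node type is wrapped so that callers see a small DOM-like interface.

interface EngineNode {                       // stand-in for a rendering engine's internal node type
  name: string;
  kids: EngineNode[];
}

interface MinimalDomNode {                   // a tiny slice of the W3C DOM Node interface
  readonly nodeName: string;
  readonly childNodes: MinimalDomNode[];
}

// Adapter: exposes EngineNode through the DOM-like interface without changing the engine.
class DomNodeAdapter implements MinimalDomNode {
  constructor(private inner: EngineNode) {}
  get nodeName(): string {
    return this.inner.name.toUpperCase();
  }
  get childNodes(): MinimalDomNode[] {
    return this.inner.kids.map((k) => new DomNodeAdapter(k));
  }
}

const root = new DomNodeAdapter({ name: "body", kids: [{ name: "p", kids: [] }] });
console.log(root.nodeName, root.childNodes.map((c) => c.nodeName)); // BODY ["P"]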
2

Šušlík, Martin. "Knihovna pro zpracování dokumentů RTF". Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2010. http://www.nusl.cz/ntk/nusl-237231.

Full text
Abstract
The goal of this work is the design and implementation of a library for RTF processing. The library contains classes for converting RTF files to XHTML files. The library is implemented in the Java programming language and tested with a proposed set of tests. The library's application interface is a subset of the DOM standard and allows the final XHTML document to be manipulated.
3

T, Kovács Gregor. "Prostředky pro implementaci rozložení webových stránek v JavaScriptu". Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2008. http://www.nusl.cz/ntk/nusl-412528.

Full text
Abstract
The aim of this work is to design and implement facilities for creating web page layouts using JavaScript. The work describes the available methods of object positioning given the possibilities of CSS and the CSS 2.1 standard, and the difficulties of object positioning using CSS. It further analyses how object placement is solved in the Java programming language using the grid-based layout managers GridLayout and GridBagLayout. Based on the knowledge obtained, designs are created for solving object placement in web pages using the grid principle. Object placement is handled by defining new HTML attributes for determining position, and also by creating a graphical editor for object placement. All the solutions are implemented in JavaScript.
4

Ali, Imtiaz. "Object Detection in Dynamic Background". Thesis, Lyon 2, 2012. http://www.theses.fr/2012LYO20008/document.

Full text
Abstract
Moving object detection is one of the main challenges in many video monitoring applications.In this thesis, we address the difficult problem that consists in object segmentation when background moves permanently. Such situations occur when the background contains water flow, smoke or flames, snowfall, rainfall etc. Object detection in moving background was not studied much in the literature so far. Video backgrounds studied in the literature are often composed of static scenes or only contain a small portion of moving regions (for example, fluttering leaves or brightness changes). The main difficulty when we study such situations is to differentiate the objects movements and the background movements that may be almost similar. For example, an object in river moves at the same speed as water. Therefore, motion-based techniques of the literature, relying on displacements vectors in the scene, may fail to discriminate objects from the background, thus generating a lot of false detections. In this complex context, we propose some solutions for object detection.Object segmentation can be based on different criteria including color, texture, shape and motion. We propose various methods taking into account one or more of these criteria.We first work on the specific context of wood detection in rivers. It is a part of DADEC project (Détection Automatique de Débris pour l’Aide à l’Etude des Crues) in collaboration with geographers. We propose two approaches for wood detection: a naïve method and the probabilistic image model. The naïve approach is based on binary decisions based on object color and motion, whereas the probabilistic image model uses wood intensity distribution with pixel motion. Such detection methods are used fortracking and counting pieces of wood in rivers.Secondly, we consider a context in which we suppose a priori knowledge about objectmotion is available. Hence, we propose to model and incorporate this knowledge into the detection process. We show that combining this prior motion knowledge with classical background model improves object detection rate.Finally, drawing our inspiration from methods used for 2D texture representation, we propose to model moving backgrounds using a frequency-based approach. More precisely, the model takes into account the spatial neighborhoods of pixels but also their temporal neighborhoods. We apply local Fourier transform on the obtained regions in order to extract spatiotemporal color patterns.We apply our methods on multiple videos, including river videos under DADEC project, image sequences from the DynTex video database, several synthetic videos andsome of our own made videos. We compare our object detection results with the existing methods for real and synthetic videos quantitatively as well as qualitatively
5

Selim, Hossam Abdelatif Mohamed. "A novel secure autonomous generalized document model using object oriented technique". Thesis, University of Kent, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.269141.

Full text
6

Eklund, Henrik. "OBJECT DETECTION: MODEL COMPARISON ON AUTOMATED DOCUMENT CONTENT INTERPRETATION - A performance evaluation of common object detection models". Thesis, Umeå universitet, Institutionen för datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-164767.

Full text
Abstract
Manually supervising a million documents yearly becomes an exhausting task. A step towards automating this process is to interpret and classify the contents of a document. This thesis examines and evaluates different object detection models on the task of localizing and classifying multiple objects within a document, to find the best model for the situation. The theory of artificial neural networks is explained, as well as the convolutional neural network and how it is used to perform object detection. Google's TensorFlow Object Detection API is implemented, and models and backbones are configured for training and evaluation. Data is collected to construct the data set, and useful metrics for the evaluation pipeline are explained. The models are compared both per category and overall, using mean average precision (mAP) over several intersection over union (IoU) thresholds, inference time, memory usage, optimal localization recall precision (oLRP) error, and optimized thresholds based on localization, recall and precision. Finally, we determine whether some models are better suited for certain situations and tasks. When using optimal confidence thresholds, different models performed best on different categories. The overall best detector for the task was considered to be R-FCN Inception v2, based on its detection performance, speed and memory usage. A reflection on the results and evaluation methods is given, and strategies for improvement are mentioned as future work.
7

Manfredi, Guido. "Learning objects model and context for recognition and localisation". Thesis, Toulouse 3, 2015. http://www.theses.fr/2015TOU30386/document.

Full text
Abstract
This Thesis addresses the modeling, recognition, localization and use of context for objects manipulation by a robot. We start by presenting the modeling process and its components: the real system, the sensors' data, the properties to reproduce and the model. We show how, by specifying each of them, one can define a modeling process adapted to the problem at hand, namely object manipulation by a robot. This analysis leads us to the adoption of local textured descriptors for object modeling. Modeling with local textured descriptors is not a new concept, it is the subject of many Structure from Motion (SfM) or Simultaneous Localization and Mapping (SLAM) works. Existing methods include bundler, roboearth modeler and 123DCatch. Still, no method has gained widespread adoption. By implementing a similar approach, we show that they are hard to use even for expert users and produce highly complex models. Such complex techniques are necessary to guaranty the robustness of the model to view point change. There are two ways to handle the problem: the multiple views paradigm and the robust features paradigm. The multiple views paradigm advocate in favor of using a large number of views of the object. The robust feature paradigm relies on robust features able to resist large view point changes. We present a set of experiments to provide an insight into the right balance between both. By varying the number of views and using different features we show that small and fast models can provide robustness to view point changes up to bounded blind spots which can be handled by robotic means. We propose four different methods to build simple models from images only, with as little a priori information as possible. The first one applies to planar or piecewise planar objects and relies on homographies for localization. The second approach is applicable to objects with simple geometry, such as cylinders or spheres, but requires many measures on the object. The third method requires the use of a calibrated 3D sensor but no additional information. The fourth technique doesn't need a priori information at all. We apply this last method to autonomous grocery objects modeling. From images automatically retrieved from a grocery store website, we build a model which allows recognition and localization for tracking. Even using light models, real situations ask for numerous object models to be stored and processed. This poses the problems of complexity, processing multiple models quickly, and ambiguity, distinguishing similar objects. We propose to solve both problems by using contextual information. Contextual information is any information helping the recognition which is not directly provided by sensors. We focus on two contextual cues: the place and the surrounding objects. Some objects are mainly found in some particular places. By knowing the current place, one can restrict the number of possible identities for a given object. We propose a method to autonomously explore a previously labeled environment and establish a correspondence between objects and places. Then this information can be used in a cascade combining simple visual descriptors and context. This experiment shows that, for some objects, recognition can be achieved with as few as two simple features and the location as context. The objects surrounding a given object can also be used as context. Objects like a keyboard, a mouse and a monitor are often close together. 
We use qualitative spatial descriptors to describe the position of objects with respect to their neighbors. Using a Markov Logic Network, we learn patterns in objects disposition. This information can then be used to recognize an object when surrounding objects are already identified. This Thesis stresses the good match between robotics, context and objects recognition
8

Tuong, Frédéric. "Constructing Semantically Sound Object-Logics for UML/OCL Based Domain-Specific Languages". Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLS085/document.

Full text
Abstract
Object-based and object-oriented specification languages (likeUML/OCL, JML, Spec#, or Eiffel) allow for the creation and destruction, casting and test for dynamic types of statically typed objects. On this basis, class invariants and operation contracts can be expressed; the latter represent the key elements of object-oriented specifications. A formal semantics of object-oriented data structures is complex: imprecise descriptions can often imply different interpretations in resulting tools. In this thesis we demonstrate how to turn a modern proof environment into a meta-tool for definition and analysis of formal semantics of object-oriented specification languages. Given a representation of a particular language embedded in Isabelle/HOL, we build for this language an extended Isabelle environment by using a particular method of code generation, which actually involves several variants of code generation. The result supports the asynchronous editing, type-checking, and formal deduction activities, all "inherited" from Isabelle. Following this method, we obtain an object-oriented modelling tool for textual UML/OCL. We also integrate certain idioms not necessarily present in UML/OCL --- in other words, we develop support for domain-specific dialects of UML/OCL. As a meta construction, we define a meta-model of a part of UML/OCL in HOL, a meta-model of a part of the Isabelle API in HOL, and a translation function between both in HOL. The meta-tool will then exploit two kinds of code generation to produce either fairly efficient code, or fairly readable code. Thus, this provides two animation modes to inspect in more detail the semantics of a language being embedded: by loading at a native speed its semantics, or just delay at another "meta"-level the previous experimentation for another type-checking time in Isabelle, be it for performance, testing or prototyping reasons. Note that generating "fairly efficient code", and "fairly readable code" include the generation of tactic code that proves a collection of theorems forming an object-oriented datatype theory from a denotational model: given a UML/OCL class model, the proof of the relevant properties for casts, type-tests, constructors and selectors are automatically processed. This functionality is similar to the datatype theory packages in other provers of the HOL family, except that some motivations have conducted the present work to program high-level tactics in HOL itself. This work takes into account the most recent developments of the UML/OCL 2.5 standard. Therefore, all UML/OCL types including the logic types distinguish two different exception elements: invalid (exception) and null (non-existing element). This has far-reaching consequences on both the logical and algebraic properties of object-oriented data structures resulting from class models. Since our construction is reduced to a sequence of conservative theory extensions, the approach can guarantee logical soundness for the entire considered language, and provides a methodology to soundly extend domain-specific languages
9

Almuhisen, Feda. "Leveraging formal concept analysis and pattern mining for moving object trajectory analysis". Thesis, Aix-Marseille, 2018. http://www.theses.fr/2018AIXM0738/document.

Full text
Abstract
This dissertation presents a trajectory analysis framework, which includes both a preprocessing phase and trajectory mining process. Furthermore, the framework offers visual functions that reflect trajectory patterns evolution behavior. The originality of the mining process is to leverage frequent emergent pattern mining and formal concept analysis for moving objects trajectories. These methods detect and characterize pattern evolution behaviors bound to time in trajectory data. Three contributions are proposed: (1) a method for analyzing trajectories based on frequent formal concepts is used to detect different trajectory patterns evolution over time. These behaviors are "latent", "emerging", "decreasing", "lost" and "jumping". They characterize the dynamics of mobility related to urban spaces and time. The detected behaviors are automatically visualized on generated maps with different spatio-temporal levels to refine the analysis of mobility in a given area of the city, (2) a second trajectory analysis framework that is based on sequential concept lattice extraction is also proposed to exploit the movement direction in the evolution detection process, and (3) prediction method based on Markov chain is presented to predict the evolution behavior in the future period for a region. These three methods are evaluated on two real-world datasets. The obtained experimental results from these data show the relevance of the proposal and the utility of the generated maps
10

Saillenfest, Melaine. "Théories séculaires et dynamique orbitale au-delà de Neptune". Thesis, Paris Sciences et Lettres (ComUE), 2017. http://www.theses.fr/2017PSLEO006/document.

Full text
Abstract
The dynamical structure of the transneptunian region is still far from being fully understood, especially concerning high-perihelion objects. In that region, the orbital perturbations are very weak, both from inside (the planets) and from outside (passing stars and galactic tides). However, numerous objects have very eccentric orbits, which indicates that they did not form in their current orbital state. Furthermore, some intriguing clusters in the distribution of their orbital elements have attracted attention of the scientific community, leading to numerous conjectures about the origin and evolution of the external Solar System.Before thinking of "exotic" theories, an exhaustive survey has to be conducted on the different mechanisms that could produce the observed trajectories involving only what we take for granted about the Solar System dynamics, that is the orbital perturbations by the known planets and/or by galactic tides. However, we cannot rely only on numerical integrations to efficiently explore the space of possible behaviours. In that context, we aim at developing a general picture of the dynamics between Neptune and the Oort Cloud, including the most extreme (even if improbable?) orbits.The orbits entirely exterior to the planetary region can be divided into two broad classes: on the one hand, the objects undergoing a diffusion of semi-major axis (which prevents from large variation of the perihelion distance); on the other hand, the objects which present an integrable (or quasi-integrable) dynamics on a short time-scale. The dynamics of the latter can be described by secular models. There are two kinds of regular orbits: the non-resonant ones (fixed semi-major axis) and those trapped in a mean-motion resonance with a planet (oscillating semi-major axis).The major part of this Ph.D. work is focussed on the development of secular models for transneptunian objects, both in the non-resonant and resonant cases. One-degree-of-freedom systems can be obtained, which allows to represent any trajectory by a level curve of the Hamiltonian. Such a formalism is pretty efficient to explore the parameter space. It reveals pathways to high perihelion distances, as well as "trapping mechanisms", able to maintain the objects on very distant orbits for billion years. The application of the resonant secular model to the known objects is also very informative, since it shows graphically which observed orbits require a complex scenario (as the planetary migration or an external perturber), and which ones can be explained by the influence of the known planets. In this last case, the dynamical history of the small bodies can be tracked back to the resonance capture.The last part of this work is devoted to the extension of the non-resonant secular model to the case of an external massive perturber. If it has a substantial eccentricity and/or inclination, it introduces one or two more degrees of freedom in the system, so the secular dynamics is non integrable in general. In that case, the analysis can be realised by Poincaré sections, which allow to distinguish the chaotic regions of the phase space from the regular ones. For increasing semi-major axes, the chaos spreads very fast. The most persistent structures are secular resonances producing trajectories aligned or anti-aligned with the orbit of the distant planet
More sources

Books on the topic "DOM (Document Object Model)"

1

Sambells, Jeffrey, ed. DOM scripting: Web design with JavaScript and the Document Object Model. 2nd ed. New York, NY: Apress, 2010.

Search full text
2

Heilmann, Christian. Beginning JavaScript with DOM scripting and Ajax: From novice to professional. Berkeley, CA: Apress, 2006.

Search full text
3

Accelerated DOM scripting with Ajax, APIs, and libraries. Berkeley, CA: Apress, 2007.

Search full text
4

Adams, Cameron, ed. Web standards creativity: Innovations in web design with XHTML, CSS, and DOM scripting. [Berkeley, Calif.]: Friends of ED, 2007.

Search full text
5

The Document object model: Processing structured documents. New York: McGraw-Hill/Osborne, 2002.

Search full text
6

jQuery design patterns: Learn the best practices on writing efficient jQuery applications to maximize performance in large-scale deployments. Birmingham: Packt Publishing, 2016.

Search full text
7

Document Object Model. New York: McGraw-Hill, 2003.

Search full text
8

Sambells, Jeffrey, and Jeremy Keith. DOM Scripting: Web Design with JavaScript and the Document Object Model. Apress, 2011.

Search full text
9

Sambells, Jeffrey, and Jeremy Keith. DOM Scripting: Web Design with JavaScript and the Document Object Model. friends of ED limited, 2012.

Search full text
10

Shea, Dave (Foreword), ed. DOM Scripting: Web Design with JavaScript and the Document Object Model. friends of ED, 2005.

Search full text
More sources

Book chapters on the topic "DOM (Document Object Model)"

1

Keith, Jeremy, and Jeffrey Sambells. "The Document Object Model". In DOM Scripting, 31–44. Berkeley, CA: Apress, 2010. http://dx.doi.org/10.1007/978-1-4302-3390-9_3.

Full text
2

Ayesh, Aladdin. "Document Object Model (DOM)". In Essential Dynamic HTML fast, 102–10. London: Springer London, 2000. http://dx.doi.org/10.1007/978-1-4471-0363-9_13.

Full text
3

Richards, Robert. "Document Object Model (DOM)". In Pro PHP XML and Web Services, 181–238. Berkeley, CA: Apress, 2006. http://dx.doi.org/10.1007/978-1-4302-0139-7_6.

Full text
4

Foo, Soo Mee, and Wei Meng Lee. "The Document Object Model (DOM)". In XML Programming Using the Microsoft XML Parser, 107–49. Berkeley, CA: Apress, 2002. http://dx.doi.org/10.1007/978-1-4302-0829-7_4.

Full text
5

Olsson, Mikael. "Document Object Model". In JavaScript Quick Syntax Reference, 39–44. Berkeley, CA: Apress, 2015. http://dx.doi.org/10.1007/978-1-4302-6494-1_10.

Full text
6

Shekhar, Shashi, and Hui Xiong. "Document Object Model". In Encyclopedia of GIS, 254. Boston, MA: Springer US, 2008. http://dx.doi.org/10.1007/978-0-387-35973-1_323.

Full text
7

Tomar, Ravi, and Sarishma Dangi. "Document Object Model". In JavaScript, 145–78. Boca Raton: Chapman and Hall/CRC, 2021. http://dx.doi.org/10.1201/9781003122364-7.

Full text
8

Rothfuss, Gunther, and Christian Ried. "Das Document Object Model". In Xpert.press, 273–82. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-642-55656-2_9.

Full text
9

Rothfuss, Gunther, and Christian Ried. "Das Document Object Model". In Xpert.press, 227–36. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/978-3-642-98075-6_10.

Full text
10

Resig, John, Russ Ferguson, and John Paxton. "The Document Object Model". In Pro JavaScript Techniques, 49–72. Berkeley, CA: Apress, 2015. http://dx.doi.org/10.1007/978-1-4302-6392-0_5.

Full text

Conference papers on the topic "DOM (Document Object Model)"

1

Pan, Chunxia, and Shana Smith. "Extracting Geometrical Data From CAD STEP Files". In ASME 2003 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2003. http://dx.doi.org/10.1115/detc2003/cie-48224.

Full text
Abstract
Most CAD tools currently do not have advanced capability built in for directly analyzing the feasibility of product assembly during the design stage. As a result, the feasibility of product assembly has to be analyzed using external assembly analysis tools. To integrate external assembly analysis tools with CAD tools, geometrical data from the CAD models must be extracted from CAD files and imported into the assembly analysis tools. To transfer geometrical data between CAD tools and assembly analysis tools, a neutral file format is needed. STEP (standard for the exchange of product model data) is a neutral file format for convenient and reliable data exchange between different design and manufacturing systems. Therefore, STEP is considered as one of the most popular formats for saving designs. As a result, to evaluate designs, extracting geometrical data from CAD-STEP file is important. Comparing with some other currently available methods, first translating STEP file into XML and then using Java DOM to get geometrical information from XML file is much better. This paper explores the process of extracting geometrical data from a CAD-STEP file for assembly sequence planning. A STEPXML translator, Java-XML parser and DTD (document type definition) generator are used on the JDK 1.4 platform. In this project, DOM (document object model) is applied as the API (Application Programming Interface) for XML.
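
The DOM-parsing step of such a pipeline is easy to picture. The paper works with Java DOM on JDK 1.4; purely for consistency with the other sketches on this page, the TypeScript fragment below shows the same idea, and the element names (cartesian_point, coordinates) are invented stand-ins for whatever the STEP-to-XML translator actually emits.

// Parse translator output with the DOM and pull out point coordinates.
const stepXml = `
  <step>
    <cartesian_point id="p1"><coordinates>0.0 1.5 2.0</coordinates></cartesian_point>
    <cartesian_point id="p2"><coordinates>3.0 0.0 1.0</coordinates></cartesian_point>
  </step>`;

const geometry = new DOMParser().parseFromString(stepXml, "application/xml");
const points = Array.from(geometry.querySelectorAll("cartesian_point")).map((el) => ({
  id: el.getAttribute("id"),
  xyz: (el.querySelector("coordinates")?.textContent ?? "").trim().split(/\s+/).map(Number),
}));
console.log(points); // [{ id: "p1", xyz: [0, 1.5, 2] }, { id: "p2", xyz: [3, 0, 1] }]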
2

Andersson, Fredrik, Patrik Nilsson, and Hans Johannesson. "Computer Based Requirement and Concept Modelling: Information Gathering and Classification". In ASME 2000 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2000. http://dx.doi.org/10.1115/detc2000/dtm-14561.

Full text
Abstract
Abstract This paper proposes a requirement and concept model based on a functional decomposition of mechanical systems. It is an object-oriented approach to integrate the representation of the design artefact and the design activity, through the decisions made during the design evolution. The requirements co-evolve simultaneously with the formation of the conceptual layout, through the opportunity to alter between function and physical/abstract solutions. This approach structures the design requirements and concepts in such a way that it supports the ability to document their sources, to allow for validation and verifications of both requirements and design solutions. First, the proposed model is presented from a theoretical viewpoint. Secondly, a methodology for modelling requirements and concepts in an object-oriented fashion is discussed. Finally, the model is implemented in METIS software and tested in a case study of an electric window winder on a truck door.
APA, Harvard, Vancouver, ISO, and other styles
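The abstract above sketches an object-oriented structure in which functions are decomposed, requirements carry documented sources, and choosing a solution can introduce new requirements. The classes below are a minimal, hypothetical illustration of such a structure in Java; all class and field names are invented for this sketch and do not come from the cited paper or from its METIS implementation.

import java.util.ArrayList;
import java.util.List;

// A requirement together with its documented source, so it can later be validated and verified.
class Requirement {
    final String text;
    final String source;
    Requirement(String text, String source) { this.text = text; this.source = source; }
}

// A candidate physical or abstract solution; selecting it may introduce derived requirements,
// which is how requirements co-evolve with the conceptual layout.
class Solution {
    final String description;
    final List<Requirement> derivedRequirements = new ArrayList<>();
    Solution(String description) { this.description = description; }
}

// A node in the functional decomposition: allocated requirements, sub-functions,
// and the solution currently chosen for this function.
class DesignFunction {
    final String description;
    final List<Requirement> requirements = new ArrayList<>();
    final List<DesignFunction> subFunctions = new ArrayList<>();
    Solution chosenSolution;
    DesignFunction(String description) { this.description = description; }
}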
3

Mindrup, Matthew. "La Réaction Poètique of a Prepared Mind". In LC2015 - Le Corbusier, 50 years later. Valencia: Universitat Politècnica València, 2015. http://dx.doi.org/10.4995/lc2015.2015.677.

Full text
Abstract
Abstract: This paper explores Le Corbusier’s practice of collecting and studying everyday objects as inspiration for new architectural ideas. An avid collector of ‘objets trouvés’ that he referred to specifically as ‘objets à réaction poétique,’ Le Corbusier promoted their use, claiming they gave direction that the imagination alone might not be able to detect. Perhaps the most famous object in his collection was a crab shell that he used as inspiration for the design of the roof of his Notre-Dame du Haut chapel in Ronchamp, France. Although Le Corbusier’s use of this shell is well documented in studies of his oeuvre, little attention has been given to the role he intended found objects to play in his design process. In themselves these objects, which have their own identities as shells, pinecones or pieces of bone, do not immediately lend themselves to any architectural solution. Rather, they are evidence of Le Corbusier’s unique approach to design, which relies on what Louis Pasteur referred to as a ‘prepared mind,’ availed of all relevant data and information pertaining to a task, that can search for solutions in random objects or events by spontaneously shifting back and forth between analytic and associative modes of thought. Keywords: Architectural model, Ronchamp, Design method, Imagination, Play, Objet trouvé. DOI: http://dx.doi.org/10.4995/LC2015.2015.677
APA, Harvard, Vancouver, ISO, and other styles
4

Alcic, Sadet, and Stefan Conrad. "2-DOM: A 2-Dimensional Object Model towards Web Image Annotation". In 2008 Third International Workshop on Semantic Media Adaptation and Personalization (SMAP). IEEE, 2008. http://dx.doi.org/10.1109/smap.2008.23.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Zhang, Xuesong, and Honglei Wang. "AJAX Crawling Scheme Based on Document Object Model". In 2012 Fourth International Conference on Computational and Information Sciences (ICCIS). IEEE, 2012. http://dx.doi.org/10.1109/iccis.2012.61.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Pereira, Oscar M., Rui L. Aguiar, and Maribel Yasmina Santos. "CRUD-DOM: A Model for Bridging the Gap between the Object-Oriented and the Relational Paradigms". In 2010 Fifth International Conference on Software Engineering Advances (ICSEA). IEEE, 2010. http://dx.doi.org/10.1109/icsea.2010.25.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Fouad, Toufik, and Bahaj Mohamed. "Model Transformation From Object Relational Database to NoSQL Document Database". In the 2nd International Conference. New York, New York, USA: ACM Press, 2019. http://dx.doi.org/10.1145/3320326.3320381.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Sevilla, Diego, Severino Feliciano, and Jesús García-Molina. "An MDE Approach to Generate Schemas for Object-document Mappers". In 5th International Conference on Model-Driven Engineering and Software Development. SCITEPRESS - Science and Technology Publications, 2017. http://dx.doi.org/10.5220/0006279102200228.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Al-Taee, Majid A., Suhail N. Abood, and Nada Y. Philip. "A Human-Robot Sub-dialogues Structure Using XML Document Object Model". In 2013 International Conference on Developments in eSystems Engineering (DeSE). IEEE, 2013. http://dx.doi.org/10.1109/dese.2013.29.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Hyunseung Son and Byounghee Son. "Design of the XML Document Object Model for the home management server". In 2010 5th International Conference on Computer Sciences and Convergence Information Technology (ICCIT 2010). IEEE, 2010. http://dx.doi.org/10.1109/iccit.2010.5711040.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "DOM (Document Object Model)"

1

Ross, Andrew, David Johnson, Hai Le, Danny Griffin, Carl Mudd, and David Dawson. USACE Advanced Modeling Object Standard: Release 1.0. Engineer Research and Development Center (U.S.), September 2021. http://dx.doi.org/10.21079/11681/42152.

Full text
Abstract
The U.S. Army Corps of Engineers (USACE) Advanced Modeling Object Standard (AMOS) has been developed by the CAD/BIM Technology Center for Facilities, Infrastructure, and Environment to establish standards supporting the Advanced Modeling process within the Department of Defense (DoD) and the Federal Government. The critical component of Advanced Modeling is the objects themselves, which either make the modeling process more difficult or more successful. This manual is part of an initiative to develop a nonproprietary Advanced Modeling standard that incorporates both vertical construction and horizontal construction objects and that will address the entire life cycle of facilities within the DoD. The material addressed in this USACE Advanced Modeling Object Standard includes a classification organization that is needed to identify models for specific use cases. Compliance with this standard will allow users to know whether the object model they are getting is graphically well developed but data-poor, or whether it has the data needed for creating contract documents. This capability will greatly reduce the designers' effort to either build an object or search for, find and edit an object necessary for the development of their project. Considering that an advanced model may contain hundreds of objects, this would represent a huge time savings and improve the modeling process.
APA, Harvard, Vancouver, ISO, and other styles
2

Montanez, Carmelo, D. Richard Kuhn, Mary Brady, Richard M. Rivello, Jenise Reyes, and Michael K. Powers. An application of combinatorial methods to conformance testing for document object model events. Gaithersburg, MD: National Institute of Standards and Technology, 2010. http://dx.doi.org/10.6028/nist.ir.7773.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Kuznetsov, Victor, Vladislav Litvinenko, Egor Bykov, and Vadim Lukin. A program for determining the area of the object entering the IR sensor grid, as well as determining the dynamic characteristics. Science and Innovation Center Publishing House, April 2021. http://dx.doi.org/10.12731/bykov.0415.15042021.

Full text
Abstract
Currently, a large number of devices in the form of chronographs, consisting of various optical, thermal and laser sensors, are used to evaluate the dynamic characteristics of objects. Among the problems of these devices are the lack of recording of the received data and the inability to account for the trajectory of the object flying through the sensor area, including its trajectory while approaching the device frame. The signal received from the infrared sensors is recorded in a separate document in txt format, in the form of a table. When the document is accessed, data is read from the current position of the input data stream into the specified list by an argument, in accordance with the given condition. As a result of reading the data, an array is formed that includes N columns. The array is constructed in such a way that the first column contains time values and columns 2...N contain voltage values. The algorithm uses loops that delete array rows where the threshold value is exceeded in more than two columns, as well as rows where the threshold level was not exceeded. The modified array is converted into two new arrays, each of which contains the data from a different sensor frame. An array with the coordinates of the centers of the sensor operation zones was created in order to apply the Pythagorean theorem in three-dimensional space, which is necessary for calculating the exact distance between the zones. The time is determined from the difference in the response of the first and second sensor frames. Knowing the path and the time, the exact speed of the object can be calculated. For visualization, the oscillograms of each sensor channel are displayed, and a chronograph model was created. The chronograph model highlights in purple the area where the threshold has been exceeded.
APA, Harvard, Vancouver, ISO, and other styles
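The abstract above outlines a concrete processing pipeline: read rows of [time, voltage 1..N] samples, keep only rows where the threshold is exceeded in one or two columns, split the hits by sensor frame, compute the three-dimensional distance between the zone centers, and divide by the time difference to get the speed. The code below is a minimal, hypothetical sketch of those steps in Java, written independently of the report's actual program; the threshold, zone coordinates and sample values are invented, and the split by frame is simplified to taking the first two threshold crossings.

import java.util.ArrayList;
import java.util.List;

public class ChronographSketch {
    static final double THRESHOLD = 2.5; // volts (assumed value)

    public static void main(String[] args) {
        // Each row: { time, ch1, ch2, ch3, ch4 } as it would be read from the txt log.
        double[][] samples = {
            {0.000, 0.1, 0.2, 0.1, 0.0},
            {0.001, 3.1, 0.2, 0.1, 0.0},   // first sensor frame responds
            {0.004, 0.1, 0.2, 3.4, 0.1},   // second sensor frame responds
        };

        // Keep only rows where one or two channels exceed the threshold
        // (rows with no crossing or with more than two crossings are discarded).
        List<double[]> hits = new ArrayList<>();
        for (double[] row : samples) {
            int exceeded = 0;
            for (int c = 1; c < row.length; c++) {
                if (row[c] > THRESHOLD) exceeded++;
            }
            if (exceeded >= 1 && exceeded <= 2) hits.add(row);
        }

        // Assumed three-dimensional centers of the two trigger zones, in meters.
        double[] zone1 = {0.0, 0.0, 0.0};
        double[] zone2 = {0.0, 0.0, 0.5};
        double distance = Math.sqrt(
                Math.pow(zone2[0] - zone1[0], 2)
              + Math.pow(zone2[1] - zone1[1], 2)
              + Math.pow(zone2[2] - zone1[2], 2));

        // Time between the responses of the two frames, then speed = path / time.
        double dt = hits.get(1)[0] - hits.get(0)[0];
        System.out.println("speed = " + (distance / dt) + " m/s");
    }
}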
