
Dissertations / Theses on the topic 'XML applications'


Consult the top 50 dissertations / theses for your research on the topic 'XML applications.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Zhang, Jiemin. "Evaluations on XML standards for actual applications." Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/5538.

Full text
Abstract:
XML is a popular data format today, particularly for data exchange over the web or among applications. Standards based on XML are widely used in many areas and for a variety of applications. However, XML standards are still at an early stage, so it is interesting to explore how they actually work in practice. In this thesis, we evaluate XML standards as used in actual applications and propose a set of hypotheses covering both the advantages and the challenges of using XML in real applications. We verify our hypotheses through a case study of a civil engineering application, and also review a few other areas to support them more generally. Our study shows that XML standards have powerful strengths that make XML an important medium for supporting data exchange, but that they are hard to use and far from perfect.
APA, Harvard, Vancouver, ISO, and other styles
2

Kwong, April P. "Tree pattern constraints for XML : theory and applications /." For electronic version search Digital dissertations database. Restricted to UC campuses. Access is free to UC campus dissertations, 2004. http://uclibs.org/PID/11984.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Kuo, Justin H. (Justin Hans) 1980. "An XML messaging protocol for multimodal galaxy applications." Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/87256.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Shui, William Miao, Computer Science & Engineering, Faculty of Engineering, UNSW. "On Efficient processing of XML data and their applications." Awarded by: University of New South Wales, Computer Science & Engineering, 2007. http://handle.unsw.edu.au/1959.4/40502.

Full text
Abstract:
The development of high-throughput genome sequencing and protein structure determination techniques has provided researchers with a wealth of biological data. However, providing an integrated analysis can be difficult due to the incompatibilities of data formats between providers and applications, the strict schema constraints imposed by data providers, and the lack of infrastructure for easily accommodating new semantic information. To address these issues, this thesis first proposes to use the Extensible Markup Language (XML) [26] and its supporting query languages as the underlying technology to facilitate seamless, integrated access to the sum of heterogeneous biological data and services. XML is used because of its semi-structured nature and its ability to easily encapsulate both contextual and semantic information. The tree representation of an XML document enables applications to easily traverse and access data within the document without prior knowledge of its schema. However, in the process of constructing the framework, we identified a number of issues related to the performance of XML technologies, more specifically the performance of the XML query processor, the data store and the transformation processor. Hence, this thesis also focuses on finding new solutions to address these issues. For the XML query processor, we propose an efficient structural join algorithm that can be implemented on top of existing relational databases. Experiments show the proposed method outperforms previous work in both queries and updates. For complicated XML query patterns, a new twig join algorithm called CTwigStack is proposed in this thesis. In essence, the new approach only produces and merges partial solution nodes that satisfy the entire twig query pattern tree. Experiments show the proposed algorithm outperforms previous methods in most cases. For more general cases, a mixed-mode twig join is proposed, which combines CTwigStack with existing twig join algorithms, and extensive experimental results have shown the superior effectiveness of both CTwigStack and the mixed-mode twig join. By incorporating existing system information, the mixed-mode twig join can serve as a framework for plan selection during XML query optimization. For the XML transformation component, a novel stand-alone, memory-conscious XSLT processor is proposed in this thesis; the proposed XSLT processor requires only a single pass over the input XML dataset, enabling fast transformation of streaming XML data and better handling of complicated XPath selection patterns, including aggregate predicate functions such as the XPath count function. Ultimately, based on the nature of the proposed framework, we believe that solving the performance issues related to the underlying XML components can subsequently lead to a more robust framework for integrating heterogeneous biological data sources and services.
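The structural join mentioned here is often described as a stack-based merge over (start, end) region labels of elements sorted in document order. The following Python sketch illustrates only that general idea, with region labels invented for the example; it is not a reproduction of the thesis's algorithms (CTwigStack and the mixed-mode twig join are considerably more involved).

```python
def structural_join(ancestors, descendants):
    """Stack-based structural join over (start, end) region labels.

    Both lists are sorted by start position; a pair (a, d) is reported when
    a properly contains d, i.e. a.start < d.start and d.end < a.end.
    """
    results, stack = [], []
    ai = di = 0
    while ai < len(ancestors) and di < len(descendants):
        a, d = ancestors[ai], descendants[di]
        if a[0] < d[0]:
            while stack and stack[-1][1] < a[0]:
                stack.pop()                      # drop ancestors that already closed
            stack.append(a)                      # a may contain later descendants
            ai += 1
        else:
            while stack and stack[-1][1] < d[0]:
                stack.pop()
            results.extend((anc, d) for anc in stack)  # every open ancestor contains d
            di += 1
    for d in descendants[di:]:                   # flush remaining descendants
        while stack and stack[-1][1] < d[0]:
            stack.pop()
        results.extend((anc, d) for anc in stack)
    return results

# Toy //section//figure join: sections labelled (2, 9) and (5, 8), figures (3, 4) and (6, 7).
print(structural_join([(2, 9), (5, 8)], [(3, 4), (6, 7)]))
# [((2, 9), (3, 4)), ((2, 9), (6, 7)), ((5, 8), (6, 7))]
```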
APA, Harvard, Vancouver, ISO, and other styles
5

Jacobs, David. "An XML Based Authorization Framework for Web-based Applications." NSUWorks, 2001. http://nsuworks.nova.edu/gscis_etd/607.

Full text
Abstract:
The World Wide Web is increasingly being used to deliver services. The file-based authorization schemes originally designed into web servers are woefully inadequate for enforcing the security policies needed by these services. This has led to the chaotic situation where each application is forced to develop its own security framework for enforcing the policies it requires. In turn, this has led to more numerous security vulnerabilities and greater maintenance headaches. This dissertation lays out an authorization framework that enforces a wide range of security policies crucial to many web-based business applications. The solution is described in three steps. First, it specifies the stakeholders in an authorization system, the roles they play, and the crucial authorization policies that web applications commonly require. Second, it maps out the design of the XML-based authorization language (AZML), showing how it provides for maintenance to be divided into prescribed roles and for the expression of required policies. Lastly, it demonstrates through a scenario the use of the XML authorization language for enforcing policies in a web-based application. It also explores how maintenance should be handled, what would be required to scale the authorization service, and how to more tightly couple the authorization service to the web server.
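As a rough illustration of what enforcing an XML-encoded policy can look like, the Python sketch below parses a policy document and answers access requests. The element and attribute names are invented for the example; AZML's actual syntax and role model are defined in the dissertation and are not reproduced here.

```python
import xml.etree.ElementTree as ET

# Hypothetical policy markup, not actual AZML.
POLICY = """
<policy>
  <rule role="manager" resource="/orders" action="read"/>
  <rule role="manager" resource="/orders" action="update"/>
  <rule role="clerk"   resource="/orders" action="read"/>
</policy>
"""

def is_authorized(policy_xml, role, resource, action):
    """Return True if some rule grants the (role, resource, action) triple."""
    root = ET.fromstring(policy_xml)
    return any(
        r.get("role") == role and r.get("resource") == resource and r.get("action") == action
        for r in root.findall("rule")
    )

print(is_authorized(POLICY, "clerk", "/orders", "read"))    # True
print(is_authorized(POLICY, "clerk", "/orders", "update"))  # False
```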
APA, Harvard, Vancouver, ISO, and other styles
6

Carcenac, Julien-Laurent. "Motifs arborescents pour données semi-structureés XML : compilation et applications." Université de Marne-la-Vallée, 2006. http://www.theses.fr/2006MARN0312.

Full text
Abstract:
The quantity of data available in XML format, whether as files or through web services, raises the problem of how to manipulate it. Exalead, a search software vendor, chose to develop for its own needs an "XML-oriented" programming language, ExaScript. This language unifies the object model of imperative programming languages with the XML model. By treating XML documents as objects, basic manipulations come naturally: constructing an object, reading and modifying a field, and so on. However, the imperative programming paradigm offers no advanced manipulation primitives for complex objects such as XML trees. Pattern matching seemed to us the most suitable mechanism for expressing constraints on XML objects and selecting sub-parts of them. Manipulation power then rests on the simplicity of these patterns and on their expressiveness. The constraints imposed by these patterns must capture the "essence" of XML by taking its different facets into account: at once a textual document, a labelled tree, and a character string. This thesis proposes an algebra of tree patterns suited to processing semi-structured XML data. A distinctive feature of this algebra is that it unifies several aspects: lexical, grammatical, structural and Boolean. We establish a hierarchical compilation scheme based on simple compiled structures: Boolean evaluators, character automata, and a variant of classical automata, identifier-class automata. We present various applications built from our pattern algebra and their implications for search systems. Several natural language processing applications, such as linguistic pattern matching or monitoring tools, can be built from a subset of our algebra. Finally, we present the integration of this algebra into the ExaScript language, as well as its use for internal page-clipping purposes.
APA, Harvard, Vancouver, ISO, and other styles
7

Zhang, Yaxuan. "Checking Metadata Usage for Enterprise Applications." Thesis, Virginia Tech, 2021. http://hdl.handle.net/10919/103425.

Full text
Abstract:
It is becoming more and more common for developers to build enterprise applications on the Spring framework or other Java frameworks. While developers enjoy the convenient implementations that web frameworks provide, they should pay attention to configuration deployment with metadata usage (i.e., Java annotations and XML deployment descriptors). Different formats of metadata can correspond to each other, and metadata usually exist in multiple files. Maintaining such metadata is challenging and time-consuming. Current compilers and research tools rarely inspect the XML files, let alone the correspondence between Java annotations and XML files. To help developers ensure the quality of metadata, this work presents a domain-specific language, RSL, and its engine, MeEditor. RSL facilitates pattern definition for correct metadata usage. MeEditor can take in specified rules and check Java projects for any rule violations. Developers can define rules with RSL covering the metadata usage and then run RSL scripts with MeEditor. Nine rules were extracted from the Spring specification and written in RSL. To evaluate the effectiveness and usefulness of MeEditor, we mined 180 plus 500 open-source projects from GitHub and conducted the evaluation in two steps. First, we evaluated the effectiveness of MeEditor by constructing a known ground-truth data set; on this data set, MeEditor identified metadata misuse and detected bugs with 94% precision, 94% recall, and 94% accuracy. Second, we evaluated the usefulness of MeEditor by applying it to real-world projects (500 projects in total). For the latest version of these 500 projects, MeEditor gave 79% precision according to our manual inspection. We then applied MeEditor to the version histories of rule-adopting projects, i.e., projects that adopt a rule and are identified as correct in their latest version. MeEditor identified 23 bugs, which were later fixed by developers.
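A toy version of the kind of cross-check such a tool performs (an XML deployment descriptor against Java annotations) might look like the sketch below. The descriptor, class names, and the crude annotation scan are simplified assumptions; RSL's rule language and MeEditor's analysis are far richer.

```python
import re
import xml.etree.ElementTree as ET

def classes_in_descriptor(xml_text):
    """Collect class names declared as beans in a (simplified) XML descriptor."""
    root = ET.fromstring(xml_text)
    return {bean.get("class") for bean in root.iter("bean") if bean.get("class")}

def annotated_classes(java_source):
    """Crude scan: a class counts as annotated if @Component/@Service precedes it."""
    return set(re.findall(r"@(?:Component|Service)\s+public\s+class\s+(\w+)", java_source))

descriptor = '<beans><bean id="orderService" class="OrderService"/></beans>'
source = "@Service public class OrderService { }"

missing = classes_in_descriptor(descriptor) - annotated_classes(source)
print(missing or "descriptor and annotations agree")
```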
APA, Harvard, Vancouver, ISO, and other styles
8

Xue, Xiaohui. "Génération et adaptation automatiques de mappings pour des sources de données XML." Phd thesis, Université de Versailles-Saint Quentin en Yvelines, 2006. http://tel.archives-ouvertes.fr/tel-00324429.

Full text
Abstract:
Integrating the information provided by multiple heterogeneous data sources is a growing need of today's information systems. In this context, the needs of applications are described by means of a target schema, and the way in which instances of the target schema are derived from the data sources is expressed by mappings. In this thesis, we are interested in the automatic generation of mappings for XML data sources, as well as in the adaptation of these mappings when changes occur in the target schema or in the data sources. We propose a three-phase approach to mapping generation: (i) decomposition of the target schema into subtrees, (ii) search for partial mappings for each of these subtrees, and finally (iii) generation of mappings for the whole target schema from these partial mappings. The result of our approach is a set of mappings, each with its own semantics. When the information required by the target schema is not present in the sources, no mapping is produced; in that case, we propose to relax certain constraints defined on the target schema so that mappings can be generated. We have developed a tool to support our approach. We have also proposed an approach for adapting existing mappings when changes occur in the sources or in the target schema.
APA, Harvard, Vancouver, ISO, and other styles
9

Fila, Barbara. "Automates pour l'analyse de documents XML compressés, applications à la sécurité d'accès." Phd thesis, Université d'Orléans, 2008. http://tel.archives-ouvertes.fr/tel-00491193.

Full text
Abstract:
The problem of information extraction from semi-structured documents, such as XML, is one of the most important areas of current computer science research. It has generated a large body of work, both practical and theoretical. In this thesis, our study pursues two objectives: 1. evaluation of queries over a document subject to an access control policy, 2. evaluation of queries over a document that may be partially or totally compressed. Our study focuses mainly on the evaluation of unary queries, i.e. queries that select the set of nodes of the document satisfying the properties specified by the query. To express queries, we use XPath, the main selection language for XML documents. Thanks to its navigational axes and its qualifying filters, XPath allows navigation within XML documents and the selection of the nodes answering the query. XPath expressions are the basis of several query formalisms such as XQuery and XSLT; they also make it possible to define access keys in XML Schema and XLink, and to reference the elements of an external document in XPointer.
APA, Harvard, Vancouver, ISO, and other styles
10

Schuhart, Henrike. "Design and implementation of a database programming language for XML-based applications." Berlin Aka, 2006. http://deposit.d-nb.de/cgi-bin/dokserv?id=2890794&prov=M&dok_var=1&dok_ext=htm.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Fila-Kordy, Barbara. "Automates pour l'analyse de documents XML compressés, applications à la sécurité d'accès." Orléans, 2008. http://www.theses.fr/2008ORLE2029.

Full text
Abstract:
The problem of information extraction from XML documents is an important research area in computer science. We propose two approaches for evaluating queries over XML documents, compressed or not, and/or subject to access control policies. The first is based on Horn logic: the access control policies as well as the queries are modelled as constrained Horn clauses, and query evaluation relies on an adapted resolution technique. The second approach targets XML documents that may be represented in compressed form as dags. It is based on seven word automata, corresponding to the seven basic axes of XPath. They run top-down over the dags and make it possible to evaluate queries over compressed documents without having to decompress them. This technique for evaluating a query Q over a compressed document t can be adapted so as to give the same answer as the evaluation of Q over the tree equivalent to t. The complexity of both our approaches is linear in the number of edges of the document and the size of the query. Besides these two approaches, we also present an approach based on rewriting techniques that solves the pattern containment problem for a purely descending fragment of XPath; we thus derive a necessary and sufficient condition for this problem (for the patterns of this fragment). This rewriting view can be adapted so as to provide an evaluation technique for the queries modelled by these patterns, over XML documents that are trees or that are compressed as dags.
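The compression both of these approaches target represents an XML tree as a dag by sharing identical subtrees. The Python sketch below shows only that sharing step, on an invented sample document; the seven XPath-axis word automata that run over the dag are not reproduced here.

```python
import xml.etree.ElementTree as ET

def to_dag(elem, table):
    """Hash-cons an element tree bottom-up; identical subtrees share one dag node id."""
    children = tuple(to_dag(child, table) for child in elem)
    key = (elem.tag, (elem.text or "").strip(), children)
    if key not in table:
        table[key] = len(table)   # allocate a fresh node id for an unseen subtree
    return table[key]

doc = ET.fromstring("<lib><book><t>XML</t></book><book><t>XML</t></book></lib>")
table = {}
to_dag(doc, table)
print(sum(1 for _ in doc.iter()), "tree nodes ->", len(table), "dag nodes")  # 5 tree nodes -> 3 dag nodes
```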
APA, Harvard, Vancouver, ISO, and other styles
12

Zou, Li. "A New Architecture for Developing Component-based Distributed Applications." University of Cincinnati / OhioLINK, 2000. http://rave.ohiolink.edu/etdc/view?acc_num=ucin974951548.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Hamilton, John, Ronald Fernandes, Mike Graul, and Charles H. Jones. "APPLICATIONS OF A HARDWARE SPECIFICATION FOR INSTRUMENTATION METADATA." International Foundation for Telemetering, 2007. http://hdl.handle.net/10150/604519.

Full text
Abstract:
ITC/USA 2007 Conference Proceedings / The Forty-Third Annual International Telemetering Conference and Technical Exhibition / October 22-25, 2007 / Riviera Hotel & Convention Center, Las Vegas, Nevada
In this paper, we discuss the benefits of maintaining a neutral-format hardware specification along with the telemetry metadata specification. We present several reasons and methods for maintaining the hardware specifications, as well as several potential uses of hardware specification. These uses include cross-validation with the telemetry metadata and automatic generation of both metadata and instrumentation networks.
APA, Harvard, Vancouver, ISO, and other styles
14

Lande, Daniel Ross. "Implementation of an XML-based user interface with applications in ice sheet modeling." The University of Montana, 2009. http://etd.lib.umt.edu/theses/available/etd-12152008-222224/.

Full text
Abstract:
The scientific domain presents unique challenges to software developers. This thesis describes the application of design patterns to the problem of dynamically changing interfaces to scientific application software (GLIMMER, which performs ice sheet modeling). In its present form, GLIMMER uses a text configuration file to define model behavior, set parameters, and structure model input/output (I/O). The creation of the configuration file presents a significant problem to users due to its format and complexity. GLIMMER is still under development, and the number of changes to configuration parameters, parameter types, and parameter dependencies makes development of any single interface of use only for a short term. The application of design patterns described here resulted in an interface specification tool that then generates multiple versions of a user interface usable across a wide variety of configuration parameter types, values, and dependencies. The resulting products have leveraged design patterns and solved problems associated with design pattern usage not found in the specialized software engineering literature.
APA, Harvard, Vancouver, ISO, and other styles
15

Sulewski, Joe, John Hamilton, Timothy Darr, and Ronald Fernandes. "Web Service Applications in Future T&E Scenarios." International Foundation for Telemetering, 2010. http://hdl.handle.net/10150/605923.

Full text
Abstract:
ITC/USA 2010 Conference Proceedings / The Forty-Sixth Annual International Telemetering Conference and Technical Exhibition / October 25-28, 2010 / Town and Country Resort & Convention Center, San Diego, California
In this paper, we discuss ways in which web services can be used in future T&E scenarios, from the initial hardware setup to making dynamic configuration changes and data requests. We offer a comparison of this approach to other standards such as SNMP, FTP, and RTSP, describing the pros and cons of each as well as how these standards can be used together for certain applications.
APA, Harvard, Vancouver, ISO, and other styles
16

Armold, Adrian D. "XML tactical chat (XTC) extensible messaging and presence protocol for command and control applications." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2006. http://library.nps.navy.mil/uhtbin/hyperion/06Sep%5FArmold.pdf.

Full text
Abstract:
Thesis (M.S. in Computer Science)--Naval Postgraduate School, September 2006. Thesis Advisor(s): Don Brutzman, Don McGregor. "September 2006." Includes bibliographical references. Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
17

Situ, Qihua Gina. "TaMeX, a task-structure based mediation architecture for integration of Web applications using XML." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ60498.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Hellström, Adrian. "Querying JSON and XML : Performance evaluation of querying tools for offline-enabled web applications." Thesis, Högskolan i Skövde, Institutionen för kommunikation och information, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-5915.

Full text
Abstract:
This article explores the viability of third-party JSON tools as an alternative to XML when an application requires querying and filtering of data, as well as how the application's behaviour differs between browsers. We examine and describe the querying alternatives as well as the technologies we worked with and used in the application. The application is built using HTML5 features such as local storage and canvas, and is benchmarked in Internet Explorer, Chrome and Firefox. The application is an animated infographic display that uses querying functions in JSON and XML to filter values from a dataset and then display them through the HTML5 canvas technology. The results were in favor of JSON and suggested that using third-party tools did not impact performance compared to native XML functions. In addition, the usage of JSON enabled easier development and cross-browser compatibility. Further research is proposed to examine document-based data filtering as well as to investigate why performance deviated between toolsets.
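The thesis benchmarks browser-side JavaScript tooling; purely to illustrate the kind of filtering task being compared, the sketch below runs the same query against JSON and XML encodings of a small invented dataset, in Python rather than in a browser.

```python
import json
import xml.etree.ElementTree as ET

# One tiny dataset in two encodings; field names are invented for the example.
records_json = '[{"year": 2008, "value": 12}, {"year": 2009, "value": 17}]'
records_xml = "<data><r year='2008' value='12'/><r year='2009' value='17'/></data>"

# JSON: parse, then filter with an ordinary comprehension.
json_hits = [r["value"] for r in json.loads(records_json) if r["year"] >= 2009]

# XML: parse, then filter the element nodes selected by a path expression.
xml_hits = [int(r.get("value"))
            for r in ET.fromstring(records_xml).findall(".//r")
            if int(r.get("year")) >= 2009]

print(json_hits, xml_hits)  # [17] [17]
```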
APA, Harvard, Vancouver, ISO, and other styles
19

Darr, Timothy, Ronald Fernandes, Michael Graul, John Hamilton, and Charles H. Jones. "Automated Configuration and Validation of Instrumentation Networks." International Foundation for Telemetering, 2008. http://hdl.handle.net/10150/606234.

Full text
Abstract:
ITC/USA 2008 Conference Proceedings / The Forty-Fourth Annual International Telemetering Conference and Technical Exhibition / October 27-30, 2008 / Town and Country Resort & Convention Center, San Diego, California
This paper describes the design and implementation of a test instrumentation network configuration and verification system. Given a multivendor instrument part catalog that contains sensor, actuator, transducer and other instrument data; user requirements (including desired measurement functions) and technical specifications; the instrumentation network configurator will select and connect instruments from the catalog that meet the requirements and technical specifications. The instrumentation network configurator will enable the goal of mixing and matching hardware from multiple vendors to develop robust solutions and to reduce the total cost of ownership for creating and maintaining test instrumentation networks.
APA, Harvard, Vancouver, ISO, and other styles
20

Chang, Robert C. 1979. "The encapsulation of legacy binaries using and XML-based approach with applications in ocean forecasting." Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/16963.

Full text
Abstract:
Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2003. Includes bibliographical references (p. 85-87). This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
This thesis presents an XML-based approach for the encapsulation of legacy binaries. A method that utilizes XML documents to describe the various parameters and settings for the compilation and execution of an encapsulated binary is discussed. The binary is treated as a black-box component and the XML description for that binary contains relevant restrictions, such as input and output files and runtime parameters read in from the standard input stream. The proposed XML schema design constrains the aforementioned XML descriptions of binaries. The usage parameters for the binaries are expressed by such XML documents. A prototype system is then able to take any of these schema-conforming XML descriptions and display the relevant user controls in a graphical user interface (GUI). Instead of editing obscure script files, the user can make changes to build-time and runtime parameters for a binary using the presented system interface. After validating the user inputs, the system generates the required script files automatically and proceeds to compile and/or execute the binary. The Primary Equation Model binary of the Harvard Ocean Prediction System (HOPS) was successfully encapsulated using the presented approach. The customization and control of the binary's compilation and execution through a GUI was achieved.
by Robert C. Chang. M.Eng.
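The idea of a schema-conforming XML description standing in for an obscure build/run script can be sketched as follows. The element names, parameters and the pe_model file names are hypothetical stand-ins, not the HOPS descriptions used in the thesis.

```python
import xml.etree.ElementTree as ET

# Hypothetical encapsulation record in the spirit of the abstract.
DESCRIPTION = """
<binary name="pe_model" path="./pe_model">
  <input  file="initial_conditions.dat"/>
  <output file="forecast.out"/>
  <param  flag="--steps" value="240"/>
</binary>
"""

def build_command(xml_text):
    """Turn a description of a black-box binary into the shell command it stands for."""
    b = ET.fromstring(xml_text)
    parts = [b.get("path")]
    for p in b.findall("param"):
        parts += [p.get("flag"), p.get("value")]
    parts += ["<", b.find("input").get("file"), ">", b.find("output").get("file")]
    return " ".join(parts)

print(build_command(DESCRIPTION))
# ./pe_model --steps 240 < initial_conditions.dat > forecast.out
```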
APA, Harvard, Vancouver, ISO, and other styles
21

Lemos, Fernando Cordeiro de. "Usando Assertivas de CorrespondÃncia para EspecificaÃÃo e GeraÃÃo de VisÃes XML para AplicaÃÃes Web." Universidade Federal do CearÃ, 2007. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=1404.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
Web applications that have a large number of pages, whose contents are dynamically extracted from one or more databases, and that require data-intensive access and update, are known as "data-intensive Web applications" (DIWA applications) [7]. In this work, the requirements for the content of each page of the application are specified by an XML view, which is called a Navigation View (NV). We assume that the data of the NVs are stored in a relational or XML database. We propose an approach to specify and generate NVs for Web applications whose content is extracted from one or more data sources. In the proposed approach, an NV is specified conceptually with the help of a set of Correspondence Assertions [44], so that the definition of the NV can be generated automatically from the assertions of the view.
APA, Harvard, Vancouver, ISO, and other styles
22

Sun, Yu. "Context-aware applications for a Pocket PC." Thesis, KTH, Kommunikationssystem, CoS, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-91938.

Full text
Abstract:
With the rapid development of technology for context awareness, pervasive computing is releasing people from their traditional desktops. Since mobile devices are portable and (nearly) always connected, people tend to carry them wherever they go. Hence, devices such as cellular phones and Pocket PCs are the most suitable platforms for developing context aware applications which users will utilize in their daily life. For these context aware systems, using context information not only improves the user experience of ubiquitous computing, but also lets the system know who you are or what you have. More importantly, the device can know where you are and predict what you might like to do, thus simplifying many of the user's interactions with devices and other people around them. This thesis project involves the design, implementation and evaluation of a context aware application, based upon a Pocket PC, that can remind the user of tasks when the user approaches the relevant location for a task. The application interacts with a context aware infrastructure using the SIP for Instant Messaging and Presence Leveraging Extensions (SIMPLE) protocol and receives context information about the user described in XML. A number of new tags, based upon a new XML schema, have been introduced for this task. This context aware mechanism enables the user to receive any form of information updated by the context server. In this thesis, updates to this information are driven by changes in the user's location. Additionally, by using the existing calendar application on the Pocket PC, the user can experience location-based reminders without learning how to use a new user interface.
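As an illustration of the kind of XML payload such a notification might carry, here is a small Python sketch; the tags and attributes are invented stand-ins, since the thesis's own schema is not reproduced in the abstract.

```python
import xml.etree.ElementTree as ET

# Hypothetical context payload delivered over SIMPLE; not the thesis's actual schema.
NOTIFY_BODY = """
<context user="sip:alice@example.org">
  <location area="grocery-store"/>
  <reminder task="buy milk" due="2007-05-14"/>
</context>
"""

ctx = ET.fromstring(NOTIFY_BODY)
if ctx.find("location").get("area") == "grocery-store":
    reminder = ctx.find("reminder")
    print("Reminder for", ctx.get("user") + ":", reminder.get("task"), "(due", reminder.get("due") + ")")
```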
APA, Harvard, Vancouver, ISO, and other styles
23

Revels, Kenneth W. "Constraints of Migrating Transplant Information System's Legacy Data to an XML Format For Medical Applications Use." NSUWorks, 2001. http://nsuworks.nova.edu/gscis_etd/799.

Full text
Abstract:
This dissertation presents the development of two methodologies to migrate legacy data elements to an open environment. Changes in the global economy and the increasingly competitive business climate are driving companies to manage legacy data in new ways. Legacy data is used for strategic decisions as well as short-term decisions. Data migration involves replacing problematic hardware and software. The legacy data elements are placed into different file formats and then migrated to open system environments. The purpose of this study was to develop migration methodologies to move legacy data to an XML format. The techniques used for developing the intermediate delimited file and the XML schema involved the use of system development life cycle (SDLC) procedures. These procedures are part of the overall SDLC methodologies used to guide this project to a successful conclusion. SDLC procedures helped in planning, scheduling, and implementing the project steps. This study presents development methodologies to create XML schemas, which saves man-hours. XML technology is very flexible in that it can be published to many different platforms that are ODBC compliant, and it uses TCP/IP as its transport protocol. This study provides a methodology that steers the step-by-step migration of legacy information to an open environment. The incremental migration methodology was used to create and migrate the intermediate legacy data elements, and the FAST methodology was used to develop the XML schema. As a result, the legacy data can reside in a more efficient and useful data processing environment.
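A minimal sketch of the delimited-file-to-XML step described above, assuming an invented record layout (the dissertation's intermediate format and target schema are not reproduced):

```python
import csv
import io
import xml.etree.ElementTree as ET

# Invented pipe-delimited intermediate data standing in for exported legacy records.
delimited = "patient_id|organ|listed_on\nP001|kidney|1998-04-02\nP002|liver|1999-11-17\n"

root = ET.Element("records")
for row in csv.DictReader(io.StringIO(delimited), delimiter="|"):
    record = ET.SubElement(root, "record")
    for field, value in row.items():
        ET.SubElement(record, field).text = value   # one element per legacy column

print(ET.tostring(root, encoding="unicode"))
```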
APA, Harvard, Vancouver, ISO, and other styles
24

Marinoiu, Bogdan-Eugen. "Monitoring of distributed applications in peer-to-peer systems." Paris 11, 2009. http://www.theses.fr/2009PA112053.

Full text
Abstract:
Peer-to-peer systems have gained momentum over the last few years. This thesis was motivated by the necessity to design tools to analyze such systems. It proposes a distributed monitoring system, P2PMonitor, that has alerters specialized in local monitoring of entities. These alerters encode the basic events detected into (Active)XML documents. A method is proposed for filtering the streams, whose purpose is to avoid as much as possible the materialization of information. Also, an algorithm is proposed for detecting which parts of a new task are already supported by the system; the algorithm derives a processing plan for the task that takes them into account. Complex and efficient stream processors are needed at the heart of P2PMonitor. The thesis shows how to build them from views over active documents and proposes a maintenance algorithm that scales and reduces computation time. Business processes have mainly been operation-centric, but recently business artifacts have been proposed and seem well adapted to the specification of data-centric applications. The thesis proposes a model for artifacts that is based on active documents and that captures their state, evolution, interactions and history. Artifact-based applications can be monitored, and the monitoring information obtained can be efficiently stored and indexed in the system, then queried with a simple yet powerful language that takes temporal aspects into consideration. The tools are tested on a pertinent class of distributed applications, and P2PMonitor is illustrated with a real application: the Dell supply chain.
APA, Harvard, Vancouver, ISO, and other styles
25

Colonna, François-Marie. "Intégration de données hétérogènes et distribuées sur le web et applications à la biologie." Aix-Marseille 3, 2008. http://www.theses.fr/2008AIX30050.

Full text
Abstract:
Over the past twenty years, the volume of data generated by genomics and biology has grown exponentially. The accumulation of this information has led to significant syntactic and semantic heterogeneity between sources, which makes the interoperation of publicly available or copyrighted data sources difficult. Thus, integrating heterogeneous data is nowadays one of the most important fields of research in databases, especially in the biological domain, for example for predictive medicine purposes. The work presented in this thesis is organised around two axes. The first deals with joining data sets from source to source, automating the manual extractions usually performed to cross-check data; this method is based on a description of source capabilities using feature logics. The second is a contribution to the development of a BGLAV mediation architecture based on the semi-structured model, for simple and flexible data integration using the XQuery language.
APA, Harvard, Vancouver, ISO, and other styles
26

Zhao, Yuxiao. "XML-based Frameworks for Internet Commerce and an Implementation of B2B e-procurement." Licentiate thesis, Linköping University, Linköping University, PELAB - Programming Environment Laboratory, 2001. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-5735.

Full text
Abstract:
It is not easy to apply XML in e-commerce development for achieving interoperability in heterogeneous environments. One of the reasons is the multitude of XML-based Frameworks for Internet Commerce (XFIC), or industrial standards. This thesis surveys 15 frameworks, i.e., ebXML, eCo Framework, UDDI, SOAP, BizTalk, cXML, ICE, Open Applications Group, RosettaNet, Wf-XML, OFX, VoiceXML, RDF, WSDL and xCBL. The thesis provides three models to systematically understand how the 15 frameworks meet the requirements of e-commerce. A hierarchical model is presented to show the purpose and focus of various XFIC initiatives. A relationship model is given to show the cooperative and competitive relationships between XFIC. A chronological model is provided to look at the development of XFIC. In addition, the thesis offers guidelines for how to apply XFIC in e-commerce development. We have also implemented a B2B e-procurement system. It not only demonstrates the feasibility of open-source software and freeware, but also validates the complementary roles of XML and Java: XML is for describing contents and Java is for automating XML documents (session handling). Auction-based dynamic pricing is also realized as a feature of interest. Moreover, the implementation shows the suitability of e-procurement for educational purposes in e-commerce development. Report code: LiU-Tek-Lic-2001:19.
APA, Harvard, Vancouver, ISO, and other styles
27

Lemos, Fernando Cordeiro de. "Usando Assertivas de Correspondência para Especificação e Geração de Visões XML para Aplicações Web." reponame:Repositório Institucional da UFC, 2007. http://www.repositorio.ufc.br/handle/riufc/17927.

Full text
Abstract:
LEMOS, Fernando Cordeiro de. Usando Assertivas de Correspondência para Especificação e Geração de Visões XML para Aplicações Web. 2007. 115 f. Dissertação (mestrado) - Universidade Federal do Ceará, Centro de Ciências, Departamento de Computação, Fortaleza-CE, 2007.
Web applications that have a large number of pages, whose contents are dynamically extracted from one or more databases, and that require data-intensive access and update, are known as "data-intensive Web applications" (DIWA applications) [7]. In this work, the requirements for the content of each page of the application are specified by an XML view, which is called a Navigation View (NV). We assume that the data of the NVs are stored in a relational or XML database. We propose an approach to specify and generate NVs for Web applications whose content is extracted from one or more data sources. In the proposed approach, an NV is specified conceptually with the help of a set of Correspondence Assertions [44], so that the definition of the NV can be generated automatically from the assertions of the view.
APA, Harvard, Vancouver, ISO, and other styles
28

Elsheh, Mohammed M. "Integration of relational database metadata and XML technology to develop an abstract framework to generate automatic and dynamic web entry forms." Thesis, University of Bradford, 2009. http://hdl.handle.net/10454/3346.

Full text
Abstract:
Developing interactive web application systems requires a large amount of effort on designing the database, system logic and user interface. These tasks are expensive and error-prone. Web application systems are accessed and used by many different sets of people with different backgrounds and numerous demands. Meeting these demands requires frequent updating of Web application systems, which is a very costly process. Thus, many attempts have been made to automate, to some degree, the construction of Web user interfaces. Three main directions have been cited for this purpose. The first direction suggested generating user interfaces from the application's data model. This path was able to generate the static layout of user interfaces with dynamic behaviour specified programmatically. The second tendency suggested deployment of the domain model to generate both the layout of a user interface and its dynamic behaviour. Web applications built based on this approach are most useful for domain-specific interfaces with a relatively fixed user dialogue. The last direction adopted the notion of deploying database metadata to develop dynamic user interfaces. Although the notion was quite valuable, its deployment did not present a generic solution for generating a variety of types of dynamic Web user interface targeting several platforms and electronic devices. This thesis has inherited the latter direction and presents significant improvements on the current deployment of this tendency. It aims to contribute towards the development of an abstract framework to generate abstract and dynamic Web user interfaces not targeted to any particular domain or platform. To achieve this target, the thesis proposes and evaluates a general notion for implementing a prototype system that uses an internal model (i.e. database metadata) in conjunction with XML technology. Database metadata is richer than any external model and provides the information needed to build dynamic user interfaces. In addition, XML technology has become the mainstream way of presenting and storing data in an abstract structure. It is widely adopted in the Web development community because of its ability to be transformed into many different formats with little effort. This thesis finds that only Java can provide us with a generalised database-metadata-based framework; other programming languages apply restrictions on accessing and extracting database metadata from numerous database management systems. Consequently, JavaServlets and a relational database were used to implement the proposed framework, and Java Database Connectivity (JDBC) was used to bridge the two technologies. The implementation of our proposed approach shows that it is possible and very straightforward to produce different automatic and dynamic Web entry forms that are not targeted at any particular platform. In addition, this approach can be applied to a particular domain without affecting the main notion or framework architecture. The implemented approach demonstrates a number of advantages over the other approaches based on external or internal models.
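A compact illustration of the internal-model idea (database metadata driving form generation) is sketched below using Python and sqlite3 rather than the thesis's JavaServlets/JDBC stack; the customer table and the type-to-widget mapping are assumptions made for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT NOT NULL, born DATE)")

def entry_form(conn, table):
    """Generate an HTML entry form from the table's column metadata."""
    widget = {"INTEGER": "number", "TEXT": "text", "DATE": "date"}
    fields = []
    for _cid, name, col_type, notnull, _default, pk in conn.execute(f"PRAGMA table_info({table})"):
        if pk:                                   # let the database assign primary keys
            continue
        required = " required" if notnull else ""
        fields.append(f'  <label>{name} <input name="{name}" '
                      f'type="{widget.get(col_type, "text")}"{required}/></label>')
    return "<form method='post'>\n" + "\n".join(fields) + "\n</form>"

print(entry_form(conn, "customer"))
```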
APA, Harvard, Vancouver, ISO, and other styles
29

Elsheh, Mohammed Mosbah. "Integration of relational database metadata and XML technology to develop an abstract framework to generate automatic and dynamic web entry forms." Thesis, University of Bradford, 2009. http://hdl.handle.net/10454/3346.

Full text
Abstract:
Developing interactive web application systems requires a large amount of effort on designing the database, system logic and user interface. These tasks are expensive and error-prone. Web application systems are accessed and used by many different sets of people with different backgrounds and numerous demands. Meeting these demands requires frequent updating of Web application systems, which is a very costly process. Thus, many attempts have been made to automate, to some degree, the construction of Web user interfaces. Three main directions have been cited for this purpose. The first direction suggested generating user interfaces from the application's data model. This path was able to generate the static layout of user interfaces with dynamic behaviour specified programmatically. The second tendency suggested deployment of the domain model to generate both the layout of a user interface and its dynamic behaviour. Web applications built based on this approach are most useful for domain-specific interfaces with a relatively fixed user dialogue. The last direction adopted the notion of deploying database metadata to develop dynamic user interfaces. Although the notion was quite valuable, its deployment did not present a generic solution for generating a variety of types of dynamic Web user interface targeting several platforms and electronic devices. This thesis has inherited the latter direction and presents significant improvements on the current deployment of this tendency. It aims to contribute towards the development of an abstract framework to generate abstract and dynamic Web user interfaces not targeted to any particular domain or platform. To achieve this target, the thesis proposes and evaluates a general notion for implementing a prototype system that uses an internal model (i.e. database metadata) in conjunction with XML technology. Database metadata is richer than any external model and provides the information needed to build dynamic user interfaces. In addition, XML technology has become the mainstream way of presenting and storing data in an abstract structure. It is widely adopted in the Web development community because of its ability to be transformed into many different formats with little effort. This thesis finds that only Java can provide us with a generalised database-metadata-based framework; other programming languages apply restrictions on accessing and extracting database metadata from numerous database management systems. Consequently, JavaServlets and a relational database were used to implement the proposed framework, and Java Database Connectivity (JDBC) was used to bridge the two technologies. The implementation of our proposed approach shows that it is possible and very straightforward to produce different automatic and dynamic Web entry forms that are not targeted at any particular platform. In addition, this approach can be applied to a particular domain without affecting the main notion or framework architecture. The implemented approach demonstrates a number of advantages over the other approaches based on external or internal models.
APA, Harvard, Vancouver, ISO, and other styles
30

Essid, Mehdi. "Intégration des données et applications hétérogènes et distribuées sur le Web." Aix-Marseille 1, 2005. http://www.theses.fr/2005AIX11035.

Full text
Abstract:
This thesis falls within the general framework of integrating heterogeneous data on the Web, and more specifically within the application setting of the geographic domain. Our work followed two directions: 1 - a "structured model" axis: we proposed the Grouper/Diviser (Group/Divide) rewriting algorithm for the case where keys are present. We then built a relational mediation system for GIS implementing this algorithm. The originality of our system lies in extending capabilities missing from the sources by integrating tools. 2 - a "semi-structured model" axis: we used the same strategy as Grouper/Diviser to propose a rewriting algorithm in an XML setting. We then built the VirGIS system, a standards-compliant mediation system. Thanks to its conformance to the OpenGIS and W3C standards, we deployed our system in the geographic setting while remaining generic enough for other domains.
APA, Harvard, Vancouver, ISO, and other styles
31

Eslami, Mohammad Zarifi. "A Presence Server for Context-aware Applications." Thesis, KTH, Kommunikationssystem, CoS, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-91937.

Full text
Abstract:
This master's thesis project "A Presence Server for Context-aware Applications" was carried out at the KTH Center for Wireless Systems (Wireless@KTH). The overall goal of this thesis project is to implement a context aware infrastructure to serve as middleware for different kinds of context aware applications, such as a context-aware printing application, a location-based notifier application, etc. The thesis examines different types of context aware architectures and considers different forms of context modeling. Additionally, the thesis explores some of the related technology in order to provide the reader with suitable background information to understand the rest of the thesis. By using the SIP Express Router (SER) and its presence module (pa), a context server has been designed, implemented, and evaluated. Evaluation reveals that the critical bottleneck is the increasing service time as the number of Publish messages for different events in the SER database increases, i.e. the time required for handling and sending the Notify messages when a new Publish message is received increases as a function of the number of earlier Publish messages. The evaluation also shows the dependence of SER upon the MySQL database, as incorrect database queries can cause SER to crash; additionally, the performance of the database limits the performance of the context server. A number of future improvements are necessary to address security issues (in particular the authentication of Watchers) and to add policy-based control so that Notify messages are sent only to the Watchers authorized to receive information for a specific event.
APA, Harvard, Vancouver, ISO, and other styles
32

Aklouf, Youcef. "Intégration du modèle d'ontologie PLIB et des services web dans les échanges inter-entreprises : applications au business to business (B2B)." Poitiers, 2007. http://www.theses.fr/2007POIT2275.

Full text
Abstract:
New electronic commerce models and standards have been developed in recent years, particularly for business-to-business (B2B) commerce. These models describe and formalize the business processes governing collaborations between partners in a commercial exchange. In parallel, new product data characterization models are being developed to represent the complex products of industrial enterprises. The PLIB model (ISO 13584) represents repetitive components implicitly, through constraints describing the components of each family. At present, electronic commerce standards of both kinds (vertical and horizontal) handle and describe product data insufficiently. Moreover, the various product data representation and characterization models from the knowledge engineering community describe product data in electronic catalogues in a technically precise way, but in turn lack the business protocols that govern commercial activities. The purpose of this thesis is to define an exchange model for professional electronic commerce (B2B) that integrates domain ontologies, business process models and the Web Services paradigm. This model supports the mechanisms needed to ensure this orthogonality, that is, to integrate content models with business process models.
APA, Harvard, Vancouver, ISO, and other styles
33

Liu, Feng. "Platform Independent Real-Time X3D Shaders and their Applications in Bioinformatics Visualization." Digital Archive @ GSU, 2007. http://digitalarchive.gsu.edu/cs_diss/24.

Full text
Abstract:
Since the introduction of programmable Graphics Processing Units (GPUs) and procedural shaders, hardware vendors have each developed their own individual real-time shading language standard. None of these shading languages is fully platform independent. Although this real-time programmable shader technology could be developed into 3D application on a single system, this platform dependent limitation keeps the shader technology away from 3D Internet applications. The primary purpose of this dissertation is to design a framework for translating different shader formats to platform independent shaders and embed them into the eXtensible 3D (X3D) scene for 3D web applications. This framework includes a back-end core shader converter, which translates shaders among different shading languages with a middle XML layer. Also included is a shader library containing a basic set of shaders that developers can load and add shaders to. This framework will then be applied to some applications in Biomolecular Visualization.
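As a rough illustration of the middle-XML idea described above (our sketch, not code from the dissertation), the Python fragment below wraps GLSL source strings in an X3D-style ComposedShader element; the abstract does not give the converter's actual element names, so those used here follow the common X3D programmable-shaders component and should be read as assumptions.

```python
# A minimal sketch (assumptions noted above): emitting an X3D-style
# ComposedShader element that embeds vertex/fragment shader source.
import xml.etree.ElementTree as ET

def wrap_shaders(vertex_src: str, fragment_src: str) -> str:
    """Return an X3D ComposedShader fragment carrying the two shader parts."""
    shader = ET.Element("ComposedShader", {"language": "GLSL"})
    for part_type, src in (("VERTEX", vertex_src), ("FRAGMENT", fragment_src)):
        part = ET.SubElement(shader, "ShaderPart", {"type": part_type})
        part.text = src  # inline source; a real scene might reference a url instead
    return ET.tostring(shader, encoding="unicode")

if __name__ == "__main__":
    vs = "void main() { gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex; }"
    fs = "void main() { gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); }"
    print(wrap_shaders(vs, fs))
```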
APA, Harvard, Vancouver, ISO, and other styles
34

Exposito, Garcia Ernesto José. "Spécification et mise en oeuvre d'un protocole de transport orienté Qualité de Service pour les applications multimédias." Toulouse, INPT, 2003. http://www.theses.fr/2003INPT028H.

Full text
Abstract:
Multimedia applications with strong constraints on timing, bandwidth and multimedia synchronization have now been designed. These applications can also tolerate an imperfect communication service (that is, a partially ordered and/or partially reliable service). However, the communication services currently available at the transport or network level do not fully meet these complex Quality of Service (QoS) requirements. This thesis proposes the design of a new-generation, QoS-oriented transport protocol: QoSTP (Quality of Service oriented Transport Protocol). The protocol was designed to provide a rich set of transport mechanisms answering application requirements precisely while using the available network resources and services. In addition, QoSTP was specified within a standard QoS framework able to provide an extensible semantic space and a composable architecture. This modelling principle makes it easier to extend and specialize the protocol to meet a large number of application requirements while taking into account all the services offered by the communication system. The design of QoSTP followed a dedicated methodology based on the Unified Development Process and on the UML and SDL languages. The methodology comprises three modelling stages: the definition of a global model, the technical definition of the transport service, and the behavioural specification of the protocol. Dedicated simulation scenarios were set up to validate the specification, and experiments involving standard multimedia applications made it possible to evaluate the performance of the protocol. A study proposing a deployment methodology based on programmable nodes within distributed multimedia architectures was also carried out.
APA, Harvard, Vancouver, ISO, and other styles
35

Semenski, Vedran. "An ABAC framework for IoT applications based on the OASIS XACML standard." Master's thesis, Universidade de Aveiro, 2015. http://hdl.handle.net/10773/18493.

Full text
Abstract:
IoT (Internet of Things) is an area which offers great opportunities and, although a lot of issues already have satisfactory solutions, security has remained somewhat unaddressed and is still a significant issue. Among the security aspects, we emphasize access control. Access control is a way of enforcing security that involves evaluating requests for accessing resources and denying access if it is unauthorised, thereby protecting vulnerable resources. Access control is a broad term that covers several methodologies, of which the most significant are: IBAC (Identity Based Access Control), RBAC (Role Based Access Control) and ABAC (Attribute Based Access Control). In this work ABAC is used, as it offers the most flexibility compared to IBAC and RBAC. Also, because of ABAC's adaptive nature, it offers longevity and lower maintenance requirements. OASIS (Organization for the Advancement of Structured Information Standards) developed the XACML (eXtensible Access Control Markup Language) standard for writing/defining requests and policies and for evaluating requests over sets of policies for the purpose of enforcing access control over resources. It is defined so that requests and policies are readable by humans but also have a well-defined structure allowing precise evaluation. The standard uses ABAC. This work aims to create a security framework that utilizes ABAC and the XACML standard so that it can be used by other systems to enforce access control over resources that need to be protected, allowing access only to authorised subjects. It also allows fine-grained definition of rules and requests for more precise evaluation and therefore a greater level of security. The primary use-case scenarios are large IoT applications such as Smart City applications, including smart traffic monitoring, energy and utility consumption, personal healthcare monitoring, etc. These applications deal with large quantities (Big Data) of confidential and/or personal data. A number of NoSQL (Not Only SQL) solutions exist for solving the problem of volume, but security is still an issue. This work uses two NoSQL databases: a key-value database (Redis) for storing policies, and a wide-column database (Cassandra) for storing sensor data and additional attribute data during testing.
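To make the attribute-based model concrete, the following is an illustrative Python sketch of ours (not the framework built in the dissertation): it evaluates a request against a tiny set of attribute-based permit rules in the deny-unless-permit spirit of XACML. All attribute names and rule values are invented for the example.

```python
# A minimal ABAC sketch (not the dissertation's framework): requests and
# rules are dictionaries of attributes, combined deny-unless-permit.
from typing import Dict, List

Attribute = Dict[str, str]

def rule_matches(rule: Attribute, request: Attribute) -> bool:
    """A rule matches when every attribute it constrains is present and equal."""
    return all(request.get(name) == value for name, value in rule.items())

def evaluate(permit_rules: List[Attribute], request: Attribute) -> str:
    """Deny-unless-permit combining: any matching rule yields Permit."""
    return "Permit" if any(rule_matches(r, request) for r in permit_rules) else "Deny"

if __name__ == "__main__":
    # Hypothetical smart-city policy: operators may read traffic-sensor data.
    policy = [{"subject-role": "operator", "resource-type": "traffic-sensor",
               "action": "read"}]
    request = {"subject-role": "operator", "resource-type": "traffic-sensor",
               "action": "read", "subject-id": "alice"}
    print(evaluate(policy, request))  # -> Permit
```

A real XACML deployment expresses the same idea as XML policies evaluated by a Policy Decision Point; the sketch only shows the attribute-matching and rule-combining logic.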
APA, Harvard, Vancouver, ISO, and other styles
36

Hacklin, Fredrik August. "A 3G Convergence Strategy for Mobile Business Middleware Solutions : Applications and Implications." Thesis, KTH, Mikroelektronik och Informationsteknik, IMIT, 2001. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-93278.

Full text
Abstract:
Mobile business solutions are one of the most attractive market segments of mobile information services. The third generation of mobile communication systems (3G) will be a significant step forward in the convergence of the telecommunications and data communications industries. More specifically, the convergence of mobile technologies and the Internet allows compelling possibilities for future applications and solutions. However, most current mobile businesses and mobile application and solution providers are rather contributing to the process of convergence; many current ideas and solutions are based on the restrictions of existing mobile networks combined with Internet-based services. In the future, when mobile networks and the Internet have merged, it will no longer be possible to create revenue with these types of solutions. One concrete solution is the mobile middleware concept, bridging the mobile technologies and Internet worlds. This Master’s thesis studies the middleware concept for providing business applications in the light of 3G, making strategic recommendations to a provider of these kinds of services. A comprehensive discussion of the developments after 3G is introduced. Alternative solutions are presented and some strategic implications are introduced. The implications are motivated by an industry survey carried out within this project. The topic of over-the-air data synchronization is discussed as an example of interim middleware. Mobile computing file system issues are seen as an interesting opportunity for business applications. The possibility of remote desktop screen access is studied, and measurements proving its feasibility for hosted wireless application service provision are made. Emerging mobile Java technologies are discussed as an efficient platform for providing ubiquitous, device-independent end-to-end solutions. As one of the recommended strategies, this thesis introduces the concept of hybrid thickness client applications as a feasible solution for migrating from current middleware solutions to an (uncertain) future of native, thick terminal applications, within a scope of two years. Based on this concept, a prototype for a 3G smartphone application was developed as an example. A set of possible strategic scenarios is presented and discussed. This thesis also discusses operator differentiation and business solutions in an all-IP based world. 3G networks and handset devices will introduce a large number of new applications and business opportunities, but such a change will also introduce new challenges and risks. The migration challenge is illustrated in the case of Smartner, a mobile middleware solution provider focusing on business applications. As shown by this case, compared to current enabling solutions, a major shift in technologies is seen as needed in order to maintain long-term success.
APA, Harvard, Vancouver, ISO, and other styles
37

Wilk, Artur. "Types for XML with Application to Xcerpt." Doctoral thesis, Linköping : Department of Computer and Information Science, Linköpings universitet, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-10687.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Ba, Mouhamadou Lamine. "Exploitation de la structure des données incertaines." Electronic Thesis or Diss., Paris, ENST, 2015. http://www.theses.fr/2015ENST0013.

Full text
Abstract:
This thesis addresses some fundamental problems inherent in the need for uncertainty handling in multi-source Web applications with structured information, namely uncertain version control in Web-scale collaborative editing platforms, integration of uncertain Web sources under constraints, and truth finding over structured Web sources. Its major contributions are: uncertainty management in version control of tree-structured data using a probabilistic XML model; initial steps towards a probabilistic XML data integration system for uncertain and dependent Web sources; precision measures for location data; and exploration algorithms for an optimal partitioning of the input attribute set during a truth finding process over conflicting Web sources.
APA, Harvard, Vancouver, ISO, and other styles
39

Phoungphol, Piyaphol. "An Automated XML-Based Webform Management System." Digital Archive @ GSU, 2007. http://digitalarchive.gsu.edu/cs_theses/41.

Full text
Abstract:
In a web application, the “webform” plays an important role in providing interaction between users and a server. To develop a webform in the conventional way, developers have to create many files, including HTML-JavaScript, SQL scripts, and several server-side programs to process the data. In this thesis, we propose a new language, Webform Language (WFL). WFL can considerably decrease the time needed to develop a webform by describing it in XML, from which a parser automatically generates all necessary files. In addition, we give users the option to describe a webform in another language, called Simple Webform Language (SWFL). The syntax of SWFL is simple and similar to the “CREATE TABLE” statement in SQL. When the parser processes a webform description in SWFL, it first translates the description to WFL and then processes it as normal WFL.
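The abstract does not give WFL's concrete grammar, so the following Python sketch of ours only illustrates the general idea of generating an HTML form from an XML webform description; the element and attribute names are invented for the example and are not the real WFL vocabulary.

```python
# A hypothetical sketch of generating an HTML form from an XML webform
# description (element names invented; WFL's real grammar is defined in the thesis).
import xml.etree.ElementTree as ET

WEBFORM_XML = """
<webform name="register" action="/register">
  <field name="username" type="text" label="User name"/>
  <field name="email" type="email" label="E-mail"/>
</webform>
"""

def to_html(webform_xml: str) -> str:
    """Translate the XML form description into a plain HTML form."""
    form = ET.fromstring(webform_xml)
    rows = []
    for field in form.findall("field"):
        rows.append(
            '<label>{label} <input name="{name}" type="{type}"/></label>'.format(
                **field.attrib))
    return '<form action="{}">\n  {}\n</form>'.format(
        form.get("action"), "\n  ".join(rows))

if __name__ == "__main__":
    print(to_html(WEBFORM_XML))
```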
APA, Harvard, Vancouver, ISO, and other styles
40

Hasan, Noor 1963. "Application of XML in B2B financial services." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/9277.

Full text
Abstract:
Thesis (S.M.M.O.T.)--Massachusetts Institute of Technology, Sloan School of Management, Management of Technology Program, 2000. Includes bibliographical references (leaves 100-104). The financial services industry is undergoing tremendous transformation due to regulatory changes and technological developments. The thesis discusses these changes, including the advent of the internet and how it is impacting the financial services industry. The paper provides a detailed account of XML's evolution and its comparison with SGML and HTML. Several standards bodies have been formed over the past few years to define and promote XML-based standards for various industries. Even though XML is still evolving, there is wide consensus that it will be the enabler for disparate systems to communicate with each other. The research provides an overview of various XML standards pertaining to financial services and the firms behind these standards. The author concludes that several standards within financial services will co-exist and the industry will converge on these standards. The thesis also provides an overview of some financial applications that are XML compliant, along with examples of first-mover financial services firms that have successfully applied XML to address systems issues. Based on the XML standards, changes in the industry and customer needs, the author predicts some future trends and milestones for the financial services industry. These include general changes in the industry landscape, formation of a Central Limit Order Book (CLOB), emergence of hubs and exchanges, global straight-through processing, a settlement time of T+0, and the emergence of aggregators and enterprise portals. The future trends section further discusses the role of XML in this changing environment and how it will help achieve some of the key breakthroughs that were not possible before. In order to fully harness the potential of XML, firms need to understand its various elements. The last section of the thesis provides an overview of internal factors, issues around understanding DTDs, and other relevant factors firms need to consider for successful implementation. The factors are based on the author's own understanding of XML, issues faced by the financial services industry, and interviews with financial services firms.
APA, Harvard, Vancouver, ISO, and other styles
41

ZHAN, YUNSONG. "XML-BASED DATA INTEGRATION FOR APPLICATION INTEROPERABILITY." University of Cincinnati / OhioLINK, 2002. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1020432798.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Pehcevski, Jovan. "Evaluation of Effective XML Information Retrieval." RMIT University. Computer Science and Information Technology, 2007. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20080104.142709.

Full text
Abstract:
XML is being adopted as a common storage format in scientific data repositories, digital libraries, and on the World Wide Web. Accordingly, there is a need for content-oriented XML retrieval systems that can efficiently and effectively store, search and retrieve information from XML document collections. Unlike traditional information retrieval systems where whole documents are usually indexed and retrieved as information units, XML retrieval systems typically index and retrieve document components of varying granularity. To evaluate the effectiveness of such systems, test collections where relevance assessments are provided according to an XML-specific definition of relevance are necessary. Such test collections have been built during four rounds of the INitiative for the Evaluation of XML Retrieval (INEX). There are many different approaches to XML retrieval; most approaches either extend full-text information retrieval systems to handle XML retrieval, or use database technologies that incorporate existing XML standards to handle both XML presentation and retrieval. We present a hybrid approach to XML retrieval that combines text information retrieval features with XML-specific features found in a native XML database. Results from our experiments on the INEX 2003 and 2004 test collections demonstrate the usefulness of applying our hybrid approach to different XML retrieval tasks. A realistic definition of relevance is necessary for meaningful comparison of alternative XML retrieval approaches. The three relevance definitions used by INEX since 2002 comprise two relevance dimensions, each based on topical relevance. We perform an extensive analysis of the two INEX 2004 and 2005 relevance definitions, and show that assessors and users find them difficult to understand. We propose a new definition of relevance for XML retrieval, and demonstrate that a relevance scale based on this definition is useful for XML retrieval experiments. Finding the appropriate approach to evaluate XML retrieval effectiveness is the subject of ongoing debate within the XML information retrieval research community. We present an overview of the evaluation methodologies implemented in the current INEX metrics, which reveals that the metrics follow different assumptions and measure different XML retrieval behaviours. We propose a new evaluation metric for XML retrieval and conduct an extensive analysis of the retrieval performance of simulated runs to show what is measured. We compare the evaluation behaviour obtained with the new metric to the behaviours obtained with two of the official INEX 2005 metrics, and demonstrate that the new metric can be used to reliably evaluate XML retrieval effectiveness. To analyse the effectiveness of XML retrieval in different application scenarios, we use evaluation measures in our new metric to investigate the behaviour of XML retrieval approaches under the following two scenarios: the ad-hoc retrieval scenario, exploring the activities carried out as part of the INEX 2005 Ad-hoc track; and the multimedia retrieval scenario, exploring the activities carried out as part of the INEX 2005 Multimedia track. For both application scenarios we show that, although different values for retrieval parameters are needed to achieve the optimal performance, the desired textual or multimedia information can be effectively located using a combination of XML retrieval approaches.
APA, Harvard, Vancouver, ISO, and other styles
43

von Wenckstern, Michael. "Web applications using the Google Web Toolkit." Master's thesis, Technische Universitaet Bergakademie Freiberg Universitaetsbibliothek "Georgius Agricola", 2013. http://nbn-resolving.de/urn:nbn:de:bsz:105-qucosa-115009.

Full text
Abstract:
This diploma thesis describes how to create or convert traditional Java programs into desktop-like rich internet applications with the Google Web Toolkit. The Google Web Toolkit is an open source development environment which translates Java code into browser- and device-independent HTML and JavaScript. Most of the GWT framework parts, including the Java-to-JavaScript compiler as well as important security issues of websites, are introduced. The well-known Agricola board game is implemented in the Model-View-Presenter pattern to show that complex user interfaces can be created with the Google Web Toolkit. The Google Web Toolkit framework is compared with the JavaServer Faces framework to find out which toolkit is the right one for the next web project.
APA, Harvard, Vancouver, ISO, and other styles
44

Elbekai, Ali Sayeh. "Generic model for application driven XML data processing." Thesis, Northumbria University, 2006. http://nrl.northumbria.ac.uk/55/.

Full text
Abstract:
XML technology has emerged during recent years as a popular choice for representing and exchanging semi-structured data on the Web, and it integrates seamlessly with web-based applications. If data is stored and represented as XML documents, then it should be possible to query these documents in order to extract, synthesize and analyze their contents. This thesis presents an experimental study of a Web architecture for data processing based on semantic mapping of XML Schema. It involves methods and tools for the specification, algorithmic transformation and online processing of semi-structured data over the Web in XML format, with persistent storage into relational databases. The main focus of the research is preserving the structure of the original data for data reconciliation during database updates, and combining different technologies for XML data processing, such as storage (SQL), transformation (XSL processors), presentation (HTML), querying (XQuery) and transport (Web services), within a common framework that is both theoretically and technologically well grounded. The experimental implementation of the discussed architecture requires a Web server (Apache), a Java container (Tomcat) and an object-relational DBMS (Oracle 9) equipped with a Java engine and the corresponding libraries for parsing and transformation of XML data (Xerces and Xalan). The central idea behind the research is to use a single theoretical model of the data to be processed by the system (an XML algebra), controlled by one standard metalanguage specification (XML Schema), to solve a class of problems (a generic architecture). The proposed work combines theoretical novelty and technological advancement in the field of Internet computing. The thesis introduces a generic approach, since both our model (the XML algebra) and our problem solver (the architecture of the integrated system) are XML Schema-driven. Starting from the XML Schema of the data, we first develop a domain-specific XML algebra suitable for processing the specific data and then use it to implement the main offline components of the data processing system.
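As a rough, hedged illustration of the storage side of such an architecture (our sketch, not the thesis's implementation, which targets Oracle with Xerces/Xalan), the following Python fragment shreds a small XML document into a relational table; the element and column names are invented and SQLite stands in for the object-relational DBMS.

```python
# A minimal sketch of shredding XML into a relational table (names invented;
# SQLite stands in for the Oracle back end described in the abstract).
import sqlite3
import xml.etree.ElementTree as ET

BOOKS_XML = """
<catalog>
  <book id="1"><title>XML in Practice</title><price>25.00</price></book>
  <book id="2"><title>Semi-structured Data</title><price>40.50</price></book>
</catalog>
"""

def load_books(xml_text: str, conn: sqlite3.Connection) -> None:
    """Parse the XML catalog and persist each book row into a relational table."""
    conn.execute("CREATE TABLE IF NOT EXISTS book (id INTEGER PRIMARY KEY, "
                 "title TEXT, price REAL)")
    root = ET.fromstring(xml_text)
    for book in root.findall("book"):
        conn.execute("INSERT INTO book VALUES (?, ?, ?)",
                     (int(book.get("id")), book.findtext("title"),
                      float(book.findtext("price"))))
    conn.commit()

if __name__ == "__main__":
    with sqlite3.connect(":memory:") as conn:
        load_books(BOOKS_XML, conn)
        print(conn.execute("SELECT title, price FROM book").fetchall())
```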
APA, Harvard, Vancouver, ISO, and other styles
45

Hall, Benjamin Fisher. "XML theory and practice through an application feasibility study." Thesis, Georgia Institute of Technology, 1999. http://hdl.handle.net/1853/17584.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Kalibjian, Jeffrey R. "THE IMPACT OF XML SECURITY STANDARDS ON MANAGING POST PROCESSED TELEMETRY DATA." International Foundation for Telemetering, 2003. http://hdl.handle.net/10150/606667.

Full text
Abstract:
International Telemetering Conference Proceedings / October 20-23, 2003 / Riviera Hotel and Convention Center, Las Vegas, Nevada. Today many organizations use the Secure Sockets Layer protocol (SSL, now known as TLS, or Transport Layer Security) to secure post processed telemetry data transmitted over internal or external Internet Protocol (IP) networks. While TLS secures data traveling over a network, it does not protect data after it reaches its end point. In the Open Systems Interconnection (OSI) layer model, TLS falls several layers below the application category. This implies that applications utilizing data delivered by TLS have no way of evaluating whether data has been compromised before TLS encryption (from a source), or after TLS decryption (at the destination). This security “gap” can be addressed by adoption of a security infrastructure that allows security operations to be abstracted at an OSI application level.
APA, Harvard, Vancouver, ISO, and other styles
47

Bargelis, Tautvydas. "Interaktyvios interneto sąsajos kūrimo metodika integruojant Flash, XML, ASP.NET, MSSQL technologijas." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2006. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2006~D_20060111_153404-14817.

Full text
Abstract:
The Internet has changed considerably in recent years; conventional static user interfaces satisfy users' requirements poorly, if at all. This motivates the analysis of a newer class of internet presentation-level techniques: rich internet applications (RIA). The main RIA technologies and several internet projects were analyzed in this thesis. Macromedia Flash technology was chosen because of its flexibility, multiplatform implementation, rich user experience, and good integration with various server technologies. An experimental system, a ferry transport booking system, was built using Macromedia Flash technology and implemented with Microsoft server technologies. Experimental research on this project is described, and a comparative analysis of the advantages of rich internet applications over simple static interfaces is provided.
APA, Harvard, Vancouver, ISO, and other styles
48

Weisflog, Jens Nguema. "An XML application-based interface to developing modular system simulations." College Park, Md.: University of Maryland, 2008. http://hdl.handle.net/1903/7853.

Full text
Abstract:
Thesis (M.S.) -- University of Maryland, College Park, 2008. Thesis research directed by: Institute for Systems Research. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
APA, Harvard, Vancouver, ISO, and other styles
49

Müller, Martin. "Vývoj aplikace pro prezentaci produktů zadavatele." Master's thesis, Vysoká škola ekonomická v Praze, 2013. http://www.nusl.cz/ntk/nusl-199743.

Full text
Abstract:
This master's thesis aims to develop a mobile application for the Google Android operating system based on customer requirements. The application should showcase the customer's products, display a calendar of courses, and let the user book a chosen course term. The work begins with a review of selected books on Android application development, after which the client requirements are defined. Based on the results of this research and the defined customer requirements, detailed application specifications are laid down, followed by a presentation of the architecture of the whole application. The work ends with a pair of manuals: the first is intended for the application administrator, to be used when modifying and updating data; the second is for the user, guiding installation of the application on a device and introducing its controls. The installation file of the developed application is included in the appendices of this work.
APA, Harvard, Vancouver, ISO, and other styles
50

Lehman, Jeffrey L. "An Extensible Markup Language (XML) Application for the University Course Timetabling Problem." NSUWorks, 2004. http://nsuworks.nova.edu/gscis_etd/666.

Full text
Abstract:
The university course timetabling problem involves the assignment of instructors, courses, and course sections to meeting rooms, dates, and times. Timetabling research has generally focused on the algorithms and techniques for solving specific scheduling problems, and the independent evaluation and comparison of timetabling problems and solutions is limited by the lack of a standard timetabling language. This dissertation created an Extensible Markup Language (XML) application, called Course Markup Language (CourseML), for the university course timetabling problem. CourseML addressed the need for a standardized timetabling language to facilitate the efficient exchange of timetabling data and provided a means for the independent evaluation and comparison of timetabling problems and solutions. A sample real-world university course timetabling problem was defined and expressed in CourseML, and CourseML was evaluated on how well it captured the sample problem, including hard and soft constraints, and how well it represented a solution instance. The qualities that make CourseML a candidate for general use were identified, as were the characteristics that make XML an appropriate language for specifying university course timetabling problems and solutions.
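The dissertation defines CourseML's actual schema; purely as an illustration of the kind of XML document such a language might describe (our invented element names, not the real CourseML vocabulary), a timetabling problem fragment could be sketched with Python as follows.

```python
# A hypothetical sketch of a CourseML-like timetabling fragment (element names
# invented for illustration; the real CourseML schema is defined in the dissertation).
import xml.etree.ElementTree as ET

def sample_problem() -> str:
    """Build a tiny timetabling-problem document with one course and two constraints."""
    problem = ET.Element("timetablingProblem")
    course = ET.SubElement(problem, "course", {"code": "CS101", "sections": "2"})
    ET.SubElement(course, "instructor").text = "Dr. Smith"
    ET.SubElement(course, "hardConstraint", {"type": "room-capacity", "min": "30"})
    ET.SubElement(course, "softConstraint", {"type": "preferred-time",
                                             "value": "morning"})
    return ET.tostring(problem, encoding="unicode")

if __name__ == "__main__":
    print(sample_problem())
```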
APA, Harvard, Vancouver, ISO, and other styles
