
Journal articles on the topic 'MathML (Document markup language)'

Consult the top 50 journal articles for your research on the topic 'MathML (Document markup language).'

1

White, Jason J. G. "The Accessibility of Mathematical Notation on the Web and Beyond." Journal of Science Education for Students with Disabilities 23, no. 1 (October 21, 2020): 1–14. http://dx.doi.org/10.14448/jsesd.12.0013.

Abstract:
This paper serves two purposes. First, it offers an overview of the role of the Mathematical Markup Language (MathML) in representing mathematical notation on the Web, and its significance for accessibility. To orient the discussion, hypotheses are advanced regarding users’ needs in connection with the accessibility of mathematical notation. Second, current developments in the evolution of MathML are reviewed, noting their consequences for accessibility, and commenting on prospects for future improvement in the concrete experiences of users of assistive technologies. Recommendations are advanced for further research and development activities, emphasizing the cognitive aspects of user interface design.
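As a concrete illustration of the notation White discusses (a generic sketch, not drawn from the paper), the fragment below uses Python's standard xml.etree.ElementTree to build Presentation MathML for the expression x^2 + 1; assistive technologies consume exactly this kind of structure:

```python
import xml.etree.ElementTree as ET

# Build Presentation MathML for the expression x^2 + 1.
math = ET.Element("math", xmlns="http://www.w3.org/1998/Math/MathML")
msup = ET.SubElement(math, "msup")    # superscript: base then exponent
ET.SubElement(msup, "mi").text = "x"  # mi = identifier
ET.SubElement(msup, "mn").text = "2"  # mn = number
ET.SubElement(math, "mo").text = "+"  # mo = operator
ET.SubElement(math, "mn").text = "1"

xml_text = ET.tostring(math, encoding="unicode")
print(xml_text)
```

Because the markup names the mathematical roles (identifier, number, operator) rather than only the visual layout, a screen reader can in principle announce "x squared plus one" instead of reading glyph positions.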
2

Saadawi, Gilan M., and James H. Harrison. "Definition of an XML Markup Language for Clinical Laboratory Procedures and Comparison with Generic XML Markup." Clinical Chemistry 52, no. 10 (October 1, 2006): 1943–51. http://dx.doi.org/10.1373/clinchem.2006.071449.

Abstract:
Background: Clinical laboratory procedure manuals are typically maintained as word processor files and are inefficient to store and search, require substantial effort for review and updating, and integrate poorly with other laboratory information. Electronic document management systems could improve procedure management and utility. As a first step toward building such systems, we have developed a prototype electronic format for laboratory procedures using Extensible Markup Language (XML). Methods: Representative laboratory procedures were analyzed to identify document structure and data elements. This information was used to create a markup vocabulary, CLP-ML, expressed as an XML Document Type Definition (DTD). To determine whether this markup provided advantages over generic markup, we compared procedures structured with CLP-ML or with the vocabulary of the Health Level Seven, Inc. (HL7) Clinical Document Architecture (CDA) narrative block. Results: CLP-ML includes 124 XML tags and supports a variety of procedure types across different laboratory sections. When compared with a general-purpose markup vocabulary (CDA narrative block), CLP-ML documents were easier to edit and read, less complex structurally, and simpler to traverse for searching and retrieval. Conclusion: In combination with appropriate software, CLP-ML is designed to support electronic authoring, reviewing, distributing, and searching of clinical laboratory procedures from a central repository, decreasing procedure maintenance effort and increasing the utility of procedure information. A standard electronic procedure format could also allow laboratories and vendors to share procedures and procedure layouts, minimizing duplicative word processor editing. Our results suggest that laboratory-specific markup such as CLP-ML will provide greater benefit for such systems than generic markup.
3

Power, Richard, Donia Scott, and Nadjet Bouayad-Agha. "Document Structure." Computational Linguistics 29, no. 2 (June 2003): 211–60. http://dx.doi.org/10.1162/089120103322145315.

Abstract:
We argue the case for abstract document structure as a separate descriptive level in the analysis and generation of written texts. The purpose of this representation is to mediate between the message of a text (i.e., its discourse structure) and its physical presentation (i.e., its organization into graphical constituents like sections, paragraphs, sentences, bulleted lists, figures, and footnotes). Abstract document structure can be seen as an extension of Nunberg's “text-grammar”; it is also closely related to “logical” markup in languages like HTML and LaTeX. We show that by using this intermediate representation, several subtasks in language generation and language understanding can be defined more cleanly.
4

Bleeker, J. "Standard Generalized Markup Language (SGML)." Toegepaste Taalwetenschap in Artikelen 28 (January 1, 1987): 154–81. http://dx.doi.org/10.1075/ttwia.28.14ble.

Abstract:
The traditional way of creating and typesetting a manuscript hampers the necessary modernization of the production process, and particularly the dissemination and accessibility of information. This is caused by the use of word-processing packages and the nature of typesetting instructions. Both word-processing codes and typesetting codes contain insufficient information, because they aim only at a single presentation of the text. Scientific publications, however, can be distributed in many different forms: on paper, in all possible layouts, or in whole or in part via electronic means such as floppy disks, compact disks, data communication, etc. In addition, information should be accessible from many points of view. New electronic tools (i.e. microcomputers) and databases with advanced search software offer the technical possibilities for this. The Standard Generalized Markup Language, the new ISO standard, is a method of recording texts in such a way that the aforementioned goals can be achieved. This method has two basic principles: 1. the descriptors of texts (called SGML tags) must be based on content and not on form; 2. the SGML tags used for the description of texts must be defined in a document description. This rests on the principle that texts are structured independently of their purpose. It enables one to describe the elements of which a text consists, the order they must have in the text, and whether they are optional, obligatory, and/or repetitive. The description of the content of texts makes it possible to create conversions (via software) to a diversity of printed and electronic forms (distribution). It is also possible to search databases for, e.g., an article about a certain subject or written by an author from a particular institute or university (information retrieval).
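The first SGML principle, tags based on content rather than form, can be made concrete with a small sketch (the tag names are invented for illustration, not taken from any real DTD): one content-tagged record yields two independent presentations.

```python
import xml.etree.ElementTree as ET

# A content-tagged record in the spirit of SGML: tags name what the
# text IS (title, author), not how it should look.
doc = ET.fromstring(
    "<article><title>On Markup</title><author>A. Writer</author></article>"
)

# Two independent "presentations" derived from the same content markup:
def as_html(a):
    return "<h1>{}</h1><p>by {}</p>".format(a.findtext("title"), a.findtext("author"))

def as_plain(a):
    return "{} / {}".format(a.findtext("title"), a.findtext("author"))

print(as_html(doc))
print(as_plain(doc))
```

Because the tags describe content, the same source can be converted to any number of printed or electronic forms, which is exactly the distribution argument the abstract makes.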
5

Chu, Josey Y. M., William L. Palya, and Donald E. Walter. "Creating a hypertext markup language document for an information server." Behavior Research Methods, Instruments, & Computers 27, no. 2 (June 1995): 200–205. http://dx.doi.org/10.3758/bf03204732.

6

Hucka, Michael, Frank T. Bergmann, Stefan Hoops, Sarah M. Keating, Sven Sahle, James C. Schaff, Lucian P. Smith, and Darren J. Wilkinson. "The Systems Biology Markup Language (SBML): Language Specification for Level 3 Version 1 Core." Journal of Integrative Bioinformatics 12, no. 2 (June 1, 2015): 382–549. http://dx.doi.org/10.1515/jib-2015-266.

Abstract:
Computational models can help researchers to interpret data, understand biological function, and make quantitative predictions. The Systems Biology Markup Language (SBML) is a file format for representing computational models in a declarative form that can be exchanged between different software systems. SBML is oriented towards describing biological processes of the sort common in research on a number of topics, including metabolic pathways, cell signaling pathways, and many others. By supporting SBML as an input/output format, different tools can all operate on an identical representation of a model, removing opportunities for translation errors and assuring a common starting point for analyses and simulations. This document provides the specification for Version 1 of SBML Level 3 Core. The specification defines the data structures prescribed by SBML as well as their encoding in XML, the eXtensible Markup Language. This specification also defines validation rules that determine the validity of an SBML document, and provides many examples of models in SBML form. Other materials and software are available from the SBML project web site, http://sbml.org/.
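To make the "identical representation" point concrete, here is a minimal sketch (not from the specification itself; a real Level 3 model carries required attributes omitted here for brevity) that reads species out of an SBML-style fragment with Python's standard library:

```python
import xml.etree.ElementTree as ET

# A heavily abridged SBML-style fragment in the Level 3 Core namespace.
NS = "{http://www.sbml.org/sbml/level3/version1/core}"
sbml = ET.fromstring(
    '<sbml xmlns="http://www.sbml.org/sbml/level3/version1/core">'
    "<model><listOfSpecies>"
    '<species id="glucose"/><species id="ATP"/>'
    "</listOfSpecies></model></sbml>"
)

# Any SBML-aware tool traverses the same declarative structure:
species_ids = [s.get("id") for s in sbml.iter(NS + "species")]
print(species_ids)
```

Because every tool parses the same XML structure, no tool-specific translation step can silently alter the model, which is the interoperability benefit the abstract describes.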
7

Cuellar, Autumn, Warren Hedley, Melanie Nelson, Catherine Lloyd, Matt Halstead, David Bullivant, David Nickerson, Peter Hunter, and Poul Nielsen. "The CellML 1.1 Specification." Journal of Integrative Bioinformatics 12, no. 2 (June 1, 2015): 4–85. http://dx.doi.org/10.1515/jib-2015-259.

Abstract:
This document specifies CellML 1.1, an XML-based language for describing and exchanging models of cellular and subcellular processes. MathML embedded in CellML documents is used to define the underlying mathematics of models. Models consist of a network of reusable components, each with variables and equations manipulating those variables. Models may import other models to create systems of increasing complexity. Metadata may be embedded in CellML documents using RDF.
8

Kahn, C. E. "A Generalized Language for Platform-Independent Structured Reporting." Methods of Information in Medicine 36, no. 03 (July 1997): 163–71. http://dx.doi.org/10.1055/s-0038-1636826.

Abstract:
Structured reporting systems allow health-care workers to record observations using predetermined data elements and formats. The author developed the Data-entry and Reporting Markup Language (DRML) to provide a generalized representational language for describing concepts to be included in structured reporting applications. DRML is based on the Standard Generalized Markup Language (SGML), an internationally accepted standard for document interchange. The use of DRML is demonstrated with the SPIDER system, which uses public-domain Internet technology for structured data entry and reporting. SPIDER uses DRML documents to create structured data-entry forms, outline-format textual reports, and datasets for analysis of aggregate results. Applications of DRML include its use in radiology results reporting and a health status questionnaire. DRML allows system designers to create a wide variety of clinical reporting applications and survey instruments, and helps overcome some of the limitations seen in earlier structured reporting systems.
9

Vacharaskunee, Sutheetutt, and Sarun Intakosum. "A Method of Recommendation the Most Used XML Tags." Advanced Materials Research 931-932 (May 2014): 1353–59. http://dx.doi.org/10.4028/www.scientific.net/amr.931-932.1353.

Abstract:
Processing of large data sets, known today as big data processing, is still a problem without a well-defined solution. The data can be both structured and unstructured. For the structured part, the eXtensible Markup Language (XML) is a major tool that freely allows document owners to describe and organize their data using their own markup tags. One major problem behind this freedom, however, lies in retrieving big data: the same or similar information described with different tags or different structures may not be retrieved if the query statement contains keywords different from those used in the markup tags. The best way to solve this problem is to specify a standard set of markup tags for each problem domain. Creating such a standard set manually requires a lot of hard work, is time consuming, and may yield terms that are not acceptable to everyone. This research proposes a model for a new technique, XML Tag Recommendation (XTR), that aims to solve this problem. The technique applies the idea of Case-Based Reasoning (CBR) by collecting the most used tags in each domain as a case. These tags come from collections of related words in WordNet, and WordCount, a web site that reports word frequencies, is used to choose the most used term. The input (problem) to the XTR system is an XML document containing the tags specified by the document owner; the solution is a set of recommended tags, the most used ones, for the document's problem domain. Document owners are free to change or keep the tags in their documents and can provide feedback to the XTR system.
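The recommendation step can be sketched as tag rewriting. In the fragment below, a hard-coded synonym table stands in for the WordNet and WordCount lookups the paper describes (the table and tag names are invented for illustration):

```python
import xml.etree.ElementTree as ET

# Stand-in for the synonym + frequency lookup: each less-common tag
# maps to the most frequently used member of its synonym set.
PREFERRED = {"cost": "price", "charge": "price", "writer": "author"}

def recommend_tags(xml_text):
    """Return the document with each tag replaced by its recommended form."""
    root = ET.fromstring(xml_text)
    for el in root.iter():
        el.tag = PREFERRED.get(el.tag, el.tag)
    return ET.tostring(root, encoding="unicode")

out = recommend_tags("<book><writer>Doe</writer><cost>9.99</cost></book>")
print(out)
```

With tags normalized to a shared vocabulary, a query for "price" also reaches documents whose owners originally wrote "cost", which is the retrieval gap the abstract identifies.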
10

Hucka, Michael, Frank T. Bergmann, Andreas Dräger, Stefan Hoops, Sarah M. Keating, Nicolas Le Novère, Chris J. Myers, et al. "Systems Biology Markup Language (SBML) Level 2 Version 5: Structures and Facilities for Model Definitions." Journal of Integrative Bioinformatics 12, no. 2 (June 1, 2015): 731–901. http://dx.doi.org/10.1515/jib-2015-271.

Abstract:
Computational models can help researchers to interpret data, understand biological function, and make quantitative predictions. The Systems Biology Markup Language (SBML) is a file format for representing computational models in a declarative form that can be exchanged between different software systems. SBML is oriented towards describing biological processes of the sort common in research on a number of topics, including metabolic pathways, cell signaling pathways, and many others. By supporting SBML as an input/output format, different tools can all operate on an identical representation of a model, removing opportunities for translation errors and assuring a common starting point for analyses and simulations. This document provides the specification for Version 5 of SBML Level 2. The specification defines the data structures prescribed by SBML as well as their encoding in XML, the eXtensible Markup Language. This specification also defines validation rules that determine the validity of an SBML document, and provides many examples of models in SBML form. Other materials and software are available from the SBML project web site, http://sbml.org/.
11

Santhosh Baboo, S., and Nikhil Lobo. "Thinking Outside Conventional Aerospace and Defense Technical Publications Using Standard Generalized Markup Language (SGML)." Open Aerospace Engineering Journal 2, no. 1 (October 16, 2009): 19–27. http://dx.doi.org/10.2174/1874146000902010019.

Abstract:
In aerospace and defense, documentation is very large, highly structured, and needs constant updating. Managing this documentation has been a constant challenge to the industry, and data accuracy is a critical, constant worry for publication managers. At present, documentation is created using traditional publishing software, wasting time and effort: time is spent formatting documents instead of creating content, formatting has to be applied manually each time a document is created or updated, preparing documents for print or web requires complete reformatting, and content is not structured across similar types of publications, resulting in inconsistency. The Standard Generalized Markup Language (SGML) allows a document to be broken up into reusable modules and enforces the development of content in a structured manner, maintaining consistency across publications. This structured approach is achieved using a Document Type Definition (DTD); separation of content from formatting is achieved using a Format Output Specification Instance (FOSI).
12

Hussein Toman, Sarah. "THE DESIGN OF A TEMPLATING LANGUAGE TO EMBED DATABASE QUERIES INTO DOCUMENTS." Journal of Education College Wasit University 1, no. 29 (January 16, 2018): 512–34. http://dx.doi.org/10.31185/eduj.vol1.iss29.168.

Abstract:
Presenting information from a database to a human readership is one of the usual tasks in software development. Commonly, an imperative language (such as PHP, C#, or Java) is used to query a database system and populate the application's GUI, a web page, or a printed report (referred to from now on as the Presentation Media) with the desired information. Virtually all database systems are now capable of formatting, sorting, and grouping the data stored in a database and, not least, of performing calculations against it. These capabilities are usually enough to prepare the information that is going to be shown on screen or paper, which leaves just one role for the imperative code: to glue the query results to the Presentation Media. This code tends to become repetitive and grows proportionally with the complexity of the Presentation Media. The need for software developers to write this imperative code can be eliminated, though. Instead, the markup code (HTML, LaTeX, etc.) can have the ability to bind its elements directly to the database system. To achieve this ability, I propose mixing the Presentation Media's markup code with a Templating Language. This paper elaborates the design of a Templating Language, a declarative language that adds annotations to any markup code specifying what data will be queried and how it should be integrated into the document. For this markup code to be consumed, it is not necessary to implement any database query abilities in the process that renders it. Instead, a preprocessor is invoked to interpret the Templating Language, connect to the database system, query the desired data, and generate the final markup code.
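The preprocessor design can be sketched in a few lines. In the toy below, the {{query:...}} placeholder syntax is invented for illustration (the paper defines its own annotation language); a preprocessor replaces each placeholder with query results before the markup ever reaches its renderer:

```python
import re
import sqlite3

# A small in-memory database standing in for the real database system.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE product (name TEXT, price REAL)")
conn.executemany("INSERT INTO product VALUES (?, ?)",
                 [("pen", 1.5), ("book", 12.0)])

# Markup annotated with a declarative query placeholder.
template = "<ul>{{query:SELECT name FROM product ORDER BY name}}</ul>"

def preprocess(text):
    """Resolve every {{query:...}} placeholder against the database."""
    def run(match):
        rows = conn.execute(match.group(1)).fetchall()
        return "".join("<li>{}</li>".format(r[0]) for r in rows)
    return re.sub(r"\{\{query:(.+?)\}\}", run, text)

print(preprocess(template))
```

The renderer that later consumes the output needs no database abilities at all, which is the division of labour the abstract argues for.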
13

Sun, Ying, Jing Chen, and Jian Song. "Research on Medical Information Cross-Regional Integration Scheme." Applied Mechanics and Materials 496-500 (January 2014): 2182–87. http://dx.doi.org/10.4028/www.scientific.net/amm.496-500.2182.

Abstract:
Cross-regional medical information sharing is a research hotspot in the field of regional health informatization. This paper puts forward a system architecture based on the IHE-XDS (Integrating the Healthcare Enterprise - Cross-Enterprise Document Sharing) technical framework that meets the requirements of ebXML (Electronic Business using eXtensible Markup Language) and of distributed access to patient medical documents. By studying the mapping relationships between the IHE-XDS and ebXML information models, it completes the document registration and query services and achieves document sharing.
14

Gu, Huan. "Research and Development for Markup Object Model Agent Layer about XML Standard of Profession Field." Advanced Materials Research 314-316 (August 2011): 2152–57. http://dx.doi.org/10.4028/www.scientific.net/amr.314-316.2152.

Abstract:
This document explains and demonstrates how to construct JDF data agent tools on the .NET LINQ platform. The agent can create a job, add nodes to an existing job, and modify existing nodes. Based on the structure of the JDF standard and its markup definitions, it packages each layer's nodes and their complicated parameters and data types into objects, forming a programming-language model based on JDF markup objects. This reduces the complexity of developing printing digital-process software based on the JDF XML standard and provides a reference for developing similar distributed digital systems driven by XML.
15

Haghish, E. F. "Markdoc: Literate Programming in Stata." Stata Journal: Promoting communications on statistics and Stata 16, no. 4 (December 2016): 964–88. http://dx.doi.org/10.1177/1536867x1601600409.

Abstract:
Rigorous documentation of the analysis plan, procedure, and computer code enhances the comprehensibility and transparency of data analysis. Documentation is particularly critical when the code and data are meant to be publicly shared and examined by the scientific community to evaluate the analysis or adapt the results. The popular approach for documenting computer code is known as literate programming, which requires preparing a trilingual script file that includes a programming language for running the data analysis, a human language for documentation, and a markup language for typesetting the document. In this article, I introduce markdoc, a software package for interactive literate programming and generating dynamic-analysis documents in Stata. markdoc recognizes Markdown, LaTeX, and HTML markup languages and can export documents in several formats, such as PDF, Microsoft Office .docx, OpenOffice and LibreOffice .odt, LaTeX, HTML, ePub, and Markdown.
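The trilingual idea can be sketched generically. The toy "tangle" step below separates executable code from its Markdown documentation (illustrative only, not markdoc's actual mechanism; the ~~~ fences and contents are invented):

```python
import re

# A literate script mixes a human language (Markdown) with a
# programming language; "tangling" extracts just the executable part.
literate = """# Analysis plan

We first load the data.

~~~
x = [1, 2, 3]
~~~

Then we sum it.

~~~
total = sum(x)
~~~
"""

# Collect the bodies of all ~~~ fenced blocks, in order.
code = "\n".join(re.findall(r"~~~\n(.*?)~~~", literate, flags=re.DOTALL))
ns = {}
exec(code, ns)        # run the tangled program
print(ns["total"])
```

The complementary "weave" step would typeset the whole script, prose and code together, which is what markdoc's export formats provide for Stata.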
16

Bergmann, Frank T., Jonathan Cooper, Nicolas Le Novère, David Nickerson, and Dagmar Waltemath. "Simulation Experiment Description Markup Language (SED-ML) Level 1 Version 2." Journal of Integrative Bioinformatics 12, no. 2 (June 1, 2015): 119–212. http://dx.doi.org/10.1515/jib-2015-262.

Abstract:
The number, size, and complexity of computational models of biological systems are growing at an ever-increasing pace. It is imperative to build on existing studies by reusing and adapting existing models and parts thereof. The description of the structure of models is not sufficient to enable the reproduction of simulation results; one also needs to describe the procedures the models are subjected to, as recommended by the Minimum Information About a Simulation Experiment (MIASE) guidelines. This document presents Level 1 Version 2 of the Simulation Experiment Description Markup Language (SED-ML), a computer-readable format for encoding simulation and analysis experiments to apply to computational models. SED-ML files are encoded in the Extensible Markup Language (XML) and can be used in conjunction with any XML-based model encoding format, such as CellML or SBML. A SED-ML file includes details of which models to use, how to modify them prior to executing a simulation, which simulation and analysis procedures to apply, which results to extract, and how to present them. Level 1 Version 2 extends the format by allowing the encoding of repeated and chained procedures.
17

Policarpio, Sean, and Yan Zhang. "A Formal Language for XML Authorisations Based on Answer Set Programming and Temporal Interval Logic Constraints." International Journal of Secure Software Engineering 2, no. 1 (January 2011): 22–39. http://dx.doi.org/10.4018/jsse.2011010102.

Abstract:
The Extensible Markup Language is susceptible to security breaches because it does not incorporate methods to protect the information it encodes. This work focuses on the development of a formal language that can provide role-based access control to information stored in XML-formatted documents. The language has the capacity to reason about whether access to an XML document should be allowed. The language, Axml(T), allows for the specification of authorisations on XML documents and distinguishes itself from other research through its inclusion of temporal interval reasoning and the XPath query language.
18

Cai, Li Min. "Application of XML in the Remote Temperature Monitoring System." Advanced Materials Research 433-440 (January 2012): 6509–13. http://dx.doi.org/10.4028/www.scientific.net/amr.433-440.6509.

Abstract:
This paper introduces XML (Extensible Markup Language) and describes a remote temperature monitoring system built around it. The system uses the Samsung S3C2440 microprocessor as its core, onto which an embedded Linux system and a web server are ported. On-site temperature is collected by the DS18B20 digital temperature sensor, the acquired data are saved in an XML document, and the on-site real-time temperature can be displayed in a browser at a remote end. Results of actual runs show the system's effectiveness.
19

Peis, Eduardo, Félix de Moya, and J. Carlos Fernández‐Molina. "Encoded archival description (EAD) conversion: a methodological proposal." Library Hi Tech 18, no. 4 (December 1, 2000): 360–68. http://dx.doi.org/10.1108/07378830010360455.

Abstract:
The eventual adaptation of archives to new technological possibilities could begin with the creation of digital versions of archival finding aids, which would allow the international diffusion of descriptive information. The Standard Generalized Markup Language (SGML) document type definition (DTD) for archival description, known as Encoded Archival Description (EAD), is an appropriate tool for this purpose. Presents a methodological strategy that begins with an analysis of EAD and the informational object to be marked up, allowing the semiautomatic creation of a digital version.
20

Davis, G. L., Edward F. Gilman, and Howard W. Beck. "An Electronically Based Horticultural Information Retrieval System." HortTechnology 6, no. 4 (October 1996): 332–36. http://dx.doi.org/10.21273/horttech.6.4.322.

Abstract:
A large horticultural database and an electronic retrieval system for extension education programs were developed using compact disk-read only memory (CD-ROM) and World Wide Web (WWW) as the medium for information delivery. Object-oriented database techniques were used to organize the information. Conventional retrieval techniques including hypertext, full text searching, and expert systems were integrated into a complete package for accessing information stored in the database. A multimedia user interface was developed to provide a variety of capabilities including computer graphics and high resolution digitized images. Information for the CD-ROM was gathered from extension publications that were tagged using the standard generalized markup language (SGML)-based document markup language (International Standards Organization, 1986). Combining funds from the state legislature with grants from the USDA and other institutions, the CD-ROM system has been implemented in all 67 county extension offices in Florida and is available to the public as a for-sale CD-ROM. Public access is also available to most of the database through the WWW.
21

Gilman, E. F., and H. Beck. "The CD-ROM–World Wide Web Hybrid." HortScience 32, no. 3 (June 1997): 553D—553. http://dx.doi.org/10.21273/hortsci.32.3.553d.

Abstract:
A large horticultural database and an electronic retrieval system for extension education programs were developed using compact disk-read only memory (CD-ROM) and World Wide Web (WWW) as the medium for information delivery. Object-oriented database techniques were used to organize the information. Conventional retrieval techniques including hypertext, full text searching, and expert systems were integrated into a complete package for accessing information stored in the database. A multimedia user interface was developed to provide a variety of capabilities, including computer graphics and high-resolution digitized images. Information for the CD-ROM was gathered from extension publications that were tagged using the Standard Generalized Markup Language (SGML)-based document markup language (International Standards Organization, 1986). Combining funds from the state legislature with grants from the USDA and other institutions, the CD-ROM system has been implemented in all 67 county extension offices in Florida and is available to the public as a for-sale CD-ROM. Public access is also available to most of the database through the WWW.
22

Ran, Peipei, Wenjie Yang, Zhongyue Da, and Yuke Huo. "Work orders management based on XML file in printing." ITM Web of Conferences 17 (2018): 03009. http://dx.doi.org/10.1051/itmconf/20181703009.

Abstract:
Extensible Markup Language (XML) technology is increasingly used in various fields; using it to express work-order information improves efficiency in management and production. In this paper, we introduce a technique for managing work orders and generate an XML file through the Document Object Model (DOM). When the information is needed for production, the XML file is parsed and the information saved in a database, which benefits the preservation and modification of the information.
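As an illustration of the DOM-based parsing the abstract describes (the element names are invented; this is not the authors' schema), Python's xml.dom.minidom can pull a work order's fields out of XML for storage in a database:

```python
from xml.dom.minidom import parseString

# A work order expressed in XML (hypothetical element names).
xml_text = """<workOrder id="W-17">
  <product>poster</product>
  <quantity>500</quantity>
</workOrder>"""

# Parse via the Document Object Model and extract the fields that
# would be inserted into a database table.
dom = parseString(xml_text)
order = dom.documentElement
fields = {
    "id": order.getAttribute("id"),
    "product": order.getElementsByTagName("product")[0].firstChild.data,
    "quantity": int(order.getElementsByTagName("quantity")[0].firstChild.data),
}
print(fields)
```

The same DOM API works in reverse for generating the file, so one interface covers both the creation and the parsing steps the paper outlines.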
23

Hsu, Wen-Chiao, and I.-En Liao. "UCIS-X: An Updatable Compact Indexing Scheme for Efficient Extensible Markup Language Document Updating and Query Evaluation." IEEE Access 8 (2020): 176375–92. http://dx.doi.org/10.1109/access.2020.3025566.

24

ALMENDROS-JIMÉNEZ, J. M., A. BECERRA-TERÓN, and F. J. ENCISO-BAÑOS. "Querying XML documents in logic programming." Theory and Practice of Logic Programming 8, no. 3 (May 2008): 323–61. http://dx.doi.org/10.1017/s1471068407003183.

Abstract:
Extensible Markup Language (XML) is a simple, very flexible text format derived from SGML. Originally designed to meet the challenges of large-scale electronic publishing, XML is also playing an increasingly important role in the exchange of a wide variety of data on the Web and elsewhere. The XPath language is the result of an effort to provide a way to address parts of an XML document; in support of this primary purpose, it also serves as a query language against an XML document. In this paper we present a proposal for the implementation of the XPath language in logic programming. With this aim we describe the representation of XML documents by means of a logic program: rules and facts can be used for representing the document schema and the XML document itself. In particular, we present how to index XML documents in logic programs: rules are stored in main memory, while facts are stored in secondary memory using two kinds of indexes, one for each XML tag and another for each group of terminal items. In addition, we study how to query a logic program representing an XML document by means of the XPath language; this involves the specialization of the logic program with regard to the XPath expression. Finally, we also explain how to combine the indexing and the top-down evaluation of the logic program.
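The paper's encoding idea, XML nodes as facts queried by XPath steps, can be sketched outside logic programming. The Python fragment below (an illustrative analogy, not the authors' actual representation) flattens a document into (node, tag, parent, text) facts and evaluates the path /books/book/title as joins over the parent relation:

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<books><book><title>SGML Handbook</title></book>"
    "<book><title>XML in a Nutshell</title></book></books>"
)

# Flatten the tree into facts: (node_id, tag, parent_id, text).
facts = []
def load(el, parent=None):
    nid = len(facts)
    facts.append((nid, el.tag, parent, (el.text or "").strip()))
    for child in el:
        load(child, nid)
load(doc)

# One XPath child step == one join against the parent relation.
def children(parent_ids, tag):
    return [n for (n, t, p, _) in facts if p in parent_ids and t == tag]

roots = [n for (n, t, p, _) in facts if p is None and t == "books"]
titles = children(children(roots, "book"), "title")
result = [facts[n][3] for n in titles]
print(result)
```

In the paper these facts become Prolog-style clauses indexed by tag, and evaluation specializes the program to the XPath expression; the join structure, however, is the same.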
25

Ma, Zongmin, Chengwei Li, and Li Yan. "Reengineering Probabilistic Relational Databases with Fuzzy Probability Measures into XML Model." Journal of Database Management 28, no. 3 (July 2017): 26–47. http://dx.doi.org/10.4018/jdm.2017070102.

Abstract:
This paper concentrates on modeling probabilistic events with fuzzy probability measures in relational databases and XML (Extensible Markup Language). Instead of crisp probability degrees or interval probability degrees, fuzzy sets are applied to represent imprecise probability degrees in relational databases and XML. A probabilistic XML model with fuzzy probability measures is introduced, which incorporates fuzzy probability measures to handle imprecision and uncertainty. In particular, the formal approach to reengineering the relational database model with fuzzy probability measures into the DTD (document type definition) model with fuzzy probability measures is developed in the paper.
APA, Harvard, Vancouver, ISO, and other styles
26

Gan, Yi, Ming Zhao, and Zhi Wei Zhang. "Research on the Describing for Products Information Evolution Based on TBS." Advanced Materials Research 279 (July 2011): 388–93. http://dx.doi.org/10.4028/www.scientific.net/amr.279.388.

Full text
Abstract:
Based on an analysis of product design and its process, using the Top Basic Skeleton (TBS) model as a link and adopting eXtensible Markup Language (XML) for the product's top-down design and the description of information evolution, this paper defines the transfer process and the structure of the document for information transfer in the TBS modeling process, and describes the transmission of design information layer by layer. In conclusion, the combination of TBS and XML is beneficial for controlling the product design process and for variant design.
APA, Harvard, Vancouver, ISO, and other styles
27

Fong, Joseph, and Herbert Shiu. "An Interpreter Approach for Exporting Relational Data into XML Documents with Structured Export Markup Language." Journal of Database Management 23, no. 1 (January 2012): 49–77. http://dx.doi.org/10.4018/jdm.2012010103.

Full text
Abstract:
Almost all enterprises use relational databases to handle real-time business operations, and most need to generate various XML documents for data exchange internally among various departments and externally with business partners. Exporting data from a relational database to an XML document can be considered a data conversion process. Based on the four approaches to data conversion (customized program, interpretive transformer, translator generator, and logical level translation), this paper proposes a new interpretive approach using a Structured Export Markup Language (SEML) interpreter for converting relational data into XML documents. The frameworks and languages proposed by other researchers are neither generic nor able to generate arbitrary XML documents. The SEML interpreter is therefore a simple, user-friendly, and complete solution, with a new markup language, SEML, for data conversion. The solution can be used as a generic tool for extracting, transforming, and loading (ETL) purposes. In other words, the SEML interpreter is a solution for relational databases similar to what XQuery is for XML databases.
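SEML itself is not reproduced in the abstract, but the underlying conversion the interpreter performs (turning relational rows into an XML document) can be sketched with Python's standard library. This is a generic illustration; the table, column, and tag names are invented, and real SEML output may differ:

```python
import sqlite3
import xml.etree.ElementTree as ET

def rows_to_xml(conn, table, root_tag="export"):
    """Serialise every row of a table as one XML element per row,
    with one child element per column."""
    root = ET.Element(root_tag)
    cur = conn.execute(f"SELECT * FROM {table}")  # table name is trusted here
    cols = [d[0] for d in cur.description]
    for row in cur:
        rec = ET.SubElement(root, table)
        for col, val in zip(cols, row):
            ET.SubElement(rec, col).text = str(val)
    return ET.tostring(root, encoding="unicode")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER, name TEXT)")
conn.execute("INSERT INTO customer VALUES (1, 'Ann'), (2, 'Bob')")
print(rows_to_xml(conn, "customer"))
```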
APA, Harvard, Vancouver, ISO, and other styles
28

Guo, Jinqiu, Akira Takada, Koji Tanaka, Junzo Sato, Muneou Suzuki, Toshiaki Suzuki, Yusei Nakashima, Kenji Araki, and Hiroyuki Yoshihara. "The Development of MML (Medical Markup Language) Version 3.0 as a Medical Document Exchange Format for HL7 Messages." Journal of Medical Systems 28, no. 6 (December 2004): 523–33. http://dx.doi.org/10.1023/b:joms.0000044955.51844.c3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Lister, Allyson L., Matthew Pocock, and Anil Wipat. "Integration of constraints documented in SBML, SBO, and the SBML Manual facilitates validation of biological models." Journal of Integrative Bioinformatics 4, no. 3 (December 1, 2007): 252–63. http://dx.doi.org/10.1515/jib-2007-80.

Full text
Abstract:
The creation of quantitative, simulatable, Systems Biology Markup Language (SBML) models that accurately simulate the system under study is a time-intensive manual process that requires careful checking. Currently, the rules and constraints of model creation, curation, and annotation are distributed over at least three separate documents: the SBML schema document (XSD), the Systems Biology Ontology (SBO), and the “Structures and Facilities for Model Definition” document. The latter document contains the richest set of constraints on models, and yet it is not amenable to computational processing. We have developed a Web Ontology Language (OWL) knowledge base that integrates these three structure documents, and that contains a representative sample of the information contained within them. This Model Format OWL (MFO) performs both structural and constraint integration and can be reasoned over and validated. SBML models are represented as individuals of OWL classes, resulting in a single computationally amenable resource for model checking. Knowledge that was only accessible to humans is now explicitly and directly available for computational approaches. The integration of all structural knowledge for SBML models into a single resource creates a new style of model development and checking.
APA, Harvard, Vancouver, ISO, and other styles
30

Farewell, Stephanie M. "An Introduction to XBRL through the Use of Research and Technical Assignments." Journal of Information Systems 20, no. 1 (March 1, 2006): 161–85. http://dx.doi.org/10.2308/jis.2006.20.1.161.

Full text
Abstract:
This project is designed to facilitate an understanding of eXtensible Business Reporting Language (XBRL). The materials are structured so that each can be used independently of the other components. The materials consist of a reading, research assignments, and two technical assignments. The reading is written to provide a background on XBRL. After obtaining a basic understanding of XBRL, research and technical assignments are used to increase the student's skill-set. The research assignments look at the evolution of XBRL. The first technical assignment modifies and styles eXtensible Markup Language (XML) tagged data. In the second technical assignment an industry extension is developed to the promulgated Commercial and Industrial (C-I) taxonomy. The second technical assignment concludes with the creation of an instance document and viewing of the instance document with a style sheet. Through an understanding of XBRL, students will possess an important basic skill-set for a technology that will likely play a significant role in the future of accounting. In addition, they should have an appreciation for the purpose of XBRL, including the nature of the technology and the inherent challenges.
APA, Harvard, Vancouver, ISO, and other styles
31

BRÜGGEMANN-KLEIN, ANNE, and DERICK WOOD. "THE REGULARITY OF TWO-WAY NONDETERMINISTIC TREE AUTOMATA LANGUAGES." International Journal of Foundations of Computer Science 13, no. 01 (February 2002): 67–81. http://dx.doi.org/10.1142/s0129054102000959.

Full text
Abstract:
We establish that regularly extended two-way nondeterministic tree automata with unranked alphabets have the same expressive power as regularly extended nondeterministic tree automata with unranked alphabets. We obtain this result by establishing regularly extended versions of a congruence on trees and of a congruence on so-called views. Our motivation for the study of these tree models is the Extensible Markup Language (XML), a metalanguage for defining document grammars. Such grammars have regular sets of right-hand sides for their productions, and tree automata provide an alternative and useful modeling tool for them. In particular, we believe that they provide a useful computational model for what we call caterpillar expressions.
APA, Harvard, Vancouver, ISO, and other styles
32

SIERRA, JOSÉ LUIS, BALTASAR FERNÁNDEZ-MANJÓN, ALFREDO FERNÁNDEZ-VALMAYOR, and ANTONIO NAVARRO. "DOCUMENT-ORIENTED DEVELOPMENT OF CONTENT-INTENSIVE APPLICATIONS." International Journal of Software Engineering and Knowledge Engineering 15, no. 06 (December 2005): 975–93. http://dx.doi.org/10.1142/s0218194005002634.

Full text
Abstract:
In this paper we promote a document-oriented approach to the development of content-intensive applications (i.e., applications that critically depend on the informational contents and on the characterization of the contents' structure). This approach is the result of our experience as developers in the educational and in the hypermedia domains, as well as in the domain of knowledge-based systems. The main reason for choosing the document-oriented approach is to make it easier for domain experts to comprehend the elements that represent the main application's features. Among these elements are: the application's contents, the application's customizable properties including those of its interface, and the structure of all this information. Therefore, in our approach, these features are represented by means of a set of application documents, which are marked up using a suitable descriptive Domain-Specific Markup Language (DSML). If this goal is fully accomplished, the application itself can be automatically produced by processing those documents with a suitable processor for the DSML defined. The document-oriented development enhances the production and maintenance of content-intensive applications, because the applications' features are described in the form of human-readable and editable documents, understandable by domain experts and suitable for automatic processing. Nevertheless, the main drawbacks of the approach are the planning overload of the whole production process and the costs of the provision and maintenance of the DSMLs and their processors. These drawbacks can be palliated by adopting an incremental strategy for the production and maintenance of the applications and also for the definition and the operationalization of the DSMLs.
APA, Harvard, Vancouver, ISO, and other styles
33

Brown, Jeff, Rebecca Brown, Chris Velado, and Ron Vetter. "On the Design and Implementation of Interactive XML Applications." International Journal of Information Retrieval Research 1, no. 1 (January 2011): 19–30. http://dx.doi.org/10.4018/ijirr.2011010102.

Full text
Abstract:
This paper describes issues and challenges in the design and implementation of interactive client-server applications where program logic is expressed in terms of an extensible markup language (XML) document. Although the technique was originally developed for creating interactive short message service (SMS) applications, it has expanded and is used for developing interactive web applications. XML-Interactive (or XML-I) defines the program states and corresponding actions. Because many interactive applications require sustained communication between the client and the underlying information service, XML-I has support for session management. This allows state information to be managed in a dynamic way. The paper describes several applications that are implemented using XML-I and discusses design issues. The software framework has been implemented in a Java environment.
APA, Harvard, Vancouver, ISO, and other styles
34

HACHEY, B., C. GROVER, and R. TOBIN. "Datasets for generic relation extraction." Natural Language Engineering 18, no. 1 (March 9, 2011): 21–59. http://dx.doi.org/10.1017/s1351324911000106.

Full text
Abstract:
A vast amount of usable electronic data is in the form of unstructured text. The relation extraction task aims to identify useful information in text (e.g. PersonW works for OrganisationX, GeneY encodes ProteinZ) and recode it in a format such as a relational database or RDF triplestore that can be more effectively used for querying and automated reasoning. A number of resources have been developed for training and evaluating automatic systems for relation extraction in different domains. However, comparative evaluation is impeded by the fact that these corpora use different markup formats and notions of what constitutes a relation. We describe the preparation of corpora for comparative evaluation of relation extraction across domains based on the publicly available ACE 2004, ACE 2005 and BioInfer data sets. We present a common document type using token standoff and including detailed linguistic markup, while maintaining all information in the original annotation. The subsequent reannotation process normalises the two data sets so that they comply with a notion of relation that is intuitive, simple and informed by the semantic web. For the ACE data, we describe an automatic process that converts many relations involving nested, nominal entity mentions to relations involving non-nested, named or pronominal entity mentions. For example, the first entity is mapped from ‘one’ to ‘Amidu Berry’ in the membership relation described in ‘Amidu Berry, one half of PBS’. Moreover, we describe a comparably reannotated version of the BioInfer corpus that flattens nested relations, maps part-whole to part-part relations and maps n-ary to binary relations. Finally, we summarise experiments that compare approaches to generic relation extraction, a knowledge discovery task that uses minimally supervised techniques to achieve maximally portable extractors. These experiments illustrate the utility of the corpora.
APA, Harvard, Vancouver, ISO, and other styles
35

Huang, Chun‐Che, and Chia‐Ming Kuo. "The transformation and search of semi‐structured knowledge in organizations." Journal of Knowledge Management 7, no. 4 (October 1, 2003): 106–23. http://dx.doi.org/10.1108/13673270310492985.

Full text
Abstract:
Knowledge is perceived as a very important asset for organizations, and knowledge management is critical for organizational competitiveness. Because knowledge is by nature complex and varied, it is difficult to extend the effectiveness of knowledge re-use in organizations. In this article, an approach based on Zachman's Framework is developed to externalize organizational knowledge into semi-structured knowledge, and eXtensible Markup Language (XML) is applied to transform the knowledge into documents. In addition, latent semantic indexing (LSI), which is capable of solving problems of synonyms and antonyms as well as improving the accuracy of document searches, is incorporated to facilitate the search of semi-structured knowledge (SSK) documents based on user demands. The SSK approach shows great promise for organizations to acquire, store, disseminate, and reuse knowledge.
APA, Harvard, Vancouver, ISO, and other styles
36

Ahmadpour, Ahmad. "The Improvement of Governance Decision Making Using XBRL." International Journal of E-Business Research 7, no. 2 (April 2011): 11–18. http://dx.doi.org/10.4018/jebr.2011040102.

Full text
Abstract:
eXtensible Business Reporting Language (XBRL) has the potential to influence users' processing of financial information and their judgments and decisions. XBRL is an eXtensible Markup Language (XML)-based language developed specifically for financial reporting. As a search-facilitating technology, XBRL contributes to direct searches and the simultaneous presentation of related financial statements, and facilitates the processing of footnote information, which can help users of financial statements. XBRL is more than a distribution mechanism for data or a facilitating technology: it has the potential to significantly improve corporate governance. Putting that potential into practice requires an XBRL taxonomy model that is data-based instead of document-based. This paper hypothesizes that in the presence of search-facilitating technology, users' judgments of financial statement reliability will be more influenced by the choice of recognition versus disclosure of stock option compensation than in the absence of such technology. When stock option accounting varies between two firms, the search technology helps in both acquiring and integrating relevant information. The paper suggests that the implementation of XBRL improves the transparency of financial information and of managers' choices for reporting that information.
APA, Harvard, Vancouver, ISO, and other styles
37

FONG, JOSEPH, HERBERT SHIU, and JENNY WONG. "METHODOLOGY FOR DATA CONVERSION FROM XML DOCUMENTS TO RELATIONS USING EXTENSIBLE STYLESHEET LANGUAGE TRANSFORMATION." International Journal of Software Engineering and Knowledge Engineering 19, no. 02 (March 2009): 249–81. http://dx.doi.org/10.1142/s0218194009004131.

Full text
Abstract:
Extensible Markup Language (XML) has been used for data transport and data transformation, while the business sector continues to store critical business data in relational databases. Extracting relational data and formatting it into XML documents, and then converting XML documents back to relational structures, has become a major daily activity. It is important to have an efficient methodology to handle this conversion between XML documents and relational data. This paper aims to perform data conversion from XML documents into relational databases. It proposes a prototype and algorithms for this conversion process. The pre-process is schema translation using an XML schema definition. The proposed approach is based on the needs of an Order Information System and suggests a methodology to gain the benefits provided by XML technology and relational database management systems. The methodology is a stepwise procedure using an XML schema definition and Extensible Stylesheet Language Transformations (XSLT) to ensure that the data constraints are not sacrificed after data conversion. The data conversion is implemented by decomposing the XML document of a hierarchical tree model into normalized relations interrelated through their artificial primary keys and foreign keys. The transformation process is performed by XSLT. The paper also demonstrates the entire conversion process through a detailed case study.
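The shredding step described above (decomposing a hierarchical XML tree into normalized relations linked by artificial keys) can be sketched without XSLT, using Python's standard library. This is a simplified sketch of the general technique, not the paper's methodology; the `node` table and its columns are invented:

```python
import itertools
import sqlite3
import xml.etree.ElementTree as ET

def shred(elem, conn, ids, parent_id=None):
    """Decompose an XML tree into a node relation (id, parent_id, tag, text);
    the generated ids act as artificial primary and foreign keys."""
    node_id = next(ids)
    conn.execute("INSERT INTO node VALUES (?, ?, ?, ?)",
                 (node_id, parent_id, elem.tag, (elem.text or "").strip()))
    for child in elem:
        shred(child, conn, ids, node_id)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE node (id INTEGER PRIMARY KEY, "
             "parent_id INTEGER, tag TEXT, text TEXT)")
doc = ET.fromstring("<order><item>pen</item><item>ink</item></order>")
shred(doc, conn, itertools.count(1))
print(conn.execute("SELECT tag, text FROM node WHERE parent_id = 1").fetchall())
```

A fuller implementation would derive one relation per repeating element from the XML schema rather than a single generic edge table.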
APA, Harvard, Vancouver, ISO, and other styles
38

Lu, Quan, Gao Liu, and Jing Chen. "Integrating PDF interface into Java application." Library Hi Tech 32, no. 3 (September 9, 2014): 495–508. http://dx.doi.org/10.1108/lht-01-2014-0009.

Full text
Abstract:
Purpose – The purpose of this paper is to propose a novel approach to integrating a portable document format (PDF) interface into a Java-based digital library application. It bridges the gap between conducting content operations and viewing a PDF document asynchronously.
Design/methodology/approach – The authors first review related research and discuss PDF and its drawbacks. Next, they propose the design steps and implementation of three modes of displaying a PDF document: PDF display, image display and extensible markup language (XML) display. A comparison of these three modes has been carried out.
Findings – The authors find that the PDF display is able to completely present the original PDF document contents and is thus clearly superior to the other two displays. In addition, the format specification of PDF-based e-books does not perform well; lack of standardization and complex structure are exposed in publication.
Practical implications – The proposed approach makes viewing PDF documents more convenient and effective, and can be used to retrieve and visualize PDF documents and to support the personalized function customization of PDF in digital library applications.
Originality/value – This paper proposes a novel approach to synchronizing content operations with the viewing of PDF, providing users a new tool to retrieve and reuse PDF documents. It contributes to improving the service specification and policy of viewing PDF for digital libraries. Besides, the personalized interface and public index make further development and application more feasible.
APA, Harvard, Vancouver, ISO, and other styles
39

Ammari, Faisal T., and Joan Lu. "Enhanced XML Encryption Using Classification Mining Technique for e-Banking Transactions." International Journal of Information Retrieval Research 3, no. 4 (October 2013): 81–103. http://dx.doi.org/10.4018/ijirr.2013100105.

Full text
Abstract:
In this paper a novel approach is presented for securing financial Extensible Markup Language (XML) transactions using classification data mining (DM) algorithms. The authors' strategy defines the complete process of classifying XML transactions using a set of classification algorithms; the classified XML documents are then processed using element-wise encryption. Classification algorithms are used to identify the XML transaction rules and factors in order to classify the message content, fetching the important elements within. The authors implemented two classification algorithms to determine the importance-level value within each XML document. Classified content is processed using element-wise encryption for selected parts with “High” or “Medium” importance-level values. Element-wise encryption is performed using the AES symmetric encryption algorithm with different key sizes. An implementation has been conducted using a data set fetched from the e-banking service of one of the leading banks in Jordan to demonstrate system functionality and efficiency. Results from the authors' implementation showed an improvement in processing time when encrypting XML documents.
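The element-wise selection step can be sketched as follows. Note the hedges: Python's standard library has no AES, so a toy XOR keystream cipher stands in for it here, and the `importance` attribute is an invented stand-in for the classifier's output; a real system would use AES from a cryptography library, as the paper does:

```python
import hashlib
import xml.etree.ElementTree as ET

def keystream_xor(data: bytes, key: bytes) -> bytes:
    """Toy XOR keystream cipher (stdlib only) standing in for AES;
    illustrative only, not for real security."""
    out, counter = b"", 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

def encrypt_sensitive(doc, key):
    """Element-wise encryption: only elements classified High or Medium
    importance have their text enciphered; the rest stay in the clear."""
    for elem in doc.iter():
        if elem.get("importance") in ("High", "Medium") and elem.text:
            elem.text = keystream_xor(elem.text.encode(), key).hex()
    return doc

doc = ET.fromstring('<tx><amount importance="High">950</amount>'
                    '<note importance="Low">gift</note></tx>')
encrypt_sensitive(doc, b"secret")
print(ET.tostring(doc, encoding="unicode"))
```

Because only the sensitive elements are enciphered, the document keeps its structure and the low-importance parts remain searchable, which is the point of element-wise over whole-document encryption.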
APA, Harvard, Vancouver, ISO, and other styles
40

Klaib, Alhadi A. "XML Dataset and Benchmarks for Performance Testing of the CLS Labelling Scheme." Journal of Pure & Applied Sciences 20, no. 2 (July 15, 2021): 12–15. http://dx.doi.org/10.51984/jopas.v20i2.1243.

Full text
Abstract:
Extensible Markup Language (XML) has become a significant technology for transferring data across the world of the Internet. XML labelling schemes are an essential technique for handling XML data effectively. Labelling XML data is performed by assigning labels to all nodes in the XML document. The CLS labelling scheme is a hybrid labelling scheme that was developed to address some limitations of indexing XML data. Datasets and benchmarks are used to test XML labelling schemes, and many XML datasets are available nowadays, some drawn from real life and others artificial. This paper discusses these datasets and benchmarks and their specifications in order to determine the most appropriate one for testing the CLS labelling scheme. The research found that the XMark benchmark is the most appropriate choice for testing the performance of the CLS labelling scheme.
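The abstract does not spell out the internals of CLS, but prefix-based (Dewey-style) labelling, a common building block of hybrid schemes, can be sketched in Python; treating CLS as prefix-based is our assumption, made only to illustrate what a labelling scheme does:

```python
import xml.etree.ElementTree as ET

def label(elem, prefix="1"):
    """Assign Dewey-style prefix labels: a child's label extends its
    parent's, so ancestor tests need only the labels themselves."""
    labels = {prefix: elem.tag}
    for i, child in enumerate(elem, start=1):
        labels.update(label(child, f"{prefix}.{i}"))
    return labels

def is_ancestor(a, b):
    """True if the node labelled a is a proper ancestor of the node labelled b."""
    return b.startswith(a + ".")

doc = ET.fromstring("<a><b><c/></b><b/></a>")
labels = label(doc)
print(labels)                        # {'1': 'a', '1.1': 'b', '1.1.1': 'c', '1.2': 'b'}
print(is_ancestor("1.1", "1.1.1"))   # True
```

Benchmarks such as XMark stress exactly these operations (label assignment, ancestor/descendant tests) over large, deeply nested documents.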
APA, Harvard, Vancouver, ISO, and other styles
41

Ma, Zongmin, Haitao Cheng, and Li Yan. "Automatic Construction of OWL Ontologies From Petri Nets." International Journal on Semantic Web and Information Systems 15, no. 1 (January 2019): 21–51. http://dx.doi.org/10.4018/ijswis.2019010102.

Full text
Abstract:
Ontology, as a formal representation method for domain knowledge, plays a particularly important role in the semantic web. How to construct ontologies has become a key problem in the semantic web, especially constructing ontologies from existing domain knowledge. Petri nets are a mathematical modeling tool that has been widely studied and successfully applied in modeling in software engineering, databases, and artificial intelligence. In particular, the PNML (Petri Net Markup Language) is part of the ISO/IEC Petri net standard for representing and exchanging data on Petri nets. Therefore, how to construct ontologies from the PNML model of Petri nets needs to be investigated. In this article, the authors investigate a method for the automatic construction of Web Ontology Language (OWL) ontologies from the PNML of Petri nets. Firstly, the paper gives a formal definition and the semantics of PNML models of Petri nets. On this basis, a formal approach for constructing OWL ontologies from the PNML model of Petri nets is proposed; that is, the paper transforms Petri nets (including the PNML model and the PNML document of the Petri nets) into OWL ontologies at both the structure and instance levels. Furthermore, the correctness of the transformation is proved. Finally, a prototype construction tool called PN2OWL is developed to transform Petri net models into OWL ontologies automatically.
APA, Harvard, Vancouver, ISO, and other styles
42

Fakharaldien, Mohammed Adam Ibrahim, Jasni Mohamed Zain, Norrozila Sulaiman, and Tutut Herawan. "XRecursive." International Journal of Information Retrieval Research 1, no. 4 (October 2011): 53–65. http://dx.doi.org/10.4018/ijirr.2011100104.

Full text
Abstract:
Storing XML documents in a relational database is a promising solution because relational databases are mature and scale very well. They have the advantage that XML data and structured data can coexist in a relational database, making it possible to build applications that involve both kinds of data with little extra effort. This paper proposes an alternative method named XRecursive for mapping XML (eXtensible Markup Language) documents to RDB (relational databases). The XRecursive method does not need a DTD (Document Type Definition) or XML schema, and it can be applied as a general solution for any XML data. The steps and algorithm of XRecursive are given in detail to describe how to use the storage structure to store and query XML documents in a relational database. The authors report their experimental results on a real database, showing that their XRecursive algorithm achieves better results in terms of storage size, insertion time, mapping time, and reconstruction time as compared with the SUCXENT and XParent methods. Overall, XRecursive also performs better in terms of query performance compared to both methods.
APA, Harvard, Vancouver, ISO, and other styles
43

Piernik, Maciej, Dariusz Brzezinski, Tadeusz Morzy, and Anna Lesniewska. "XML clustering: a review of structural approaches." Knowledge Engineering Review 30, no. 3 (October 29, 2014): 297–323. http://dx.doi.org/10.1017/s0269888914000216.

Full text
Abstract:
With its presence in data integration, chemistry, biological, and geographic systems, eXtensible Markup Language (XML) has become an important standard not only in computer science. A common problem among the mentioned applications involves structural clustering of XML documents, an issue that has been thoroughly studied and has led to the creation of a myriad of approaches. In this paper, we present a comprehensive review of structural XML clustering. First, we provide a basic introduction to the problem and highlight the main challenges in this research area. Subsequently, we divide the problem into three subtasks and discuss the most common document representations, structural similarity measures, and clustering algorithms. In addition, we present the most popular evaluation measures, which can be used to estimate clustering quality. Finally, we analyze and compare 23 state-of-the-art approaches and arrange them in an original taxonomy. By providing an up-to-date analysis of existing structural XML clustering algorithms, we hope to showcase methods suitable for current applications and draw lines of future research.
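One common pipeline of the kind the survey above covers (represent each document by its set of root-to-node tag paths, then compare the sets with a similarity measure such as Jaccard) can be sketched in Python. The documents and the choice of Jaccard are illustrative only, not drawn from the paper's taxonomy:

```python
import xml.etree.ElementTree as ET
from itertools import combinations

def tag_paths(elem, prefix=""):
    """Represent a document's structure by its set of root-to-node tag paths."""
    path = f"{prefix}/{elem.tag}"
    paths = {path}
    for child in elem:
        paths |= tag_paths(child, path)
    return paths

def jaccard(a, b):
    """Structural similarity as the overlap of the two path sets."""
    return len(a & b) / len(a | b)

docs = {
    "d1": "<bib><book><title/></book></bib>",
    "d2": "<bib><book><title/><year/></book></bib>",
    "d3": "<recipe><step/></recipe>",
}
paths = {k: tag_paths(ET.fromstring(v)) for k, v in docs.items()}
for x, y in combinations(docs, 2):
    print(x, y, round(jaccard(paths[x], paths[y]), 2))
```

Any standard clustering algorithm can then run on the resulting pairwise similarity matrix; tree-edit distance or tag-sequence measures are common alternatives to path sets.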
APA, Harvard, Vancouver, ISO, and other styles
44

Malo, Roman. "Principles of reusability of XML-based enterprise documents." Acta Universitatis Agriculturae et Silviculturae Mendelianae Brunensis 58, no. 6 (2010): 295–302. http://dx.doi.org/10.11118/actaun201058060295.

Full text
Abstract:
XML (Extensible Markup Language) represents one of the flexible platforms for processing enterprise documents. Its simple syntax and the powerful software infrastructure for processing this type of document guarantee high interoperability of individual documents. XML is today one of the technologies influencing all aspects of the ICT area. In the paper, questions and basic principles of reusing XML-based documents are described in the field of enterprise documents. If we use XML databases or XML data types for storing these types of documents, then partial redundancy can be expected due to possible similarity among documents. This similarity can be found especially in documents' structure and also in their content, and its elimination is a necessary part of data optimization. The main idea of the paper is focused on possibilities of dividing complex XML documents into independent fragments that can be used as standalone documents, and on how to process them. The conclusions could be applied within software tools working with XML-based structured data and documents, such as document management systems or content management systems.
APA, Harvard, Vancouver, ISO, and other styles
45

Gorban, Oksana. "The Donosheniya and Reports of Don Cossacks in the Mid 18th c.: Source Analysis." Vestnik Volgogradskogo gosudarstvennogo universiteta. Serija 4. Istorija. Regionovedenie. Mezhdunarodnye otnoshenija, no. 4 (September 2019): 45–59. http://dx.doi.org/10.15688/jvolsu4.2019.4.4.

Full text
Abstract:
Introduction. The study is connected with the issues of electronic corpora of historical sources and diachronic linguistic corpora and is based on office documents of the 18th c. preserved in the Mikhailovsky Stanitsa Ataman archive fund (State Archive of Volgograd region). Methods and materials. The article considers donosheniya and reports as the main documents that were submitted from lower to higher authorities and bore the designations “donoshenie” (message, report) and “report” (report). Solving the problems of source meta-markup, the author examines the origin and meaning of the words “donoshenie” and “report”, analyzes the content and functions of the documents, their text format, and the verbal formulas representing the components of the form. Analysis. The paper shows that the words have different origins but common semantics reflecting the documents' function. The word “report” entered the Russian language in the early 18th century as a synonym for the original “donoshenie”. Initially, the relevant documents were not distinguished, but gradually they came to be differentiated, and not only by name. Donosheniya could come from military men and civilians and often contain a message and a request. Reports were used mainly between host and stanitsa atamans and other military officials and are mostly informative documents; reports are also often used as accompanying documents for other documents. The text format of donoshenie and report has similarities but can be represented by different verbal formulas; as an accompanying document, a report has a more concise and simple structure. Results. The author concludes that including donosheniya and reports in the created source corpus as independent documents is necessary for facilitating their search and a more exact representation of the language units used. Standard speech markers can also be applied for automatic recognition of the texts.
APA, Harvard, Vancouver, ISO, and other styles
46

Nagni, Maurizio, and Spiros Ventouras. "Implementation of UML Schema in Relational Databases." International Journal of Distributed Systems and Technologies 4, no. 4 (October 2013): 50–60. http://dx.doi.org/10.4018/ijdst.2013100105.

Full text
Abstract:
Numerous disciplines require information concerning phenomena implicitly or explicitly associated with a location relative to the Earth. Disciplines using Geographic Information (GI) in particular are those within the earth and physical sciences, and increasingly those within the social science and medical fields. Geographic datasets are therefore increasingly being shared, exchanged, and frequently re-purposed for uses beyond their original intended use. Part of the ISO 19100 Geographic Information Standard series, ISO 19136, called Geography Markup Language (GML), defines the rules a data model described using the Unified Modeling Language (UML) has to follow in order for an XSD schema to be generated from it. However, while GML is essential for exchanging data among different organizations, it may not be the best option for persistence or search operations. On the other side, the Relational Database Model (RDBM) has been heavily optimized over the decades for storing and searching data. This paper does not address “how to store a GML-compliant document in an RDBM” but “how to realize an RDBM from an ISO 19100-compliant UML data model”, and within this context it describes the experience and the lessons learnt. The conclusions show how the information contained in such a UML model is able to produce not only representations as a GML schema, but also an RDBM or RDF, without any intermediary step.
APA, Harvard, Vancouver, ISO, and other styles
47

K. Mendez, Patina, Ralph W. Holzenthal, and Joshua W. H. Steiner. "The Trichoptera Literature Database: a collaborative bibliographic resource for world caddisfly research." Zoosymposia 5, no. 1 (June 10, 2011): 331–37. http://dx.doi.org/10.11646/zoosymposia.5.1.25.

Full text
Abstract:
In addition to a list of valid names and synonyms, as provided by the Trichoptera World Checklist, access to the primary literature itself is essential for research in Trichoptera taxonomy and systematics. To improve access to bibliographic information, we established the Trichoptera Literature Database, http://www.trichopteralit.umn.edu, a bibliographic database of over 8,500 citations of literature on Trichoptera. In addition to compiling bibliographical information, we provided access to over 450 high-quality Portable Document Format (PDF) files of historically important, rare, or out-of-print older works as well as more current literature. To provide universal web access to this bibliographical resource, we constructed a dynamic, custom-designed web application (PHP, Symfony framework) that imports Extensible Markup Language (XML) from the EndNote data file. The database allows the user to search by author and year of publication, displays citations in a standard bibliographic format, and provides download links to available PDF literature. Existing bibliographies of Trichoptera literature and online access to Zoological Record databases were used to accumulate citations. Protocols for scanning literature, issues regarding copyright, and procedures for uploading citations and PDFs to the database are established. We hope to create a collaborative framework of contributors by seeking regional, subject, or language organizers from the community of Trichoptera workers to assist in completing and maintaining this resource, with the goal of lowering barriers to efficient access to taxonomic information.
APA, Harvard, Vancouver, ISO, and other styles
48

Ugwu, Chimezie F., Henry O. Osuagwu, and Donatus I. Bayem. "Intranet-Based Wiki with Instant Messaging Protocol." European Journal of Electrical Engineering and Computer Science 5, no. 4 (July 18, 2021): 10–19. http://dx.doi.org/10.24018/ejece.2021.5.4.340.

Full text
Abstract:
This research developed an Intranet-Based Wiki with Instant Messaging Protocol (IBWIMP) for the staff of the Department of Computer Science, University of Nigeria, Nsukka, to enable them to collaborate on tasks such as writing documents (memos, project guidelines, proposals/grants, and circulars) with online security in mind. The aim of this work is to improve the contributions staff make in carrying out tasks with their colleagues, irrespective of each person's location at any point in time. The existing system requires staff to be present within the department before they can carry out the tasks assigned to them or respond to the mail in their mail boxes located in the department's general office. Consequently, mail or documents requiring a staff member's urgent attention are often delayed in processing, which can cause serious damage. This research established a more secure internet connection for the system and the documents therein through virtual private server (VPS) hosting on a virtual private network (VPN). The system allows collaboration between the staff of the department and external persons, or partners classified as external staff users, on documents such as circulars that normally come from outside the department. It automatically sends emails to the relevant users whenever the admin posts a document, via the Simple Mail Transfer Protocol (SMTP). The system is accessed online by users from any location once there is an accessible internet connection, and users can collaborate on the development of any posted document at the same time. The application was designed using Object-Oriented Analysis and Design Methodology (OOADM) and implemented using Hypertext Markup Language (HTML), JavaScript, Cascading Style Sheets (CSS), CKEditor, Hypertext Pre-Processor (PHP) and the MySQL database management system.
APA, Harvard, Vancouver, ISO, and other styles
49

Jeganathan, Vishnu, Linda Rautela, Simon Conti, Krisha Saravanan, Alyssa Rigoni, Marnie Graco, Liam M. Hannan, Mark E. Howard, and David J. Berlowitz. "Typical within and between person variability in non-invasive ventilator derived variables among clinically stable, long-term users." BMJ Open Respiratory Research 8, no. 1 (March 2021): e000824. http://dx.doi.org/10.1136/bmjresp-2020-000824.

Full text
Abstract:
Background: Despite increasing capacity to remotely monitor non-invasive ventilation (NIV), how remote data varies from day to day and person to person is poorly described. Methods: Single-centre, 2-month, prospective study of clinically stable adults on long-term NIV which aimed to document NIV-device variability. Participants were switched to a ventilator with tele-monitoring capabilities. Ventilation settings and masking were not altered. Raw, Extensible Markup Language data files were provided directly from Philips Respironics (EncoreAnywhere). A nested analysis of variance was conducted on each ventilator variable to apportion the relative variation between and within participants. Results: Twenty-nine people were recruited (four withdrew, one had insufficient data for analyses; 1364 days of data). Mean age was 54.0 years (SD 18.4), 58.3% were male, and body mass index was 37.0 kg/m2 (13.7). Mean adherence was 8.53 (2.23) hours/day and all participants had adherence >4 hours/day. Variance in ventilator-derived indices was predominantly driven by differences between participants: usage (61% between vs 39% within), Apnoea–Hypopnoea Index (71% vs 29%), unintentional (64% vs 36%) and total leak (83% vs 17%), tidal volume (93% vs 7%), minute ventilation (92% vs 8%), respiratory rate (92% vs 8%) and percentage of triggered breaths (93% vs 7%). Interpretation: In this clinically stable cohort, all device-derived indices were more varied between users than the day-to-day variation within individuals. We speculate that normative ranges and thresholds for clinical intervention need to be individualised, and further research is necessary to determine the clinically important relationships between clinician targets for therapy and patient-reported outcomes.
APA, Harvard, Vancouver, ISO, and other styles
50

Yi, Myongho. "Exploring the quality of government open data." Electronic Library 37, no. 1 (February 4, 2019): 35–48. http://dx.doi.org/10.1108/el-06-2018-0124.

Full text
Abstract:
Purpose: The use of “open data” can help the public find value in various areas of interest. Many governments have created and published a huge amount of open data; however, people have a hard time using open data because of data quality issues. The UK, the USA and Korea have created and published open data; however, the rate of open data implementation and the level of open data impact are very low because of data quality issues such as incompatible data formats and incomplete data. This study aims to compare the status of data quality on open government sites in the UK, the USA and Korea and also to present guidelines for publishing data formats and enhancing data completeness. Design/methodology/approach: This study uses statistical analysis of different data formats and examination of data completeness to explore key issues of data quality in open government data. Findings: Findings show that the USA and the UK have published more than 50 per cent of their open data at level one, while Korea has published 52.8 per cent of its data at level three. Level one data are not machine-readable; therefore, users have a hard time using them. The level one data are found in Portable Document Format (PDF) and Hypertext Markup Language (HTML) and are locked up in documents; therefore, machines cannot extract the data. Findings show that incomplete data exist in all three governments' open data. Originality/value: Governments should investigate data incompleteness across all open data and correct incomplete data in the most used data sets. Governments can find the most used data easily by monitoring which data sets have been downloaded most frequently over a certain period.
APA, Harvard, Vancouver, ISO, and other styles