
Journal articles on the topic "HTML (Document markup language)"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic "HTML (Document markup language)".

Next to every source in the list of references there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Explore journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Haghish, E. F. "Markdoc: Literate Programming in Stata". Stata Journal: Promoting communications on statistics and Stata 16, no. 4 (December 2016): 964–88. http://dx.doi.org/10.1177/1536867x1601600409.

Full text
Abstract
Rigorous documentation of the analysis plan, procedure, and computer codes enhances the comprehensibility and transparency of data analysis. Documentation is particularly critical when the codes and data are meant to be publicly shared and examined by the scientific community to evaluate the analysis or adapt the results. The popular approach for documenting computer codes is known as literate programming, which requires preparing a trilingual script file that includes a programming language for running the data analysis, a human language for documentation, and a markup language for typesetting the document. In this article, I introduce markdoc, a software package for interactive literate programming and generating dynamic-analysis documents in Stata. markdoc recognizes Markdown, LaTeX, and HTML markup languages and can export documents in several formats, such as PDF, Microsoft Office .docx, OpenOffice and LibreOffice .odt, LaTeX, HTML, ePub, and Markdown.
2

White, Jason. "Using Markup Languages for Accessible Scientific, Technical, and Scholarly Document Creation". Journal of Science Education for Students with Disabilities 25, no. 1 (December 15, 2022): 1–22. http://dx.doi.org/10.14448/jsesd.14.0005.

Full text
Abstract
In using software to write a scientific, technical, or other scholarly document, authors have essentially two options. They can either write it in a ‘what you see is what you get’ (WYSIWYG) editor such as a word processor, or write it in a text editor using a markup language such as HTML, LaTeX, Markdown, or AsciiDoc. This paper gives an overview of the latter approach, focusing on both the non-visual accessibility of the writing process, and that of the documents produced. Currently popular markup languages and established tools associated with them are introduced. Support for mathematical notation is considered. In addition, domain-specific programming languages for constructing various types of diagrams can be well integrated into the document production process. These languages offer interesting potential to facilitate the non-visual creation of graphical content, while raising insufficiently explored research questions. The flexibility with which documents written in current markup languages can be converted to different output formats is emphasized. These formats include HTML, EPUB, and PDF, as well as file formats used by contemporary word processors. Such conversion facilities can serve as means of enhancing the accessibility of a document both for the author (during the editing and proofreading process) and for those among the document’s recipients who use assistive technologies, such as screen readers and screen magnifiers. Current developments associated with markup languages and the accessibility of scientific or technical documents are described. The paper concludes with general commentary, together with a summary of opportunities for further research and software development.
3

Power, Richard, Donia Scott, and Nadjet Bouayad-Agha. "Document Structure". Computational Linguistics 29, no. 2 (June 2003): 211–60. http://dx.doi.org/10.1162/089120103322145315.

Full text
Abstract
We argue the case for abstract document structure as a separate descriptive level in the analysis and generation of written texts. The purpose of this representation is to mediate between the message of a text (i.e., its discourse structure) and its physical presentation (i.e., its organization into graphical constituents like sections, paragraphs, sentences, bulleted lists, figures, and footnotes). Abstract document structure can be seen as an extension of Nunberg's "text-grammar"; it is also closely related to "logical" markup in languages like HTML and LaTeX. We show that by using this intermediate representation, several subtasks in language generation and language understanding can be defined more cleanly.
4

Hussein Toman, Sarah. "THE DESIGN OF A TEMPLATING LANGUAGE TO EMBED DATABASE QUERIES INTO DOCUMENTS". Journal of Education College Wasit University 1, no. 29 (January 16, 2018): 512–34. http://dx.doi.org/10.31185/eduj.vol1.iss29.168.

Full text
Abstract
Presenting information from a database to a human readership is one of the usual tasks in software development. Commonly, an imperative language (such as PHP, C#, Java, etc.) is used to query a database system and populate the application's GUI, a web page, or a printed report (referred to from now on as Presentation Media) with the desired information. Virtually all database systems are now capable of formatting, sorting, and grouping the data stored in a database and, last but not least, of performing calculations against it. This is usually enough to prepare the information that is going to be shown on screen or paper, which leaves just one role for the imperative code: to glue the query results to the Presentation Media. This code tends to become repetitive and grows proportionally with the complexity of the Presentation Media. The need for software developers to write this imperative code can be eliminated, though. Instead, the markup code (HTML, LaTeX, etc.) can be given the ability to bind its elements directly to the database system. To achieve this, I propose mixing the Presentation Media's markup code with a Templating Language. This paper elaborates the design of a Templating Language, a declarative language that adds annotations to any markup code describing what data will be queried and how it should be integrated into the document. For this markup code to be consumed, it is not necessary to implement any database query abilities in the process that renders it. Instead, a preprocessor is invoked to interpret the Templating Language, connect to the database system, query the desired data, and generate the final markup code.
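As a rough illustration of the general idea described in this abstract (markup annotated with declarative query placeholders that a preprocessor expands), here is a minimal Python sketch. The {{query: ...}} annotation syntax, the products table, and the rendering into <li> elements are all invented for this example and are not the paper's actual templating language.

import re
import sqlite3

# Hypothetical annotation syntax: {{query: <SQL>}} placed anywhere in the markup.
# The preprocessor replaces each placeholder with rendered rows, so the template
# itself never needs imperative "glue" code.
TEMPLATE = """
<html>
  <body>
    <h1>Product catalogue</h1>
    <ul>
      {{query: SELECT name, price FROM products ORDER BY price}}
    </ul>
  </body>
</html>
"""

def preprocess(template: str, conn: sqlite3.Connection) -> str:
    """Expand every {{query: ...}} annotation into <li> rows built from the query results."""
    def render(match: re.Match) -> str:
        rows = conn.execute(match.group(1)).fetchall()
        return "\n".join(f"<li>{' - '.join(str(v) for v in row)}</li>" for row in rows)
    return re.sub(r"\{\{query:\s*(.+?)\s*\}\}", render, template, flags=re.DOTALL)

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE products (name TEXT, price REAL)")
    conn.executemany("INSERT INTO products VALUES (?, ?)",
                     [("Keyboard", 25.0), ("Monitor", 120.0)])
    print(preprocess(TEMPLATE, conn))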
5

Situmorang, Erwin Samuel, F. V. Astrolabe Sian Prasetya, and Annafi Franz. "Geographic Information System for Fishermen's Market Kutai Kartanegara Marine and Fishery Service". Journal of Geomatics Engineering, Technology, and Science 1, no. 2 (March 5, 2023): 60–65. http://dx.doi.org/10.51967/gets.v1i2.22.

Full text
Abstract
Data-processing activities in data management always strive to keep data ready to be presented to anyone who needs it. A Geographic Information System (GIS) is an information system designed to work with spatially referenced data or geographic coordinates; in other words, a GIS is a database system with special capabilities for handling spatially referenced data, together with a set of work operations. This study uses two methods of data collection, namely interviews and document study. An interview is a data collection method carried out by seeking information or data directly from sources so that the data obtained are more accurate, while in document study the researchers rely on documents as one source of data to support the research. The Geographic Information System of the Fishermen's Market of the Kutai Kartanegara Marine and Fisheries Service was built with several languages, including Hypertext Markup Language (HTML). The system was developed using the Waterfall model, which starts with the stages of analysis, design, coding, testing, and maintenance. The system aims to make it easier for service staff to input fish prices and for the public to see fish prices and market locations.
6

Firdian, Maulana Irfan, Eko Darwiyanto, and Monterico Adrian. "Web Scraping with HTML DOM Method for Website News API creation". JIPI (Jurnal Ilmiah Penelitian dan Pembelajaran Informatika) 7, no. 4 (November 15, 2022): 1211–19. http://dx.doi.org/10.29100/jipi.v7i4.3235.

Full text
Abstract
Information is one of the important things in this era, and news is a kind of information that appears every day. The amount of news published every day becomes a problem when news websites do not provide API (Application Programming Interface) services to retrieve it, which is an obstacle for researchers who want to analyze news topics. The copy-and-paste approach is not effective for collecting news from websites every day because it takes a long time. In this research, web scraping is done with the HTML (Hypertext Markup Language) DOM (Document Object Model) method to retrieve data from news sites. The results of web scraping take the form of datasets, which are then entered into a database and exposed as an API. The API is tested using black-box testing and by checking the conformity between the data obtained during scraping and the data on the news website at the time of testing. The black-box tests show that the filters on the API work as intended and achieve a high percentage of data conformity: Tribunnews.com has a conformity rate of 99.2%, Detik.com of 97.9%, and Liputan6.com of 98.6%.
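A minimal sketch of DOM-based scraping in the spirit described above, using Python with requests and BeautifulSoup. The URL and the CSS selector are placeholders for illustration, not the selectors the authors used for Tribunnews.com, Detik.com, or Liputan6.com.

import requests
from bs4 import BeautifulSoup  # pip install requests beautifulsoup4

def scrape_headlines(url: str) -> list[dict]:
    """Fetch a page and walk its DOM to pull out headline/link pairs."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")   # parse the HTML into a DOM tree
    articles = []
    # The tag and selector below are placeholders; a real scraper would use
    # the selectors that match the target news site's markup.
    for node in soup.select("article h2 a"):
        articles.append({"title": node.get_text(strip=True),
                         "url": node.get("href")})
    return articles

if __name__ == "__main__":
    for item in scrape_headlines("https://example.com/news"):
        print(item["title"], "->", item["url"])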
7

Sun, Wu, and David Bernstein. "XML-Based Transit Timetable System". Transportation Research Record: Journal of the Transportation Research Board 1804, no. 1 (January 2002): 151–61. http://dx.doi.org/10.3141/1804-20.

Full text
Abstract
The Internet and the World Wide Web are used for a wide variety of transportation applications. Most of these applications use static HTML documents. However, outside of transportation, considerably more attention is being given to dynamic content and XML. A way in which these technologies can be used to provide transit timetable information on the web is explored. Specifically, the transit timetable system, an online interactive transit timetable information exchange and administration system that uses Java server pages and the transit timetable markup language, is described.
8

Rhodus, Tim. "Publishing Newsletters on the World Wide Web Using Database Software". HortScience 31, no. 4 (August 1996): 588e–588. http://dx.doi.org/10.21273/hortsci.31.4.588e.

Full text
Abstract
Preparing newsletters for distribution over the World Wide Web generally requires one to learn HTML (hypertext markup language), purchase an HTML editor, or convert existing wordprocessing documents through a utility program. As an alternative, an input form was developed for county agents that facilitates the direct publishing of their weekly Buckeye Yard and Garden On-line newsletter over the Internet. Using FileMaker Pro 3.0 for Macintosh and the ROFM acgi script for WebSTAR, agents cut and paste text from their word processing file into specific input boxes on the screen and then submit it to the server located in Columbus. Their newsletter articles are then made available to anyone on the Web through a searchable database that allows for searching by date or title. Preparation of the input form and corresponding search form creates two distinct advantages: county agents do not have to spend time learning about HTML coding and all their newsletters are indexed in a searchable database with no additional effort by the site manager. Modification of this procedure has been done to facilitate the creation of online term projects for students and a directory for horticultural internships.
9

Ugwu, Chimezie F., Henry O. Osuagwu, and Donatus I. Bayem. "Intranet-Based Wiki with Instant Messaging Protocol". European Journal of Electrical Engineering and Computer Science 5, no. 4 (July 18, 2021): 10–19. http://dx.doi.org/10.24018/ejece.2021.5.4.340.

Full text
Abstract
This research developed an Intranet-Based Wiki with Instant Messaging Protocol (IBWIMP) for the staff of the Department of Computer Science, University of Nigeria, Nsukka, to enable them to collaborate on tasks such as writing documents (memos, project guidelines, proposals/grants, and circulars) with online security in mind. The essence of this work is to improve the contributions staff make while carrying out tasks with their colleagues, irrespective of a person's location at that point in time. The existing system requires the presence of staff within the department before they can carry out the tasks meant for them or respond to the mail in their mailboxes located within the general office of the department. As a result, there are delays in processing mail or documents that require the urgent attention of the staff concerned, which can cause serious damage. This research established a better internet connection for the security of the system and the documents therein through the use of virtual private server (VPS) hosting on a virtual private network (VPN). The system allows collaboration between the staff of the department and external persons, or partners classified as external staff users, on documents such as circulars that normally come from outside the department. The system automatically sends emails to the relevant users whenever the admin posts a document, via the Simple Mail Transfer Protocol (SMTP). The system is accessed online by users from any location once there is an internet connection, and users can collaborate on the development of any posted document at the same time. The application was designed using Object-Oriented Analysis and Design Methodology (OOADM) and implemented using Hypertext Markup Language (HTML), JavaScript, Cascading Style Sheets (CSS), CKEditor, Hypertext Preprocessor (PHP), and the MySQL database management system.
10

Liu, Xin. "Wireless Network Communication in the XML Metadata Storage of Wushu Historical Archives". Wireless Communications and Mobile Computing 2021 (November 8, 2021): 1–13. http://dx.doi.org/10.1155/2021/5171713.

Full text
Abstract
The wireless communication network greatly facilitates data processing. As an important standard for internet data transmission, the XML markup language is currently widely used on the internet, and the rapid development of XML has brought fresh blood to database research. This article studies the application of wireless network communication in the storage of the XML metadata database of the Wushu Historical Archives. One part uses data load-balancing algorithms and two-way path constraint algorithms to study the metadata storage of XML databases, covering aspects such as document size, document loading, and document query; combining wireless technology and comparing with SGML and HTML, it highlights the flexibility and importance of the XML database. The other part analyzes the XML database of the Wushu Historical Archives itself and uses questionnaire surveys, combined with the experience of 30 randomly sampled users, to provide constructive suggestions for the construction of the XML database, such as strengthening the design of the operation pages, text, and graphics. The experimental results show that the time consumed by the XML database with bidirectional path indexing is 0.37 s, while the time consumed with unidirectional path indexing is 0.27 s. This shows that wireless network communication technology has played an important role in making the XML database easy to update and flexible. The data load capacity of the two path indexes is inconsistent, and current experience with the XML database mainly focuses on the number of database copies.
11

Maulana, M. Rizqi. "Detection of Attacks on Apache2 Web Server Using Genetic Algorithm Based On Jaro Winkler Algorithm". JOURNAL OF INFORMATICS AND TELECOMMUNICATION ENGINEERING 4, no. 1 (July 20, 2020): 185–92. http://dx.doi.org/10.31289/jite.v4i1.3873.

Full text
Abstract
A web server is software that provides data services by answering HTTP (Hypertext Transfer Protocol) requests with responses in the form of HTML (Hypertext Markup Language) documents, with the aim of managing data such as text files, images, videos, and other files. Managing large amounts of data requires good security monitoring so that the data stored on the web server is not easily hacked. To protect the web server from hackers, an application is needed to detect activities that are considered suspicious or that may indicate hacking. By processing the web server's logs with the Jaro-Winkler algorithm, hacking attempts can be detected, producing a similarity matrix and hacking activity reports for the admin. The web server admin can thus see suspicious activity on the web server directly. Keywords: Web Server, Jaro-Winkler Algorithm.
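For readers unfamiliar with the string metric mentioned above, the following is a plain-Python sketch of Jaro-Winkler similarity applied to a log line. The attack signatures and the 0.9 threshold are illustrative assumptions, not the paper's configuration.

def jaro(s1: str, s2: str) -> float:
    """Jaro similarity between two strings."""
    if s1 == s2:
        return 1.0
    if not s1 or not s2:
        return 0.0
    window = max(len(s1), len(s2)) // 2 - 1
    matched1 = [False] * len(s1)
    matched2 = [False] * len(s2)
    matches = 0
    for i, c in enumerate(s1):
        lo, hi = max(0, i - window), min(len(s2), i + window + 1)
        for j in range(lo, hi):
            if not matched2[j] and s2[j] == c:
                matched1[i] = matched2[j] = True
                matches += 1
                break
    if matches == 0:
        return 0.0
    # Count transpositions among the matched characters.
    k = transpositions = 0
    for i, c in enumerate(s1):
        if matched1[i]:
            while not matched2[k]:
                k += 1
            if c != s2[k]:
                transpositions += 1
            k += 1
    transpositions //= 2
    return (matches / len(s1) + matches / len(s2)
            + (matches - transpositions) / matches) / 3

def jaro_winkler(s1: str, s2: str, p: float = 0.1) -> float:
    """Boost the Jaro score for strings sharing a common prefix (up to 4 chars)."""
    j = jaro(s1, s2)
    prefix = 0
    for a, b in zip(s1[:4], s2[:4]):
        if a != b:
            break
        prefix += 1
    return j + prefix * p * (1 - j)

# Illustrative use: flag log requests that closely resemble known attack strings.
signatures = ["/etc/passwd", "union select", "<script>"]
request = "GET /index.php?page=../../etc/passwd HTTP/1.1"
for sig in signatures:
    score = max(jaro_winkler(sig, request[i:i + len(sig)])
                for i in range(max(1, len(request) - len(sig) + 1)))
    if score > 0.9:          # threshold chosen only for illustration
        print(f"possible attack pattern '{sig}' (score {score:.2f})")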
12

Shivhare, Kratika. "ResumeCraft: A Machine Learning-powered Web Platform for Resume Building". International Journal for Research in Applied Science and Engineering Technology 12, no. 5 (May 31, 2024): 1154–66. http://dx.doi.org/10.22214/ijraset.2024.61731.

Full text
Abstract
Abstract: The competitive job market necessitates well-crafted resumes that resonate with both human recruiters and Applicant Tracking Systems (ATS). This paper introduces ResumeCraft, a web-based platform empowering users to build strong resumes and optimize them for ATS compatibility. ResumeCraft leverages Machine Learning (ML) for data analysis and user guidance, while the user interface is built with Hypertext Markup Language (HTML), Cascading Style Sheets (CSS), and JavaScript for a user-friendly experience. The system allows users to input their personal and professional details through a series of form fields, and provides a real-time preview of the resume design as the user inputs their data. The resume generator uses JavaScript to dynamically populate the preview with the user's input, and allows users to select from a range of pre-designed templates and color schemes to customize the look and feel of their resume. It processes the user input and generates a downloadable Portable Document Format (PDF) of the final resume. The platform analyzes user-provided information through ML models, offering suggestions on skill extraction, keyword matching, and action verb usage. This combination empowers users to create impactful resumes that are more likely to pass through ATS filters and reach human reviewers.
13

Zhang, Xiao. "Intranet Web System, a Simple Solution to Companywide Information-on-demand". Proceedings, annual meeting, Electron Microscopy Society of America 54 (August 11, 1996): 404–5. http://dx.doi.org/10.1017/s0424820100164489.

Full text
Abstract
Intranet, a private corporate network that mirrors the internet Web structure, is the new internal communication technology being embraced by more than 50% of large US companies. Within the intranet, computers using Web-server software store and manage documents built on the Web’s hypertext markup language (HTML) format. This emerging technology allows disparate computer systems within companies to “speak” to one another using the Internet’s TCP/IP protocol. A “fire wall” allows internal users Internet access, but denies external intruders intranet access. As industrial microscopists, how can we take advantage of this new technology? This paper is a preliminary summary of our recent progress in building an intranet Web system among GE microscopy labs. Applications and future development are also discussed.The intranet Web system is an inexpensive yet powerful alternative to other forms of internal communication. It can greatly improve communications, unlock hidden information, and transform an organization. The intranet microscopy Web system was built on the existing GE corporate-wide Ethernet link running Internet’s TCP/IP protocol (Fig. 1). Netscape Navigator was selected as the Web browser. Web’s HTML documentation was generated by using Microsoft® Internet Assistant software. Each microscopy lab has its own home page. The microscopy Web system is also an integrated part of GE Plastics analytical technology Web system.
14

LIU, CHIEN-HUNG, DAVID C. KUNG, PEI HSIA, and CHIH-TUNG HSU. "AN OBJECT-BASED DATA FLOW TESTING APPROACH FOR WEB APPLICATIONS". International Journal of Software Engineering and Knowledge Engineering 11, no. 02 (April 2001): 157–79. http://dx.doi.org/10.1142/s0218194001000499.

Full text
Abstract
In recent years, Web applications have grown rapidly because of their abilities to provide online information access to anyone at anytime around the world. As Web applications become complex, there is a growing concern about their quality and reliability. This paper extends traditional data flow testing techniques to Web applications. Several data flow issues about analyzing HyperText Markup Language (HTML) documents in Web applications are discussed. An object-based data flow testing approach is presented. The approach is based on a test model that captures data flow test artifacts of Web applications. In the test model, each entity of a Web application is modeled as an object. The data flow information of the functions within an object or across objects is then captured using various flow graphs. Based on the object-based test model, data flow test cases for a Web application can be systematically and selectively generated in five different levels.
15

Yi, Myongho. "Exploring the quality of government open data". Electronic Library 37, no. 1 (February 4, 2019): 35–48. http://dx.doi.org/10.1108/el-06-2018-0124.

Full text
Abstract
Purpose The use of “open data” can help the public find value in various areas of interests. Many governments have created and published a huge amount of open data; however, people have a hard time using open data because of data quality issues. The UK, the USA and Korea have created and published open data; however, the rate of open data implementation and level of open data impact is very low because of data quality issues like incompatible data formats and incomplete data. This study aims to compare the statuses of data quality from open government sites in the UK, the USA and Korea and also present guidelines for publishing data format and enhancing data completeness. Design/methodology/approach This study uses statistical analysis of different data formats and examination of data completeness to explore key issues of data quality in open government data. Findings Findings show that the USA and the UK have published more than 50 per cent of open data in level one. Korea has published 52.8 per cent of data in level three. Level one data are not machine-readable; therefore, users have a hard time using them. The level one data are found in portable document format and hyper text markup language (HTML) and are locked up in documents; therefore, machines cannot extract out the data. Findings show that incomplete data are existing in all three governments’ open data. Originality/value Governments should investigate data incompleteness of all open data and correct incomplete data of the most used data. Governments can find the most used data easily by monitoring data sets that have been downloaded most frequently over a certain period.
16

Herlambang, Admaja Dwi, and Satrio Hadi Wijoyo. "Algoritma Naive Bayes untuk Klasifikasi Sumber Belajar Berbasis Teks pada Mata Pelajaran Produktif di SMK Rumpun Teknologi Informasi dan Komunikasi". Jurnal Teknologi Informasi dan Ilmu Komputer 6, no. 4 (July 15, 2019): 430. http://dx.doi.org/10.25126/jtiik.2019641323.

Full text
Abstract
The availability of learning resources for productive subjects is one of the essential components of learning activities in Vocational High Schools, especially in the Information and Communication Technology competence field. Internet or online media are learning resources in the form of electronic media that can be used by students and teachers through the internet. One form of online media is the web page in .html (Hypertext Markup Language) format, of which there are very many text documents, so these learning resources need to be grouped based on the essential criteria or characteristics of each productive subject in Vocational High Schools. The grouping process uses the Naive Bayes algorithm because it can be applied to text documents and uses Bayes' theorem under the assumption that all attributes are mutually independent. The purpose of the study was to describe the classification results and evaluate the classification quality of text-based learning resources using the Naive Bayes algorithm. The stages of the research were collecting the data set, pre-processing with text mining, Tf-Idf weighting, Naive Bayes classification, and accuracy evaluation. The text classification produced nine productive subject groups, and testing showed a highest accuracy of 81.48% and a lowest accuracy of 79.63%.
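As a small, hedged sketch of the Tf-Idf weighting plus Naive Bayes classification pipeline summarized above, using scikit-learn rather than the authors' implementation, and toy documents and labels instead of the study's actual .html learning resources:

# pip install scikit-learn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

docs = [
    "installing and configuring a web server",
    "html tags for structuring a web page",
    "routing and switching in computer networks",
    "subnetting and ip addressing exercises",
]
labels = ["web", "web", "networking", "networking"]

# TF-IDF turns each document into a weighted term vector; Naive Bayes then
# learns per-class term distributions from those vectors.
model = make_pipeline(TfidfVectorizer(stop_words="english"), MultinomialNB())
model.fit(docs, labels)

print(model.predict(["ip subnetting in a computer network"]))  # expected: ['networking']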
17

Wadzani A. Gadzama, Ogah U. S., and Ibrahim Bashir Tukur. "Web-based statement of result verification system for Federal Polytechnic, Mubi". World Journal of Advanced Research and Reviews 15, no. 3 (September 30, 2022): 386–93. http://dx.doi.org/10.30574/wjarr.2022.15.3.0937.

Full text
Abstract
Over the years there have been continuous requests from organizations, academic institutions, recruiters, and employers to verify graduates' statements of results from Federal Polytechnic, Mubi. This occurs as a result of the delay in issuing certificates, which are expected to be ready three years after students graduate, as is the practice at other institutions in the country. This issue has been costing organizations, institutions, and the Polytechnic time and resources over the years. This paper aims to design an online result verification system that provides easy, fast, and real-time verification of students' statements of results. Such a system will reduce the level of result forgery, ease the burden on the Polytechnic, and reduce the verification time currently spent on physical document verification. The current process requires an employer or anybody concerned to send a representative to the Exams and Records Unit of the Polytechnic to verify a particular statement of result, which takes a long time and is not cost-effective. This paper designed and implemented the proposed web-based statement of result verification system using Hypertext Markup Language (HTML), Cascading Style Sheets (CSS), JavaScript, Hypertext Preprocessor (PHP), a MySQL database, and a Windows, Apache, MySQL, and PHP (WAMP) server. Unified Modeling Language (UML) diagrams and the system architecture were designed for the proposed system. The proposed system processes student results with an embedded Quick Response (QR) code and a Result Verification Code (RVC), which help organizations verify students' statements of results in real time. The system could be integrated into an institution's existing official portal.
18

Zhezherun, Oleksandr, and Maksym Ryepkin. "Automatic Generation of Ontologies Based on Articles Written in Ukrainian Language". NaUKMA Research Papers. Computer Science 5 (February 24, 2023): 12–15. http://dx.doi.org/10.18523/2617-3808.2022.5.12-15.

Full text
Abstract
The article presents a system capable of generating new ontologies or supplementing existing ones based on articles in Ukrainian. Ontologies are described and an algorithm suitable for automated concept extraction from natural language texts is presented. Ontology as a technology has become an increasingly important topic in contemporary research. Since the creation of the Semantic Web, ontology has become a solution to many problems of understanding natural language by computers. If an ontology existed and was used to analyze documents, then we would have systems that could answer very complex queries in natural language. Google's success showed that loading HTML pages is much easier than marking everything with semantic markup, wasting human intellectual resources. To find a solution to this problem, a new direction in the ontological field, called ontological engineering, has appeared. This direction began to study ways of automating the generation of knowledge, which would be consolidated by an ontology from the text. Humanity generates more data every day than yesterday. One of the main levers today in the choice of technologies for the implementation of new projects is whether it can cope with this flow of data, which will increase every day. Because of this, some technologies come to the fore, such as machine learning, while others recede to the periphery, due to the impossibility or lack of time to adapt to modern needs, as happened with ontologies. The main reason for the decrease in the popularity of ontologies was the need to hire experts for its construction and the lack of methods for automated construction of ontologies. This article considers the problem of automated ontology generation using articles from the Ukrainian Wikipedia, and geometry was taken as an example of the subject area. A system was built that collects data, analyzes it, and forms an ontology from it.
19

Yasin, Syed Ahmed, and P. V. R. D. Prasada Rao. "Enhanced CRNN-Based Optimal Web Page Classification and Improved Tunicate Swarm Algorithm-Based Re-Ranking". International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 30, no. 05 (October 2022): 813–46. http://dx.doi.org/10.1142/s0218488522500246.

Full text
Abstract
The main intention of this paper is to develop a new intelligent framework for web page classification and re-ranking. The two main phases of the proposed model are (a) classification and (b) re-ranking-based retrieval. In the classification phase, pre-processing is initially performed, following steps such as HTML (Hyper Text Markup Language) tag removal, punctuation marks removal, stop words removal, and stemming. After pre-processing, word-to-vector formation is done and then feature extraction is performed by Principal Component Analysis (PCA). From this, optimal feature selection is accomplished, which is an important process for the accurate classification of web pages. Web pages contain many features, which reduces classification accuracy. Here, a new meta-heuristic algorithm termed the Opposition-based Tunicate Swarm Algorithm (O-TSA) is adopted to perform the optimal feature selection. Finally, the selected features are subjected to the Enhanced Convolutional-Recurrent Neural Network (E-CRNN) for accurate web page classification with enhancement based on O-TSA. The outcome of this phase is the categorization of different web page classes. In the second phase, re-ranking is performed using the O-TSA, which derives the objective function based on a similarity function (correlation) for URL matching, resulting in optimal re-ranking of web pages for retrieval. Thus, the proposed method yields better classification and re-ranking performance and reduces space requirements and search time in web documents compared with existing methods.
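A brief sketch of the pre-processing steps named in this abstract (HTML tag removal, punctuation removal, stop-word removal, stemming), assuming NLTK's PorterStemmer and a tiny placeholder stop-word list; the paper's word-to-vector, PCA, and O-TSA stages are not reproduced here.

import re
import string
from nltk.stem import PorterStemmer  # pip install nltk

STOP_WORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "for"}
stemmer = PorterStemmer()

def preprocess(html: str) -> list[str]:
    text = re.sub(r"<[^>]+>", " ", html)                       # strip HTML tags
    text = text.translate(str.maketrans("", "", string.punctuation)).lower()
    tokens = [t for t in text.split() if t not in STOP_WORDS]  # drop stop words
    return [stemmer.stem(t) for t in tokens]                   # stem what remains

print(preprocess("<html><body><h1>Ranking of the retrieved pages</h1></body></html>"))
# e.g. ['rank', 'retriev', 'page']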
20

Schwickert, Axel C. "HTML - Hypertext Markup Language". Informatik-Spektrum 20, no. 3 (June 20, 1997): 168–69. http://dx.doi.org/10.1007/s002870050065.

Full text
21

Kitaev, Evgeny L’vovich, and Rimma Yuryevna Skornyakova. "Leveraging Semantic Markups for Incorporating External Resources Data to the Content of a Web Page". Russian Digital Libraries Journal 23, no. 3 (May 9, 2020): 494–513. http://dx.doi.org/10.26907/1562-5419-2020-23-3-494-513.

Full text
Abstract
The semantic markups of the World Wide Web have accumulated a large amount of data, and their number continues to grow. However, the potential of these data is, in our opinion, not fully utilized. The contents of semantic markups are widely used by search systems and partly by social networks, but the usual approach for application developers is to convert the data to the RDF standard and execute SPARQL queries, which requires good knowledge of this language and programming skills. In this paper, we propose to leverage the semantic markups available on the Web to automatically incorporate their contents into the content of other web pages. We also present a software tool for implementing such incorporation that does not require a web page developer to know any programming languages other than HTML and CSS. The developed tool does not require installation; the work is performed by JavaScript plugins. Currently, the tool supports semantic data contained in the popular markup types "microdata" and JSON-LD, in the tags of HTML documents, and in the properties of Word and PDF documents.
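The tool itself is implemented as JavaScript plugins; purely as an illustration of the kind of embedded semantic data it consumes, here is a short Python sketch that pulls JSON-LD out of a page. The sample page and its properties are invented.

import json
from bs4 import BeautifulSoup  # pip install beautifulsoup4

PAGE = """
<html><head>
<script type="application/ld+json">
{"@context": "https://schema.org", "@type": "Article",
 "headline": "HTML (Document markup language)", "datePublished": "2020-05-09"}
</script>
</head><body>...</body></html>
"""

soup = BeautifulSoup(PAGE, "html.parser")
for block in soup.find_all("script", type="application/ld+json"):
    data = json.loads(block.string)
    # Once parsed, these properties could be injected into another page's content.
    print(data.get("@type"), "-", data.get("headline"), data.get("datePublished"))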
22

Yusuf F, Akhmad, Ilyas Nuryasin, and Zamah Sari. "Optimasi Kecepatan Loading Time Web Template Dengan Implementasi Teknik Front-End". Jurnal Repositor 2, no. 11 (December 4, 2020): 1456. http://dx.doi.org/10.22219/repositor.v2i11.746.

Full text
Abstract
A website is a collection of static Hypertext Markup Language (HTML) documents built to make it easy for everyone to share information, as long as they are connected to the internet. One part of a website system is the web template, a basic component that makes it easy for web developers to redesign a web page. One factor that affects the performance of web pages is loading time, the time needed by the browser to display the full web page after the user makes a request; loading time is an important part of website optimization. Optimization is the process of modifying or changing something that already exists in order to increase its effectiveness. For a website, several concepts are involved in optimization, namely First Paint, Time To Interactivity (TTI), First Meaningful Paint (FMP), and Long Tasks. Based on existing research, web loading time optimization can be done on the front-end side. Therefore, this study applies optimization techniques using the critical rendering path, above-the-fold loading, resource prioritization, bundling and minification, gzip compression, and code splitting. Measured by the First Meaningful Paint (FMP), First Contentful Paint (FCP), and Time To Interactivity (TTI) metrics, web performance improved by an average of 73% for FMP, 60% for FCP, 50% for TTI, and 29% for loading time. In addition, the average resource file size decreased by 59% and the number of file requests decreased by 21%.
23

Silva Sánchez, Gastón. "Conociendo Html 5". Journal Boliviano de Ciencias 11, no. 34 (August 30, 2015): 37–40. http://dx.doi.org/10.52428/20758944.v11i34.699.

Full text
Abstract
HTML5 (HyperText Markup Language, version 5) is the fifth revision of the language of the World Wide Web, HTML, published in October 2014. It brings improvements and innovations that allow developers to implement web-based software solutions. With more advanced features similar to those of traditional desktop applications, since it allows the embedding of video and sound as well as the handling of vector graphics, all without the need to install plugins or additional software, it is an interesting alternative for the development of new applications.
24

Hidayat, Wahyu, and Nugroho Arif Sudibyo. "Penerapan Multimedia Pembelajaran Interaktif Elektronika dengan Framework RAD (Rapid Application Development) Menggunakan HTML". Jurnal Sains dan Edukasi Sains 1, no. 2 (August 30, 2018): 17–24. http://dx.doi.org/10.24246/juses.v1i2p17-24.

Full text
Abstract
Multimedia development using technology can be applied in education, and the multimedia currently being developed for education is electronic learning (e-learning). With the aim of providing an engaging learning experience for an electronics course, the researchers designed and built e-learning multimedia using HTML. HTML (Hypertext Markup Language) is the language used to create web pages; it displays various types of information by marking up a document in a standard ASCII format. The method used is Rapid Application Development (RAD), carried out in four stages: requirements planning, design of the research (system development), implementation of the design, and finally research evaluation. The result of this research is HTML-based multimedia that learners can access. The media can be used by both learners and teachers to make learning electronics more engaging.
25

Ilik, Violeta, Jessica Storlien, and Joseph Olivarez. "Metadata Makeover". Library Resources & Technical Services 58, no. 3 (July 23, 2014): 187. http://dx.doi.org/10.5860/lrts.58n3.187.

Full text
Abstract
Catalogers have become fluent in information technology such as web design skills, HyperText Markup Language (HTML), Cascading Stylesheets (CSS), eXtensible Markup Language (XML), and programming languages. The knowledge gained from learning information technology can be used to experiment with methods of transforming one metadata schema into another using various software solutions. This paper will discuss the use of eXtensible Stylesheet Language Transformations (XSLT) for repurposing, editing, and reformatting metadata. Catalogers have the requisite skills for working with any metadata schema, and if they are excluded from metadata work, libraries are wasting a valuable human resource.
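A minimal sketch of the XSLT-based metadata repurposing the authors describe, using Python's lxml; the source record and the target element names are invented for illustration and do not correspond to any specific metadata schema.

from lxml import etree  # pip install lxml

SOURCE = etree.XML("""
<record>
  <title>Metadata Makeover</title>
  <creator>Ilik, Violeta</creator>
</record>""")

# A tiny stylesheet that renames the source fields into invented target elements.
TRANSFORM = etree.XSLT(etree.XML("""
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/record">
    <dc>
      <dcTitle><xsl:value-of select="title"/></dcTitle>
      <dcCreator><xsl:value-of select="creator"/></dcCreator>
    </dc>
  </xsl:template>
</xsl:stylesheet>"""))

print(str(TRANSFORM(SOURCE)))  # serialized result of applying the stylesheet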
26

Budiarso, Zuly, Eddy Nurraharjo, Yohanes Suhari, and Jati Sasongko Wibowo. "PELATIHAN BAHASA HYPER TEXT MARKUP LANGUAGE (HTML) BAGI KOMUNITAS TANIA ONLINE SEMARANG". Jurnal Pengabdian Masyarakat Intimas (Jurnal INTIMAS): Inovasi Teknologi Informasi Dan Komputer Untuk Masyarakat 1, no. 1 (August 10, 2021): 36–40. http://dx.doi.org/10.35315/intimas.v1i1.8520.

Full text
Abstract
Blogs and websites are now widely used by companies and individuals for business and visibility. Members of the Tania Online Semarang community are currently expected to remain visible and competitive in the information technology era. However, the members' ability to build blogs has not yet been supported by material such as Hyper Text Markup Language (HTML). The community therefore needs assistance so that its members can understand HTML and create and modify blogs for their specific needs.
27

Almeida, Maurício Barcellos. "Uma introdução ao XML, sua utilização na Internet e alguns conceitos complementares". Ciência da Informação 31, no. 2 (August 2002): 5–13. http://dx.doi.org/10.1590/s0100-19652002000200001.

Full text
Abstract
HTML (Hypertext Markup Language) is a markup language, initially conceived as a solution for publishing scientific documents in electronic media, that gained popularity and became the standard for the Internet. Various types of applications, such as browsers, editors, e-mail programs, databases, etc., currently make intensive use of HTML possible. Over the years, features have been added to HTML so that it can meet the expectations of users and computer systems, increasing its complexity. It is estimated that HTML version 4.0 has approximately one hundred different fixed markup elements (known as tags), not counting those specific to each type of Internet browser. It is common to find HTML pages that have more markup than content. A possible solution to new demands in this area is the use of the eXtensible Markup Language (XML), a markup language that can introduce new possibilities and bring better integration between data and users. This article provides an introductory overview of XML, its use on the Internet, and some complementary concepts needed to understand the subject, and it presents the advantages of using XML over HTML. It also aims to present the subject as a fertile field for discussion, proposals, and study by information science professionals.
28

Saadawi, Gilan M., and James H. Harrison. "Definition of an XML Markup Language for Clinical Laboratory Procedures and Comparison with Generic XML Markup". Clinical Chemistry 52, no. 10 (October 1, 2006): 1943–51. http://dx.doi.org/10.1373/clinchem.2006.071449.

Full text
Abstract
Abstract Background: Clinical laboratory procedure manuals are typically maintained as word processor files and are inefficient to store and search, require substantial effort for review and updating, and integrate poorly with other laboratory information. Electronic document management systems could improve procedure management and utility. As a first step toward building such systems, we have developed a prototype electronic format for laboratory procedures using Extensible Markup Language (XML). Methods: Representative laboratory procedures were analyzed to identify document structure and data elements. This information was used to create a markup vocabulary, CLP-ML, expressed as an XML Document Type Definition (DTD). To determine whether this markup provided advantages over generic markup, we compared procedures structured with CLP-ML or with the vocabulary of the Health Level Seven, Inc. (HL7) Clinical Document Architecture (CDA) narrative block. Results: CLP-ML includes 124 XML tags and supports a variety of procedure types across different laboratory sections. When compared with a general-purpose markup vocabulary (CDA narrative block), CLP-ML documents were easier to edit and read, less complex structurally, and simpler to traverse for searching and retrieval. Conclusion: In combination with appropriate software, CLP-ML is designed to support electronic authoring, reviewing, distributing, and searching of clinical laboratory procedures from a central repository, decreasing procedure maintenance effort and increasing the utility of procedure information. A standard electronic procedure format could also allow laboratories and vendors to share procedures and procedure layouts, minimizing duplicative word processor editing. Our results suggest that laboratory-specific markup such as CLP-ML will provide greater benefit for such systems than generic markup.
29

Rahmatika, Rahmatika, Ulfa Pauziah, and Hardian Mursito. "HTML-Based Website Learning Training (Hypertext Markup Languange)". REKA ELKOMIKA: Jurnal Pengabdian kepada Masyarakat 2, no. 1 (July 5, 2021): 19–25. http://dx.doi.org/10.26760/rekaelkomika.v2i1.19-25.

Full text
Abstract
In today's era of globalization, websites are very familiar, and many people use the web easily. Some can build their own site, while others ask for one to be built or simply buy one and use it. Our partners are neighborhood associations 05 and 06, located in Perumnas Depok Timur, whose young women and men are curious about website development, and some of them want to continue their studies at a higher level, majoring in information and communication. Building a website involves several stages, and because our partners are teenagers who are unfamiliar with website creation, we wanted to provide basic training in website creation that will later be useful for them in their environment and community. This training introduced HTML (hypertext markup language). It is hoped that by studying HTML, teenagers can understand how to create a website with HTML.
30

Akers, Katherine G. "Correction to “Satellite stories: capturing professional experiences of academic health sciences librarians working in delocalized health sciences programs” on page 80, 106(1) January. DOI: http://dx.doi.org/10.5195/jmla.2018.214". Journal of the Medical Library Association 106, no. 2 (April 5, 2018): 280. http://dx.doi.org/10.5195/jmla.2018.459.

Full text
Abstract
Corrects author photos in the hypertext markup language (HTML) version of “Satellite stories: capturing professional experiences of academic health sciences librarians working in delocalized health sciences programs” on page 80, 106(1) January. DOI: http://dx.doi.org/10.5195/jmla.2018.214.
31

CHIU, DICKSON K. W., DANNY KOK, ALEX K. C. LEE, and S. C. CHEUNG. "INTEGRATING LEGACY SITES INTO WEB SERVICES WITH WEBXCRIPT". International Journal of Cooperative Information Systems 14, no. 01 (March 2005): 25–44. http://dx.doi.org/10.1142/s0218843005001006.

Full text
Abstract
Despite the recent uprising of the Web Services technology for programmatic interfaces of business-to-business (B2B) E-commerce services (e-services) over the Internet, most existing sites can only support human interactions with Hypertext Markup Language (HTML) through web browsers. Automating third-party client access into Web Services generally requires developing sophisticated programs to simulate human access by handling HTML pages. However, these HTML interfaces vary across web sites, and are often subject to changes. Client maintenance is therefore tedious and expensive. Even for the site owner, it may still require much effort in redeveloping the underlying presentation and application logics. This motivates our study for the requirement and the formulation of a conceptual model for such automation. Based on the requirement, we develop a novel approach to automating dialogs with web-based services (particularly for cross-organizational processes), using a high-level script language, called WebXcript language. The language provides features for HTML forms-based dialogues and eXtensible Markup Language (XML) messaging. The XML syntax of WebXcript further enables convenient user authoring and easy engine development with extensively available XML tools. It supports expected responses and exception handling. We further propose a wrapper architecture based on WebXcript to integrate legacy sites into Web Services, where Web Service Definition Language (WSDL) interfaces are generated from high-level mappings from database or WebXcript parameter definitions. We demonstrate the applicability of our approach with examples in integrating distributed information, online ordering, and XML messaging, together with discussions on our experiences and the advantages of our approach.
32

Liao, Tony. "Standards and Their (Recurring) Stories: How Augmented Reality Markup Language Was Built on Stories of Past Standards". Science, Technology, & Human Values 45, no. 4 (August 4, 2019): 712–37. http://dx.doi.org/10.1177/0162243919867417.

Full text
Abstract
This article focuses on the role of past standards stories and how they are deployed strategically in ways that shape the process of standards creation. It draws upon an ethnographic study over multiple years of standards meetings, discussions, and online activity. Building on existing work that examines how standards are shaped by stories, this study follows the development of Augmented Reality Markup Language and maps how the story of Hypertext Markup Language (HTML) became the key story that actors utilized and debated to push for participation, agreement, and material development of the standard. The authors present several different ways the recurring HTML story was effective at various points in the process as a diagnostic tool, promissory future, empirical evidence, and confidence building measure. Understanding these strategic deployments serves as an empirical example of how recurring stories of the past can shape standards development. These mappings illustrate how standards can be built on past standards sociologically as well as technologically and also broadens our theoretical tools for understanding the importance of stories in the sociology of standards.
33

Hill, Leslie. "Deus ex Machina: Navigating between the Lines". New Theatre Quarterly 14, no. 53 (February 1998): 48–52. http://dx.doi.org/10.1017/s0266464x00011726.

Full text
Abstract
GOOD EVENING. Tonight I've been asked to talk to you about HTML – hypertext markup language – and its performative characteristics; its multimedia capacity; its non-linear structure; its interactive possibilities; its real-time relationship with its readers slash navigators slash audience; and its potential interest to writers cum artists cum performers.
34

Merino Sánchez, Mtro Héctor, and Lic Cynthia Palomino Alarcón. "La CPU-e, Revista de Investigación Educativa en SciELO". CPU-e, Revista de Investigación Educativa, no. 25 (April 23, 2018): 1–4. http://dx.doi.org/10.25009/cpue.v0i25.2534.

Full text
Abstract
El Instituto de Investigaciones en Educación de la Universidad Veracruzana se precia de tener una añeja tradición editorial, a la cual se buscó dar a un giro innovador en 2005, transformando la Colección Pedagógica Universitaria, con 30 años de trayectoria, en una nueva publicación cien por ciento electrónica, la CPU-e, Revista de Investigación Educativa.Con el fin de consolidar la CPU-e, desde sus primeros números se decidió apegarse a criterios editoriales que le otorgaran un sello de calidad y le permitieran tener acceso a índices, bases de datos y directorios que contribuyeran a su visibilidad. La meta original fue cumplir con los lineamientos de Latindex, pues a nuestro juicio, la claridad y los alcances de sus normas eran la guía idónea para cubrir los estándares indispensables en la presentación de los artículos académicos por publicar. De esta forma integramos a la revista elementos que ahora nos parecen evidentes pero que, en su momento, ignorábamos. Mediante esta dinámica de explorar los requisitos de inclusión de diversos índices y bases de datos, fuimos enriqueciendo el perfil de la CPU-e, al hacerlos propios mediante su adaptación a nuestras necesidades institucionales. Por lo que sucesivamente postulamos la revista y fue aceptada en: Latindex (catálogo y directorio), DOAJ, Redalyc, IRESIE y Dialnet.El avance más reciente fue la inclusión de la CPU-e en SciElo. Así como en el momento de su fundación fue la primera revista digital de la UV, ahora es la primera en ingresar a este índice, el de mayor fortaleza a nivel iberoamericano.La relevancia de SciELO radica en que, más allá de ofrecer a los lectores revistas a texto completo, recopila y proporciona datos bibliométricos que permiten medir el impacto de los artículos publicados en la región. El sistema SciElo ha ido evolucionando con el objeto de optimizar la recuperación de la información de cada uno de los elementos que integran un documento.En 2014, SciELO dio inicio a la adopción del estándar XML-JATS como esquema para las publicaciones que alberga. Tras un periodo de transición, en el que se abandonó el anterior esquema basado en HTML, el equipo de Scielo-México está llevando a cabo una extensa capacitación para editores, con el fin de descentralizar las tareas que conlleva implementar el nuevo modelo. En nuestro caso, asumir la responsabilidad de preparar los artículos y llevar a cabo su marcación bajo el nuevo estándar era una condición sine qua non para que la CPU-e fuera aceptada en este índice, una vez que ya se contaba con los requisitos académicos y editoriales que establecen sus normas.La riqueza del XML (eXtensible Markup Language) consiste en que se trata de un metalenguaje de marcación con el cual se asignan etiquetas para identificar los datos de un documento; a diferencia del HTML, cuyas etiquetas están enfocadas en la presentación de los datos, el XML está enfocado en el contenido. Por ejemplo, en HTML el título de un artículo estaría etiquetado así:<H1>Discapacidad y educación superior</H1>lo que indicaría que este texto debe desplegarse centrado, con un puntaje mayor y en negritas, pero sin dar información sobre el tipo de contenido de que se trata. En cambio, en el XML se marcaría como:<doctitle>Discapacidad y educación superior</doctitle>de tal suerte que los sistemas que lleguen a recabar la información de este archivo identificarán el fragmento como el título principal del artículo. 
La norma JATS (Journal Article Tag Suite) es la que define la estructura de un artículo en XML y los componentes que lo integran. De esta forma se pueden marcar todos y cada uno de los elementos bibliográficos que contiene un documento, haciendo posible su identificación inmediata y sin ambigüedades.Esta capacidad del XML es la que lo hace sumamente valioso para preservar y difundir artículos académicos, pues se suma a otra característica no menos importante, la interoperabilidad, que permite que este tipo de archivos sean leídos por una amplia variedad de dispositivos y sistemas operativos. A partir del XML también se pueden generar distintos formatos de archivo: PDF, ePUB y el mismo HTML, ahora enriquecido con nuevos metadatos.El nivel de detalle con que se realiza la marcación de cada uno de los elementos del artículo alcanza su punto más elaborado con la lista de referencias, como puede observarse a continuación:[ref id="r6" reftype="book"][authors role="nd"][pauthor][surname]Browne[/surname], [fname]M. W.[/fname][/pauthor], & [pauthor][surname]Cudeck[/surname], [fname]R.[/fname][/pauthor][/authors] ([date dateiso="19930000" specyear="1993"]1993[/date]). [chptitle]Alternative ways of assessing model fit[/chptitle]. En [authors role="ed"][pauthor][fname]K.[/fname] [surname]Bollen[/surname][/pauthor] & [pauthor][fname]J.[/fname] [surname]Long[/surname][/pauthor][/authors] (Eds.), [source]Testing structural equation models[/source] (pp. [pages]136-162[/pages]). [publoc]Estados Unidos de América[/publoc]: [pubname]Sage[/pubname].[/ref]Así, en el ejemplo anterior, se especifica el tipo de fuente (book); se señalan las partes del nombre de los autores (fname, surname) y su tipo de rol (ed); la fecha se estandariza (dateiso); se marca el título del capítulo (chptitle) y el de la fuente de origen (source); las páginas que comprende el capítulo (pages); el lugar de publicación (publoc) y el editor (pubname).Para realizar la marcación de los artículos fue necesario recibir capacitación por parte del equipo de SciELO. Aunque recién en mayo de 2017 se puso en línea el primer número de la CPU-e en el portal de este índice, el proceso comenzó en noviembre de 2014. El Dr. Antonio Sánchez Pereyra, coordinador de SciELO-México, aprobó el ingreso de la revista en aquel entonces, y nos puso bajo la tutela de la Lic. Patricia Garrido Villegas, quien nos impartió los conocimientos pertinentes y nos proporcionó el software necesario para concretar la tarea. Debido a que llegamos justo en el momento de la transición, primero fuimos instruidos en el marcaje de archivos HTML y luego, en el de XML, lo que de forma inevitable dilató el proceso. El apoyo del Dr. Sánchez y la Lic. Garrido ha sido constante y muy cercano, mostrando un interés entusiasta por ver incluida nuestra revista en su plataforma, en tanto representamos, de alguna manera, esta nueva vertiente de editores autónomos que buscan forjar.Estamos conscientes de que apenas hemos dado los primeros pasos y que la tarea por venir es ardua, pues además de los números recientes que ya están en preparación, quedan por marcar los de los primeros 10 años. Pese a que la UV ha atravesado por momentos sumamente complejos en los años recientes, lo que ha dificultado que se destinen los insumos que se requieren para desempeñar de manera óptima las múltiples tareas académicas que se desarrollan en nuestra casa de estudios, el ánimo no disminuye. 
Día con día refrendamos las convicciones que dieron origen a la CPU-e y el compromiso que hemos signado con nuestros lectores y autores por mantener y elevar nuestros estándares editoriales.Héctor Merino y Cynthia PalominoEditores
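As a rough illustration of the content-oriented tagging described above, the following Python sketch parses a reference marked up with the element names from the example (written here as plain XML with angle brackets) and retrieves individual fields. It is only an illustration of why such markup is easy to harvest, not SciELO's markup tooling or workflow.

```python
import xml.etree.ElementTree as ET

# A reference in the content-oriented style described above; element names
# follow the example in the editorial and are used here purely for illustration.
reference_xml = """
<ref id="r6" reftype="book">
  <authors role="nd">
    <pauthor><surname>Browne</surname><fname>M. W.</fname></pauthor>
    <pauthor><surname>Cudeck</surname><fname>R.</fname></pauthor>
  </authors>
  <date dateiso="19930000" specyear="1993">1993</date>
  <chptitle>Alternative ways of assessing model fit</chptitle>
  <source>Testing structural equation models</source>
  <pages>136-162</pages>
  <publoc>Estados Unidos de América</publoc>
  <pubname>Sage</pubname>
</ref>
"""

ref = ET.fromstring(reference_xml)

# Because every field carries its own tag, retrieval is unambiguous.
authors = [
    f"{p.findtext('surname')}, {p.findtext('fname')}"
    for p in ref.iter("pauthor")
]
print("Type:   ", ref.get("reftype"))
print("Authors:", "; ".join(authors))
print("Year:   ", ref.findtext("date"))
print("Chapter:", ref.findtext("chptitle"))
print("Source: ", ref.findtext("source"))
```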
35

Hucka, Michael, Frank T. Bergmann, Stefan Hoops, Sarah M. Keating, Sven Sahle, James C. Schaff, Lucian P. Smith, and Darren J. Wilkinson. "The Systems Biology Markup Language (SBML): Language Specification for Level 3 Version 1 Core". Journal of Integrative Bioinformatics 12, no. 2 (June 1, 2015): 382–549. http://dx.doi.org/10.1515/jib-2015-266.

Full text
Abstract
Computational models can help researchers to interpret data, understand biological function, and make quantitative predictions. The Systems Biology Markup Language (SBML) is a file format for representing computational models in a declarative form that can be exchanged between different software systems. SBML is oriented towards describing biological processes of the sort common in research on a number of topics, including metabolic pathways, cell signaling pathways, and many others. By supporting SBML as an input/output format, different tools can all operate on an identical representation of a model, removing opportunities for translation errors and assuring a common starting point for analyses and simulations. This document provides the specification for Version 1 of SBML Level 3 Core. The specification defines the data structures prescribed by SBML as well as their encoding in XML, the eXtensible Markup Language. This specification also defines validation rules that determine the validity of an SBML document, and provides many examples of models in SBML form. Other materials and software are available from the SBML project web site, http://sbml.org/.
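For a sense of what the XML encoding looks like in practice, here is a small Python sketch that reads a hand-written fragment shaped like an SBML Level 3 Version 1 Core model and lists its species. The fragment is trimmed for illustration and is not a complete, valid model; the namespace URI is the one defined in the specification.

```python
import xml.etree.ElementTree as ET

NS = "{http://www.sbml.org/sbml/level3/version1/core}"

# Trimmed, hand-written fragment for illustration only; a real SBML model
# carries more required attributes and elements than shown here.
fragment = """
<sbml xmlns="http://www.sbml.org/sbml/level3/version1/core" level="3" version="1">
  <model id="toy_pathway">
    <listOfSpecies>
      <species id="glucose" compartment="cytosol"/>
      <species id="ATP" compartment="cytosol"/>
    </listOfSpecies>
  </model>
</sbml>
"""

root = ET.fromstring(fragment)
model = root.find(NS + "model")
print("Model:", model.get("id"))
for species in model.iter(NS + "species"):
    print("Species:", species.get("id"), "in compartment", species.get("compartment"))
```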
36

Bleeker, J. "Standard Generalized Markup Language (SGML)". Toegepaste Taalwetenschap in Artikelen 28 (January 1, 1987): 154–81. http://dx.doi.org/10.1075/ttwia.28.14ble.

Full text
Abstract
The traditional way of creating and typesetting a manuscript hampers the necessary modernization of the production process and particularly the dissemination and accessibility of information. This is caused by the use of word-processing packages and the nature of typesetting instructions. Both word-processing codes and typesetting codes contain insufficient information, because they aim only at a single presentation of the text. Scientific publications, however, can be distributed in many different forms: on paper, in all possible layouts; in whole or in part via electronic means such as floppy disks, compact disks, data communication, etc. In addition, information should be accessible from many points of view. New electronic tools (i.e. microcomputers) and databases with advanced search software provide the technical means for this. The Standard Generalized Markup Language, the new ISO standard, is a method of recording texts in such a way that the aforementioned goals can be achieved. This method has two basic principles: (1) the descriptors of texts (called SGML tags) must be based on content and not on form; (2) the SGML tags used to describe texts must be defined in a document description. This rests on the principle that texts are structured independently of their purpose. It makes it possible to describe the elements of which a text consists, the order in which they must appear, and whether they are optional, obligatory, or repeatable. Describing the content of texts makes it possible to create conversions (via software) to a diversity of printed and electronic forms (distribution). It also becomes possible to search databases for, e.g., an article about a certain subject or written by an author from a particular institute or university (information retrieval).
37

Narjis Mezaal Shati and Ali Jassim Mohamed Ali. "Hiding Any Data File Format into Wave Cover". journal of the college of basic education 16, no. 69 (October 31, 2019): 1–10. http://dx.doi.org/10.35950/cbej.v16i69.4739.

Full text
Abstract
This study presents a steganography approach for hiding data files of various formats in wave-file covers. Least-significant-bit (LSB) insertion is used to embed ordinary computer files (such as graphics, executables (exe), sound, text, Hypertext Markup Language (HTML), etc.) in a wave file at a 2-bit hiding rate. The test results show good performance in hiding arbitrary data files in a wave cover.
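The embedding step described here is simple enough to sketch. The following Python fragment is an assumption-level illustration of 2-bit LSB substitution using the standard wave module, not the authors' implementation; it omits any length header, key, or extraction routine.

```python
import wave

def embed_2bit_lsb(cover_path, payload, stego_path):
    """Hide `payload` (bytes) in the 2 least-significant bits of each audio byte.

    Minimal sketch of the LSB idea described in the abstract; it assumes the
    cover file is large enough and applies no framing or encryption.
    """
    with wave.open(cover_path, "rb") as cover:
        params = cover.getparams()
        frames = bytearray(cover.readframes(cover.getnframes()))

    # Split every payload byte into four 2-bit chunks, most significant first.
    chunks = [(b >> shift) & 0b11 for b in payload for shift in (6, 4, 2, 0)]
    if len(chunks) > len(frames):
        raise ValueError("payload too large for this cover file")

    # Overwrite the two least-significant bits of successive audio bytes.
    for i, two_bits in enumerate(chunks):
        frames[i] = (frames[i] & 0b11111100) | two_bits

    with wave.open(stego_path, "wb") as stego:
        stego.setparams(params)
        stego.writeframes(bytes(frames))

# Example (file names are placeholders):
# embed_2bit_lsb("cover.wav", open("secret.html", "rb").read(), "stego.wav")
```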
38

Chu, Josey Y. M., William L. Palya, and Donald E. Walter. "Creating a hypertext markup language document for an information server". Behavior Research Methods, Instruments, & Computers 27, no. 2 (June 1995): 200–205. http://dx.doi.org/10.3758/bf03204732.

Full text
39

Dos Santos, Marcio Carneiro. "Métodos digitais e a memória acessada por APIs: desenvolvimento de ferramenta para extração de dados de portais jornalísticos a partir da WayBack Machine". Revista Observatório 1, no. 2 (December 8, 2015): 23. http://dx.doi.org/10.20873/uft.2447-4266.2015v1n2p23.

Full text
Abstract
We explore the possibility of automating data collection from web pages by applying custom code written in the Python programming language, using the specific syntax of HTML (Hypertext Markup Language) to locate and extract elements of interest such as links, text, and images. Automated data collection, also known as scraping, is an increasingly common practice in journalism. Starting from access to the digital repository www.web.archive.org, also known as the WayBack Machine, we develop a proof of concept of an algorithm able to retrieve, list, and offer basic analysis tools over data collected from the various versions of news portals over time.
KEYWORDS: Scraping. Python. Digital journalism. HTML. Memory.
Available at: http://opendepot.org/2682/
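As a minimal sketch of the kind of extraction the article describes (standard library only; the snapshot URL below is a placeholder in the web.archive.org format, and a real tool would add error handling and politeness delays), the following Python code downloads one archived snapshot and lists the links it contains.

```python
from html.parser import HTMLParser
from urllib.request import urlopen

class LinkCollector(HTMLParser):
    """Collects href attributes from <a> tags, as a stand-in for the
    link-extraction step described in the abstract."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# Illustrative snapshot URL: timestamp and target site are placeholders.
snapshot = "http://web.archive.org/web/20100101000000/http://example.com/"
html = urlopen(snapshot).read().decode("utf-8", errors="replace")

parser = LinkCollector()
parser.feed(html)
print(f"{len(parser.links)} links found in this snapshot")
for href in parser.links[:10]:
    print(href)
```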
40

Manggopa, Hiskia Kamang, Christine Takarina Meitty Manoppo, Peggy Veronica Togas, Alfrina Mewengkang, and Johan Reimon Batmetan. "Web-Based Learning Media Using Hypertext Markup Language as Course Materials". Jurnal Pendidikan Teknologi dan Kejuruan 25, no. 1 (April 10, 2019): 116–23. http://dx.doi.org/10.21831/jptk.v25i1.23469.

Full text
Abstract
Recently, numerous web-based applications with attractive user interfaces, new and advanced features, and friendly operation have been rapidly developed and widely used, particularly in education. This study aims to produce web-based learning media that are suitable for delivering the material of a web programming course, whose subject matter is the Hypertext Markup Language (HTML). The study was conducted at Manado State University, in the information and communication technology study program, and employs the research and development model. The data were obtained with four questionnaires: (1) a feasibility test by the lecturer, (2) a feasibility test by media experts, (3) a feasibility test by materials experts, and (4) a feasibility test by students. Based on the experimental results, the developed web-based learning media can be considered useful: it is feasible to use and effectively improves the quality of learning.
41

Fedorchuk, A., O. Usata, and O. Nakonechna. "WEB DESIGN AND WEB PROGRAMMING IN THE MODERN INTERNET WORLD". Municipal economy of cities 6, no. 180 (December 4, 2023): 12–20. http://dx.doi.org/10.33042/2522-1809-2023-6-180-12-20.

Full text
Abstract
Modern possibilities for web content development are changing as web technologies and programming tools improve. Web technologies originated in the early 1990s, when the Hypertext Transfer Protocol (HTTP) and the Hypertext Markup Language (HTML) made a decisive contribution to the development of the Internet. The evolution of the web, driven by scientific and technological progress in information technology, has opened up new opportunities for web programming and led to the emergence of programming languages, frameworks, cascading style sheets, and hypertext markup languages. This research aims to study and analyse the key aspects of web programming and web design in combination with the modern features of the Internet environment. To analyse web design, the article considers the Bootstrap framework, which combines the HTML5 hypertext markup language, CSS3 cascading style sheets, and the JavaScript programming language. From the analysis of the two sides of web development, front end and back end, the presentation level (front-end development) corresponds to the client side, with which the user interacts and where dynamic elements can be added to an HTML page with JavaScript while the visual appearance is defined with CSS; the application layer (back-end development) corresponds to the server side, with which the client does not interact directly because it is hidden. Using the modern capabilities of a programming language, its code, and a set of auxiliary tools, the paper presents a development algorithm in which each stage comprises separate processes and operations that cannot exist independently of one another. Tools for designing and laying out websites with modern technologies can thus greatly simplify the development process and produce higher-quality web content. Keywords: Web, web environment, web technologies, front-end, framework, Bootstrap, HTML5, CSS3, JavaScript.
42

Nuriev, Marat Gumerovich, Elena Semenovna Belashova, and Konstantin Alekseevich Barabash. "Markdown File Converter to LaTeX Document". Программные системы и вычислительные методы, no. 1 (January 2023): 1–12. http://dx.doi.org/10.7256/2454-0714.2023.1.39547.

Full text
Abstract
Common text editors such as Microsoft Word, Notepad++, and others are cumbersome. Despite their extensive functionality, they do not eliminate the risk of converting a document incorrectly, for example when the same Word files are opened in older or, conversely, newer versions of Microsoft Word. A way out is to use markup languages, which allow text blocks to be marked up so that they can be presented in the desired style. Two such languages are currently very popular: LaTeX (a set of macro extensions of the TeX typesetting system) and Markdown (a lightweight markup language designed to denote formatting in plain text). The question of converting a Markdown document into a LaTeX document is therefore relevant. Various tools exist to convert Markdown files to LaTeX documents, such as the Pandoc library, Markdown.lua, Lunamark, and others, but most of them include redundant steps when generating the output document. This paper presents a solution that integrates a Markdown file into a LaTeX document, which can potentially reduce the time needed to generate the output document compared with existing solutions. The developed Markdown-to-LaTeX converter generates the output document automatically and reduces the possibility of errors that arise when text is converted manually from Markdown to LaTeX.
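To make the conversion idea concrete, here is a toy Python sketch that maps a small subset of Markdown (ATX headings, bold, italics, inline code) onto LaTeX commands. It illustrates the general approach only and is not the converter developed in the paper.

```python
import re

def markdown_to_latex(md):
    """Convert a small subset of Markdown to LaTeX.

    Handles only ATX headings, bold, italics, and inline code; a real
    converter must also deal with lists, tables, links, code blocks, etc.
    """
    lines = []
    for line in md.splitlines():
        heading = re.match(r"^(#{1,3})\s+(.*)$", line)
        if heading:
            level = {1: "section", 2: "subsection", 3: "subsubsection"}[len(heading.group(1))]
            line = f"\\{level}{{{heading.group(2)}}}"
        line = re.sub(r"\*\*(.+?)\*\*", r"\\textbf{\1}", line)  # bold before italics
        line = re.sub(r"\*(.+?)\*", r"\\textit{\1}", line)
        line = re.sub(r"`(.+?)`", r"\\texttt{\1}", line)
        lines.append(line)
    return "\n".join(lines)

print(markdown_to_latex("# Title\n\nSome **bold** and *italic* text with `code`."))
```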
43

Luterbach, Kenneth J., and Diane Rodriguez. "Practicing Pronunciation: Will Voice XML do for language learners what HTML did for collaborators?" EuroCALL Review 11 (March 15, 2007): 11. http://dx.doi.org/10.4995/eurocall.2007.16368.

Full text
Abstract
This paper considers the utility of the Voice Extensible Markup Language (Voice XML) for language learning and, in particular, whether Voice XML might become as popular as HTML. First, it discusses the surprising popularity of HTML, which provides context for considering the potential of Voice XML. Second, it discusses two voice scripts in order to demonstrate Voice XML tags and features. The first example script concerns voice synthesis only, whereas the second uses both voice synthesis and voice recognition. To gain insight into the utility of Voice XML for instructional applications, the second voice script can be accessed by language learners to practice pronouncing words in English. Technically, each voice script is a text file containing Voice XML tags. Once the file containing a Voice XML script is stored on a web server and a telephone number is linked to the file, a language learner can use a telephone to practice pronouncing words. Those implementation details are considered in the third section of the paper, which identifies one particular system that permits developers to test and deploy Voice XML scripts free of charge. The article concludes with a discussion of issues concerning the utility of Voice XML relative to HTML.
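A synthesis-only script of the kind described as the first example is very short. The sketch below writes out a generic hello-world-style VoiceXML 2.0 document with Python; the prompt text and file name are invented for illustration, and deploying it still requires a voice platform and a linked telephone number.

```python
# Minimal synthesis-only VoiceXML 2.0 document; prompt text and file name
# are illustrative and are not taken from the paper's scripts.
VXML_DOC = """<?xml version="1.0" encoding="UTF-8"?>
<vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">
  <form>
    <block>
      <prompt>Please repeat after me: pronunciation.</prompt>
    </block>
  </form>
</vxml>
"""

with open("practice.vxml", "w", encoding="utf-8") as f:
    f.write(VXML_DOC)
print("Wrote practice.vxml")
```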
44

Tin Yuen, Lok, Yue Wefield Lee, and Sau Mui Lau. "From unstructured HTML to structured XML: how XML supports financial knowledge management on the Internet". Library Hi Tech 19, no. 3 (September 1, 2001): 242–56. http://dx.doi.org/10.1108/eum0000000005887.

Full text
Abstract
Reports the benefits of using extensible markup language (XML) to support knowledge management of financial information. Current search engines cannot provide sufficient performance to support users of financial information, which includes both non‐structured items and well‐structured items. For investors, making a high‐quality decision sometimes requires both. XML can help by providing tags to create structure. XML provides a vendor‐neutral approach. XML authors can create arbitrary tags to describe the format or structure of data, and are not restricted to the tags in the specification for HTML. A prototype XML‐based Electronic Financial Filing System (ELFFS‐XML) has been developed to illustrate how to apply XML to model and add value to traditional HTML‐based financial information by cross‐linking related information from different data sources. Compares the functionality of XML‐based ELFFS with the original HTML‐based ELFFS and SEDAR, an electronic filing system used in Canada, and recommends some directions for future development of similar electronic filing systems.
45

Mitrevski, Blagoj, Tiziano Piccardi, and Robert West. "WikiHist.html: English Wikipedia's Full Revision History in HTML Format". Proceedings of the International AAAI Conference on Web and Social Media 14 (May 26, 2020): 878–84. http://dx.doi.org/10.1609/icwsm.v14i1.7353.

Full text
Abstract
Wikipedia is written in the wikitext markup language. When serving content, the MediaWiki software that powers Wikipedia parses wikitext to HTML, thereby inserting additional content by expanding macros (templates and modules). Hence, researchers who intend to analyze Wikipedia as seen by its readers should work with HTML, rather than wikitext. Since Wikipedia's revision history is publicly available exclusively in wikitext format, researchers have had to produce HTML themselves, typically by using Wikipedia's REST API for ad-hoc wikitext-to-HTML parsing. This approach, however, (1) does not scale to very large amounts of data and (2) does not correctly expand macros in historical article revisions. We solve these problems by developing a parallelized architecture for parsing massive amounts of wikitext using local instances of MediaWiki, enhanced with the capacity of correct historical macro expansion. By deploying our system, we produce and release WikiHist.html, English Wikipedia's full revision history in HTML format. We highlight the advantages of WikiHist.html over raw wikitext in an empirical analysis of Wikipedia's hyperlinks, showing that over half of the wiki links present in HTML are missing from raw wikitext, and that the missing links are important for user navigation. Data and code are publicly available at https://doi.org/10.5281/zenodo.3605388.
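The gap between wikitext links and HTML links that the paper quantifies can be illustrated with a toy comparison in Python (the revision text and HTML below are invented; real analyses would run over the WikiHist.html dump and the corresponding wikitext): links produced only by template expansion show up in the HTML but not in the raw wikitext.

```python
import re
from html.parser import HTMLParser

def wikitext_links(wikitext):
    """Internal links written directly in the wikitext, i.e. [[Target|label]]."""
    return [m.group(1).strip() for m in re.finditer(r"\[\[([^\]|#]+)", wikitext)]

class HtmlLinks(HTMLParser):
    """Anchors present in the rendered HTML, including those that only
    appear after templates and modules have been expanded."""
    def __init__(self):
        super().__init__()
        self.targets = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href", "")
            if href.startswith("/wiki/"):
                self.targets.append(href[len("/wiki/"):])

# Invented toy revision: the link to "France" comes only from the infobox template.
wikitext = "{{Infobox Paris}} '''Paris''' is a city in [[Europe]]."
html = ('<p><a href="/wiki/France">France</a> ... '
        '<b>Paris</b> is a city in <a href="/wiki/Europe">Europe</a>.</p>')

parser = HtmlLinks()
parser.feed(html)
print("Links in wikitext:", wikitext_links(wikitext))  # ['Europe']
print("Links in HTML:    ", parser.targets)            # ['France', 'Europe']
```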
46

Hussein, Angham Khalid. "Chat network study and design using HTML and PHP web programming". Indonesian Journal of Electrical Engineering and Computer Science 27, no. 1 (July 1, 2022): 442. http://dx.doi.org/10.11591/ijeecs.v27.i1.pp442-446.

Full text
Abstract
Chat rooms have become part of everyday life, used to share and exchange information such as text, pictures, and messages: the latest news and related images in the media, instant messaging on the Internet, customer contact in business, and jokes, music, and video for entertainment. In this paper, a chat website is designed using the Hypertext Markup Language (HTML) and the Personal Home Page (PHP) web programming languages, with security and authentication features added to keep user privacy and personal information protected; an Apache HTTP server is used to test it.
47

Vacharaskunee, Sutheetutt, and Sarun Intakosum. "A Method of Recommendation the Most Used XML Tags". Advanced Materials Research 931-932 (May 2014): 1353–59. http://dx.doi.org/10.4028/www.scientific.net/amr.931-932.1353.

Full text
Abstract
Processing of large data sets, known today as big data processing, is still a problem without a well-defined solution. The data can be both structured and unstructured. For the structured part, the eXtensible Markup Language (XML) is a major tool that lets document owners freely describe and organize their data using their own markup tags. One major problem behind this freedom, however, lies in retrieving the data: the same or similar information described with different tags or different structures may not be retrieved if the query statements contain keywords different from those used in the markup tags. The best way to solve this problem is to specify a standard set of markup tags for each problem domain. Creating such a standard set manually requires a great deal of work, is time consuming, and may yield terms that are not acceptable to everyone. This research proposes a model for a new technique, XML Tag Recommendation (XTR), that aims to solve this problem. The technique applies the idea of case-based reasoning (CBR) by collecting the most used tags in each domain as a case. These tags come from collections of related words in WordNet, and WordCount, a website that reports word frequencies, is used to choose the most common one. The input (problem) to the XTR system is an XML document containing the tags specified by the document owner; the solution is a set of recommended tags, the most used ones for the document's problem domain. Document owners are free to change or not change the tags in their documents and can provide feedback to the XTR system.
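A heavily simplified Python sketch of the underlying idea follows: it only counts tag usage across a toy corpus and picks the most frequent name within a group of candidate tags; the WordNet lookup, WordCount frequencies, and case-based reasoning of the actual XTR system are left out.

```python
import xml.etree.ElementTree as ET
from collections import Counter

def tag_frequencies(xml_documents):
    """Count how often each element name is used across a collection of
    XML documents from one problem domain."""
    counts = Counter()
    for doc in xml_documents:
        root = ET.fromstring(doc)
        counts.update(elem.tag for elem in root.iter())
    return counts

def recommend(candidates, counts):
    """Pick the most frequently used name among a group of candidate tags
    (e.g. synonyms a document owner might have chosen)."""
    return max(candidates, key=lambda tag: counts.get(tag, 0))

# Toy corpus with synonymous tags chosen by different document owners.
corpus = [
    "<book><author>Smith</author><price>10</price></book>",
    "<book><writer>Jones</writer><price>12</price></book>",
    "<book><author>Lee</author><cost>9</cost></book>",
]
counts = tag_frequencies(corpus)
print(recommend({"author", "writer"}, counts))  # -> author
print(recommend({"price", "cost"}, counts))     # -> price
```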
48

Newman, Steven E. "Incorporating Hypertext Applications into Horticulture Educational Programs". HortScience 30, no. 4 (July 1995): 909F–909. http://dx.doi.org/10.21273/hortsci.30.4.909f.

Full text
Abstract
Hypertext applications have grown from highlighted index-referencing tools used in "help" windows to sophisticated file sharing among many computers linked via the World Wide Web (WWW). Software such as Mosaic makes this link easy and convenient by using the Hypertext Markup Language (HTML). Most universities and many companies have installed WWW servers and have provided disk space for general use. Horticulture departments and many botanical gardens across the country and all over the world are adapting to this technology by providing access to extension information sheets, newsletters, and selected manuscripts. Pesticide manufacturers are also establishing WWW servers with the intent of providing rapid access to pesticide labels and material safety data sheets (MSDS). For local classroom use, HTML on a WWW server can provide an innovative and alternative means of delivering lecture material.
49

Sun, Ying, Jing Chen, and Jian Song. "Research on Medical Information Cross-Regional Integration Scheme". Applied Mechanics and Materials 496-500 (January 2014): 2182–87. http://dx.doi.org/10.4028/www.scientific.net/amm.496-500.2182.

Full text
Abstract
Cross-regional sharing of medical information is a research hotspot in current regional health informatization. This paper puts forward a system architecture based on the IHE-XDS (Integrating the Healthcare Enterprise, Cross-Enterprise Document Sharing) technical framework that meets the requirements of ebXML (Electronic Business using eXtensible Markup Language) and supports distributed access to patient medical documents. By studying the mapping relationships between the IHE-XDS and ebXML information models, the authors implemented the document registration and query services and achieved document sharing.
50

Kahn, C. E. "A Generalized Language for Platform-Independent Structured Reporting". Methods of Information in Medicine 36, no. 03 (July 1997): 163–71. http://dx.doi.org/10.1055/s-0038-1636826.

Full text
Abstract
Structured reporting systems allow health-care workers to record observations using predetermined data elements and formats. The author developed the Data-entry and Reporting Markup Language (DRML) to provide a generalized representational language for describing concepts to be included in structured reporting applications. DRML is based on the Standard Generalized Markup Language (SGML), an internationally accepted standard for document interchange. The use of DRML is demonstrated with the SPIDER system, which uses public-domain Internet technology for structured data entry and reporting. SPIDER uses DRML documents to create structured data-entry forms, outline-format textual reports, and datasets for analysis of aggregate results. Applications of DRML include its use in radiology results reporting and a health status questionnaire. DRML allows system designers to create a wide variety of clinical reporting applications and survey instruments, and helps overcome some of the limitations seen in earlier structured reporting systems.
